S. Sasi, Srividya Bharadwaj Venkata Subbu, Premkumar Manoharan, L. Abualigah
This research study introduces an ID-based identity authentication protocol that utilizes an enhanced elliptic curve digital signature algorithm, a cryptographic method built on elliptic curve cryptography (ECC). The protocol enhances the Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), a pioneering protocol explicitly defined for deep-space communications. This study employs both the reliable and unreliable modes of the CFDP protocol. To make data transactions more secure, two key security risks are effectively mitigated by applying the proposed enhanced ECC algorithm over the ternary Galois field. First, it thwarts impersonation by a harmful entity during a passive attack. Second, it prevents masquerade attacks, further reinforcing the security of space data transmission. This ID-based authentication protocol therefore offers a significant advancement in protecting deep-space communications, preserving the integrity of data exchanged across vast distances.
{"title":"Design and implementation of secured file delivery protocol using enhanced elliptic curve cryptography for class I and class II transactions","authors":"S. Sasi, Srividya Bharadwaj Venkata Subbu, Premkumar Manoharan, L. Abualigah","doi":"10.32629/jai.v6i3.740","DOIUrl":"https://doi.org/10.32629/jai.v6i3.740","url":null,"abstract":"This research study introduces an ID-based identity authentication protocol that utilizes the enhanced elliptic curve digital signature algorithm, a cryptographic method developed on elliptic curve cryptography. The protocol enhances the Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), a pioneering protocol explicitly defined for distant space communications. This study employs both dependable and uncertain modes of the CFDP protocol. To make more secure data transactions, two key security risks are effectively mitigated in this research as a result of applying the proposed enhanced elliptic curve cryptography algorithm (ECC) over the ternary galois field. First, it thwarts the impersonation of a harmful entity during a passive attack. Second, it prevents masquerade attacks, further reinforcing the security of space data transmission. This ID-based authentication protocol, therefore, offers a significant advancement in protecting far-space communications, optimizing the integrity of data exchanged across vast distances.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46548002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
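The paper's enhanced ECDSA operates over a ternary Galois field GF(3^m); as a hedged illustration of the elliptic-curve group operations any ECDSA variant builds on, the sketch below uses a small prime field instead (the curve, prime, and base point are illustrative assumptions, not taken from the paper):

```python
# Sketch of the elliptic-curve group operations underlying ECDSA.
# Illustrative parameters only: y^2 = x^3 + 2x + 3 over GF(97).
p, a, b = 97, 2, 3

def ec_add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Scalar multiplication k*P by double-and-add (the core of key generation)."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (3, 6)   # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
```

In a real scheme a private key would be a random scalar k and the public key ec_mul(k, G) on a standardized curve; the paper's GF(3^m) arithmetic would replace the prime-field arithmetic shown here.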
Shrinath M. Patil-Mangore, Niranjan L. Shegokar, Nand Jee Kanu
Grinding wheel condition monitoring is an important step towards predicting grinding wheel faults. It helps define techniques that minimize grinding wheel wear and ultimately extend wheel life. Condition monitoring is done by two approaches: (i) direct and (ii) indirect. Direct monitoring employs optical sensors and computer vision techniques, while indirect monitoring relies on signal analysis such as acoustic emission (AE), vibration, and cutting force. Methods implemented for grinding wheel monitoring in the published literature are reviewed. The review is organized in five sections: (a) process parameter measurement, (b) data acquisition systems, (c) signal analysis techniques, (d) feature extraction, and (e) classification methods. In today’s era of Industry 4.0, a large amount of manufacturing data is generated in industry, so conventional machine learning techniques are insufficient for real-time condition monitoring of grinding wheels. However, deep learning techniques such as artificial neural networks (ANNs) and convolutional neural networks (CNNs) have shown prediction accuracy above 99%.
{"title":"Conditioning and monitoring of grinding wheels: A state-of-the-art review","authors":"Shrinath M. Patil-Mangore, Niranjan L. Shegokar, Nand Jee Kanu","doi":"10.32629/jai.v6i3.622","DOIUrl":"https://doi.org/10.32629/jai.v6i3.622","url":null,"abstract":"Grinding wheel condition monitoring is an important step towards the prediction of grinding wheel faulty conditions. It is beneficial to define techniques to minimize the wear of the grinding wheels and finally enhance the life of the grinding wheels. Grinding wheel condition monitoring is done by two techniques such as (i) direct and (ii) indirect. Direct monitoring employs optical sensors and computer vision techniques, and indirect monitoring is done by signal analysis such as acoustic emission (AE), vibration, cutting force, etc. Methods implemented for grinding wheel monitoring in the published research papers are reviewed. The review is compiled in five sections: (a) process parameters measurement, (b) data acquisition systems, (c) signal analysis techniques, (d) feature extraction, and (e) classification methods. In today’s era of Industry 4.0, a large amount of manufacturing data is generated in the industry. So, conventional machine learning techniques are insufficient to analyze real-time conditioning monitoring of the grinding wheels. 
However, deep learning techniques such as artificial neural network (ANN), convolutional neural network (CNN) have shown prediction accuracy above 99%.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44457232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The performance of healthcare systems, particularly in disease diagnosis and treatment planning, depends on the segmentation of medical images. Fuzzy c-means (FCM) is one of the most widely used clustering techniques for image segmentation due to its simplicity and effectiveness. FCM, however, has the disadvantages of being noise-sensitive, quickly settling on locally optimal solutions, and being sensitive to initial values. To get around these constraints, this paper proposes a fuzzy c-means clustering improved with a nature-inspired raindrop optimizer for lesion extraction in brain magnetic resonance (MR) images. In the preprocessing stage, possible noises in a digital image, such as speckle and Gaussian noise, are eliminated by a hybrid filter, a combination of Gaussian, mean, and median filters. This paper presents a comparative analysis of FCM clustering and the FCM-raindrop optimization (FCM-RO) approach. Algorithm performance is evaluated on images subjected to the various noises that may affect an image during transmission and storage. The proposed FCM-RO approach is comparable to other methods now in use. The suggested system detects lesions with a partition coefficient of 0.9505 and a partition entropy of 0.0890. Brain MR images are analyzed using MATLAB software to find and extract malignancies. Image data retrieved from the public data source Kaggle are used to assess the system’s performance.
{"title":"An improved fuzzy c-means-raindrop optimizer for brain magnetic resonance image segmentation","authors":"Bindu Puthentharayil Vikraman, Jabeena A. Afthab","doi":"10.32629/jai.v6i3.973","DOIUrl":"https://doi.org/10.32629/jai.v6i3.973","url":null,"abstract":"The performance of healthcare systems, particularly regarding disease diagnosis and treatment planning, depends on the segmentation of medical images. Fuzzy c-means (FCM) is one of the most widely used clustering techniques for image segmentation due to its simplicity and effectiveness. FCM, on the other hand, has the disadvantages of being noise-sensitive, quickly settling on local optimal solutions, and being sensitive to initial values. This paper suggests a fuzzy c-means clustering improved with a nature-inspired raindrop optimizer for lesion extraction in brain magnetic resonance (MR) images to get around this constraint. In the preprocessing stage, the possible noises in a digital image, such as speckles, gaussian, etc., are eliminated by a hybrid filter—A combination of Gaussian, mean, and median filters. This paper presents a comparative analysis of FCM clustering and FCM-raindrop optimization (FCM-RO) approach. The algorithm performance is evaluated for images subjected to various possible noises that may affect an image during transmission and storage. The proposed FCM-RO approach is comparable to other methods now in use. The suggested system detects lesions with a partition coefficient of 0.9505 and a partition entropy of 0.0890. Brain MR images are analyzed using MATLAB software to find and extract malignancies. 
Image data retrieved from the public data source Kaggle are used to assess the system’s performance.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45048357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
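The raindrop optimizer itself is not specified in the abstract, but the baseline FCM it improves upon, along with the partition coefficient used as a validity measure, can be sketched in a few lines (the fuzzifier m = 2 is an assumed default, not taken from the paper):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, eps=1e-5, seed=0):
    """Standard fuzzy c-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzy-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))                     # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U

def partition_coefficient(U):
    """Bezdek's partition coefficient: 1/c for random memberships, 1 for crisp ones."""
    return float((U ** 2).sum() / U.shape[0])
```

A value near 1, like the paper's reported 0.9505, indicates memberships are close to crisp, i.e., a well-resolved segmentation.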
R. M. Naik, H. M. T. Gadiyar, M. B. Kumar, B. K. Jeevitha, G. S. Thyagaraju, U. J. Ujwal, K. Arjun, S. M. Manasa, S. Avinash, J. A. Kumar, T. K. Sowmya, K. P. Uma, A. R. Ramaprasad
In various cloud computing models, data must be protected, and accessing these data securely is important. The cryptographic keys used for encryption and decryption must themselves be managed and protected from disclosure on public networks such as wireless and cloud environments. Ciphertext-Policy Attribute-Based Encryption (CP-ABE), which provides effective data governance and key management, is utilized for cloud data encryption. This article elaborates a scheme combining Ciphertext-Policy Attribute-Based Encryption with Proxy Re-Encryption (CP-ABE-PRE). Ideally, the encrypted data should be transformed so that it may be unlocked with new keys, without an intermediate decryption step that would allow the cloud provider to read the plaintext; this process is known as data re-encryption. The proposed technique reduces the computational and communication burden on users connecting to the cloud from resource-constrained devices. Experimental results comparing the scheme against the current CP-ABE algorithm demonstrate good encryption and decryption times. Additionally, CP-ABE offers key distribution and administration options for cloud data. CP-ABE with Proxy Re-Encryption proves highly efficient, providing verifiability and fairness for cloud data users while also addressing the revocation problem and resisting collusion.
{"title":"Key management and access control based on combination of cipher text-policy attribute-based encryption with Proxy Re-Encryption for cloud data","authors":"R. M. Naik, H. M. T. Gadiyar, M. B. Kumar, B. K. Jeevitha, G. S. Thyagaraju, U. J. Ujwal, K. Arjun, S. M. Manasa, S. Avinash, J. A. Kumar, T. K. Sowmya, K. P. Uma, A. R. Ramaprasad","doi":"10.32629/jai.v6i3.748","DOIUrl":"https://doi.org/10.32629/jai.v6i3.748","url":null,"abstract":"In various cloud computing models, the data need to be protected and to access these data in secure manner is important. The cryptographic key which is used to secure these data using both in the encryption as well as in decryption it is mandatory to manage these keys to secure these keys by disclosing in public networks such as any wireless and cloud environment. Utilizing Ciphertext Policy Attribute-based Encryption (CP-ABE), which provides effective data governance and key management, for cloud data encryption. The work based on the combination of Cipher Text-Policy Attribute based Encryption and Proxy Re-Encryption is elaborated in the article (CP-ABE-PRE). The encrypted data should ideally be transformed such that it may be unlocked with new keys, without an intermediate decryption step that would allow the cloud provider to read the plaintext this process is known as data re-encryption. The computational and communication burden on users connecting to the cloud from resource constrained devices can be reduced using the proposed technique. The experimental results show for Cipher Text-Policy Attribute-Based Encryption are compared to the current algorithm (CP-ABE) demonstrate good results in encryption and decryption times. Additionally, the CP-ABE offers crucial distribution and administration options for cloud data. 
CP-ABE with Proxy Re-Encryption does appear to be highly efficient which proves verifiability and fairness for cloud data users to which also address revocation problem as well as collusion resistant model.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43099718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a hybrid opposition-based chaotic little golden-mantled flying fox algorithm and White-winged chough search optimization algorithm (HLFWC) is applied to solve the real power loss reduction problem. The key objectives of the paper are real power loss reduction, voltage deviation minimization, and voltage stability expansion. The proposed little golden-mantled flying fox algorithm is designed based on the behavior of the little golden-mantled flying fox. Most species have a single offspring at a time after a gestation period; this low reproductive output means that when populations suffer losses, their numbers are slow to rebound. In the White-winged chough search optimization algorithm, increasing the weight element within a certain range significantly enlarges the search region. In a spiral search, the position of any White-winged chough can vary in multiple dimensions to cover the search region, particularly in the projected problem. The HLFWC algorithm is constructed by integrating the actions of the little golden-mantled flying fox and the White-winged chough; through the hybridization of both algorithms, exploration and exploitation are balanced throughout the procedure. The proposed HLFWC algorithm is validated on the IEEE 30-bus and 57-bus systems. From the simulation results, it is observed that real power loss reduction, voltage deviation minimization, and voltage stability expansion have been achieved.
{"title":"Novel scientific design of hybrid opposition based—Chaotic little golden-mantled flying fox, White-winged chough search optimization algorithm for real power loss reduction and voltage stability expansion","authors":"L. Kanagasabai","doi":"10.32629/jai.v6i3.680","DOIUrl":"https://doi.org/10.32629/jai.v6i3.680","url":null,"abstract":"In this paper hybrid opposition based—Chaotic little golden-mantled flying fox algorithm and White-winged chough search optimization algorithm (HLFWC) is applied to solve the loss dwindling problem. Key objective of the paper is real power loss reduction, voltage deviation minimization and voltage stability expansion. Proposed little golden-mantled flying fox algorithm is designed based on the deeds of the little golden-mantled flying fox. Maximum classes have single progenies at a period afterwards of prenatal period. This little procreative production means that when populace forfeiture their figures are deliberate to ricochet. In White-winged chough search optimization algorithm magnifying the encumbrance element in a definite assortment will pointedly enlarge the exploration region. In a coiled exploration, the position of any White-winged chough can differ in numerous scopes to cover the exploration region, predominantly in the projected problem. Hybrid opposition based—Chaotic little golden-mantled flying fox algorithm and White-winged chough search optimization algorithm (HLFWC) is accomplished by integrating the actions of little golden-mantled flying fox and White-winged chough. Through the hybridization of both algorithms exploration and exploitation has been balanced throughout the procedure. Proposed hybrid opposition based—Chaotic little golden-mantled flying fox algorithm and White-winged chough search optimization algorithm (HLFWC) is corroborated in IEEE 30 and 57 systems. 
From the simulation results it has been observed that real power loss reduction, voltage deviation minimization and voltage stability expansion has been achieved.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43442184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
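The opposition-based component of such hybrids can be illustrated generically: a candidate x in [lo, hi] is paired with its opposite lo + hi − x, and the fitter half of the combined pool is kept. The sketch below shows only this initialization step, with a sphere function standing in for the power-loss objective (both the function and the parameter values are assumptions for illustration):

```python
import numpy as np

def opposition_init(obj, lo, hi, pop_size, dim, seed=0):
    """Opposition-based initialization: evaluate each random candidate and
    its opposite point (lo + hi - x), keep the fitter pop_size individuals."""
    rng = np.random.default_rng(seed)
    X = lo + (hi - lo) * rng.random((pop_size, dim))
    X_opp = lo + hi - X                        # opposite population
    both = np.vstack([X, X_opp])
    fitness = np.apply_along_axis(obj, 1, both)
    best = np.argsort(fitness)[:pop_size]      # keep the best (minimization)
    return both[best]

# Sphere function as a stand-in for the real power-loss objective
sphere = lambda x: float(np.sum(x ** 2))
pop = opposition_init(sphere, -5.0, 5.0, pop_size=10, dim=4)
```

Starting from this fitter pool, the main loop of the hybrid metaheuristic then alternates its exploration and exploitation moves.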
The introduction of the internet has made security a top concern, and the history of security informs the creation of better security tools. Several security concerns arise simply from the way the internet was set up. Many businesses use firewalls and encryption techniques to protect themselves online; a business can design an “intranet” that is protected from potential risks while being linked to the internet. For increased security, better encryption techniques are needed to preserve data integrity. For better encryption, it is essential to take into account several issues, including key size, block size, and encoding ratio. Documents transferred to secondary storage devices (such as a hard disk or SD card) must be encrypted to provide security and prevent unwanted access. If a key is kept on secondary storage with the document, it is quite simple to decrypt it. It is preferable to derive encryption keys from user passwords when encrypting or decrypting the file rather than keeping the key together with the document.
{"title":"Blowfish based encryption model in real cloud environment","authors":"R. Walia, P. Garg, Manish Kumar","doi":"10.32629/jai.v6i3.695","DOIUrl":"https://doi.org/10.32629/jai.v6i3.695","url":null,"abstract":"The introduction of the internet has made security a top anxiety. And the preceding of security permits for improved knowledge in the creation of security tools. Several concerns about safekeeping might have arisen just from the way the internet was set up. Many businesses use firewalls and encryption techniques to protect themselves online. Businesses can design an “intranet” that is protected from potential risks while being linked to the internet. For increased security, better encryption techniques are needed to retain data integrity. For better encryption, it is essential to consider into explanation a few of issues, including key size, chunk size, and encoding ratio. Documents transferred to subordinate storage strategies (such as Hard disk or SD card) must be encrypted to provide security and prevent unwanted access. If a key is kept on secondary storage with the document, it is quite simple to decrypt it. It is desirable to create encryption keys from operator passwords when encoding or decoding the folder rather than keeping the key together with the document.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44211167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
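The abstract's recommendation, deriving the encryption key from the user's password instead of storing it beside the document, can be sketched with a standard password-based key derivation function. PBKDF2 is used here as one common choice, not necessarily the authors'; the iteration count and key length are illustrative assumptions:

```python
import hashlib, os

def derive_key(password: str, salt: bytes, length: int = 32) -> bytes:
    """Derive an encryption key from a user password with PBKDF2-HMAC-SHA256.
    Blowfish accepts keys from 4 to 56 bytes; 32 bytes is used here."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000,
                               dklen=length)

salt = os.urandom(16)   # stored next to the ciphertext; the salt is not secret
key = derive_key("correct horse battery staple", salt)
```

The same password and salt always reproduce the same key, so nothing secret needs to be stored alongside the encrypted file.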
Surjeet Dalal, U. Lilhore, Sarita Simaiya, Vivek Jaglan, Anand Mohan, Sachin Ahuja, Akshat Agrawal, Martin Margala, Prasun Chakrabarti
In coronary artery disease, plaque builds up in the arteries that carry oxygen-rich blood to the heart. Plaque in the arteries can constrict or impede blood flow, leading to a heart attack. Shortness of breath and soreness in the chest are common symptoms. Lifestyle modifications, medication, and potentially surgery are all options for treatment. This paper presents a Hybrid Boosted C5.0 model to predict coronary artery disease more precisely. The Hybrid Boosted C5.0 model is formed by combining the C5.0 decision tree with boosting. Boosting is a supervised machine learning method that leverages numerous weak models to construct a more robust and powerful model. The proposed model and several well-known existing machine learning models, i.e., decision tree, AdaBoost, and random forest, were implemented using an online coronary artery disease dataset of 6611 patients and compared on various performance measures. Experimental analysis shows that the proposed model achieved an accuracy of 91.62% in the training phase and 81.33% in the testing phase. The AUC values achieved in the training and testing phases are 0.957 and 0.88, respectively. The Gini values achieved in the training and testing phases are 0.914 and 0.759, respectively, far better than those of the existing methods.
{"title":"A precise coronary artery disease prediction using Boosted C5.0 decision tree model","authors":"Surjeet Dalal, U. Lilhore, Sarita Simaiya, Vivek Jaglan, Anand Mohan, Sachin Ahuja, Akshat Agrawal, Martin Margala, Prasun Chakrabarti","doi":"10.32629/jai.v6i3.628","DOIUrl":"https://doi.org/10.32629/jai.v6i3.628","url":null,"abstract":"In coronary artery disease, plaque builds up in the arteries that carry oxygen-rich blood to the heart. Having plaque in the arteries can constrict or impede blood flow, leading to a heart attack. Shortness of breath and soreness in the chest are common symptoms. Lifestyle modifications, medication, and potentially surgery are all options for treatment. In coronary artery disease, plaque builds up in the arteries that carry oxygen-rich blood to the heart. Having plaque in the arteries can constrict or impede blood flow, leading to a heart attack. Shortness of breath and soreness in the chest are common symptoms. Lifestyle modifications, medication, and potentially surgery are all options for treatment. This paper presents a Hybrid Boosted C5.0 model to predict coronary artery disease more precisely. A Hybrid Boosted C5.0 model is formed by combining the C5.0 decision tree and boosting methods. Boosting is a supervised machine learning method that leverages numerous inadequate models to construct a more robust and powerful model. The proposed model and some well-known existing machine learning models, i.e., decision tree, AdaBoost, and random forest, were implemented using an online coronary artery disease dataset of 6611 patients and compared based on various performance measuring parameters. Experimental analysis shows that the proposed model achieved an accuracy of 91.62% at training and 81.33% at the testing phase. The AUC value achieved in the training and testing phase is 0.957 and 0.88, respectively. 
The Gini value achieved in the training and testing phase is 0.914 and 0.759, respectively, far better than the proposed method.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47264440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
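C5.0 is a proprietary decision-tree learner, so the exact Boosted C5.0 model cannot be reproduced here; the boosting mechanism the paper builds on can, however, be sketched with decision stumps as the weak learners (a stand-in for illustration, not the authors' implementation):

```python
import numpy as np

def stump_fit(X, y, w):
    """Find the best weighted decision stump (feature, threshold, polarity)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def adaboost_fit(X, y, rounds=10):
    """AdaBoost with stumps; labels y must be in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(rounds):
        j, t, pol, err = stump_fit(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified samples
        w /= w.sum()
        model.append((j, t, pol, alpha))
    return model

def adaboost_predict(model, X):
    score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                for j, t, p, a in model)
    return np.sign(score)

# Tiny hypothetical example: label depends on the first feature
X = np.array([[0.1, 0.9], [0.2, 0.3], [0.8, 0.1], [0.9, 0.7]])
y = np.array([-1, -1, 1, 1])
model = adaboost_fit(X, y, rounds=5)
```

Replacing the stump with a full C5.0 tree inside the same reweighting loop yields the Boosted C5.0 idea the paper describes.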
The infrastructure of smart cities is intended to save citizens’ time and effort. After COVID-19, one such infrastructure is electronic shopping. Online consumer reviews have a big influence on the electronic retail market. Many customers save time by evaluating product quality from user reviews when deciding which products to buy online. The goal of this study is to forecast, via review-representation mining, whether reviews will be helpful when making online purchases. In this study, helpfulness is predicted from GloVe vector encodings of the review text. Using an encoding-based convolutional neural network and a bidirectional gated recurrent unit, the authors constructed a classification model. The suggested model outperformed baseline models and other state-of-the-art techniques in terms of classification outcomes, reaching the greatest accuracy of 95.81%. We also assessed the effectiveness of our models using the criteria of accuracy, precision, and recall. The outcomes presented in this study indicate how the proposed model can significantly enhance producers’ or service providers’ businesses.
{"title":"Prediction of customer review’s helpfulness based on sentences encoding using CNN-BiGRU model","authors":"Suryanarayan Sharma, Laxman Singh, Rajdev Tiwari","doi":"10.32629/jai.v6i3.699","DOIUrl":"https://doi.org/10.32629/jai.v6i3.699","url":null,"abstract":"The infrastructure of smart cities is intended to save citizens’ time and effort. After COVID-19, one of such available infrastructure is electronic shopping. Online consumer reviews have a big influence on the electronic retail market. A lot of customers save time by deciding which products to buy online by evaluating the products’ quality based on user reviews. The goal of this study is to forecast if reviews based on reviews representation mining will be helpful while making online purchases. Predicting helpfulness is used in this suggested study to determine the usefulness of a review in relation to glove vector encoding of reviews text. Using an encoding-based convolution neural network and a bidirectional gated recurrent unit, the authors of this study constructed a classification model. The suggested model outperformed these baseline models and other state-of-the-art techniques in terms of classification outcomes, reaching the greatest accuracy of 95.81%. We also assessed the effectiveness of our models using the criteria of accuracy, precision, and recall. 
The outcomes presented in this study indicate how the proposed model has a significant influence on enhancing the producers’ or service providers’ businesses.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48651486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vehicular ad hoc networks (VANETs) have attracted attention for the last couple of years due to the increasing number of vehicles on the road. Incorporation of VANETs with the Internet of Things (IoT) has created a large number of possibilities in terms of power efficiency and secure transmission. The article focuses on the ad hoc on-demand distance vector (AODV) protocol and its applications in route discovery in VANETs. In this work, a swarm intelligence (SI)-inspired modified firefly algorithm is employed for rank generation of the nodes. It is concluded that the use of IoT devices and advanced routing protocols with SI algorithms can lead to efficient, low-latency route discovery in VANETs using quality of service (QoS) parameters. The experimental analysis shows that the proposed technique outperforms existing techniques in terms of QoS parameters and provides an optimal route discovery mechanism with high throughput and minimum latency.
{"title":"An improved firefly algorithm for the rank generation to optimize the route discovery process in IoV","authors":"Sumit Kumar, Jaspreet Singh","doi":"10.32629/jai.v6i3.705","DOIUrl":"https://doi.org/10.32629/jai.v6i3.705","url":null,"abstract":"Vehicular ad hoc networks (VANET) have been the attention gainer for the last couple of years due to increasing number of vehicles on the road. Incorporation of VANET with Internet of Things (IoT) has created large number possibilities in terms of power efficiency and secure transmission. The article focuses on the ad-hoc on-demand distance vector (AODV) protocol and its applications in route discovery in VANETs. In this work, the swarm intelligence (SI) inspired modified firefly algorithm has been employed for rank generation of the nodes. It is concluded that the use of IoT devices and advanced routing protocols with SI algorithms can lead to efficient and low-latency route discovery in VANETs using quality of service (QoS) parameters. The experimental analysis shown that the proposed technique has been outperformed the other existing technique in terms of QoS parameters and provides the optimal route discovery mechanism with high throughput and minimum latency.","PeriodicalId":70721,"journal":{"name":"自主智能(英文)","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46272267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
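The paper's modified firefly ranking is not detailed in the abstract, but the canonical firefly algorithm it builds on can be sketched as follows (all parameter values and the test objective are illustrative assumptions):

```python
import numpy as np

def firefly_minimize(obj, lo, hi, dim, n=15, iters=60, beta0=1.0,
                     gamma=0.01, alpha=0.3, seed=0):
    """Canonical firefly algorithm: each firefly moves toward every brighter
    (lower-cost) firefly with distance-decayed attraction, plus a shrinking
    random walk. Returns the best point found and its cost."""
    rng = np.random.default_rng(seed)
    X = lo + (hi - lo) * rng.random((n, dim))
    F = np.array([obj(x) for x in X])
    best_x, best_f = X[np.argmin(F)].copy(), float(F.min())
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                          # j is brighter
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)   # attraction decays with distance
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    np.clip(X[i], lo, hi, out=X[i])
                    F[i] = obj(X[i])
        alpha *= 0.95                                    # cool the random walk
        if F.min() < best_f:
            best_x, best_f = X[np.argmin(F)].copy(), float(F.min())
    return best_x, best_f
```

In the paper's setting, brightness would be derived from QoS metrics of candidate routes rather than a mathematical test function, and the resulting ordering would supply the node ranks.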
This paper reviews an innovative approach to sound classification that exploits image processing techniques applied to spectrogram representations of audio signals. The study shows the effectiveness of incorporating well-established image processing methodologies, such as filtering, segmentation, and pattern recognition, to enhance feature extraction and classification performance when audio signals are transformed into spectrograms. An overview is provided of the mathematical methods shared by image processing and spectrogram-based audio processing, focusing on the commonalities between the two domains in their underlying principles, techniques, and algorithms. The proposed methodology leverages, in particular, convolutional neural networks (CNNs) to extract and classify time-frequency features from spectrograms, capitalizing on their hierarchical feature learning and robustness to translation and scale variations. Other deep-learning networks and advanced techniques are suggested during the analysis. We discuss the benefits and limitations of transforming audio signals into spectrograms, including human interpretability, compatibility with image processing techniques, and flexibility in time-frequency resolution. By bridging the gap between image processing and audio processing, spectrogram-based audio deep learning offers a deeper perspective on sound classification, providing fundamental insights that serve as a foundation for interdisciplinary research and applications in both domains.
Valentina Franzoni, "Cross-domain synergy: Leveraging image processing techniques for enhanced sound classification through spectrogram analysis using CNNs," Autonomous Intelligence (English), published 2023-08-28, doi: 10.32629/jai.v6i3.678 (https://doi.org/10.32629/jai.v6i3.678).
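The CNN pipeline itself is not reproduced in the abstract. As a minimal, numpy-only sketch of the core idea (turning an audio signal into a spectrogram that can then be treated with ordinary image-processing operations), one might write the following; names and parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram: frames x frequency bins, usable as a 2-D image."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    # Slice the signal into overlapping windowed frames.
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # One real FFT per frame; rows are time, columns are frequency.
    return np.abs(np.fft.rfft(frames, axis=1))

def mean_filter2d(img, k=3):
    """A plain image-processing smoothing filter applied to the spectrogram."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

For a pure 1 kHz sine sampled at 8 kHz, the energy concentrates in the bin at 1000 / (8000 / 256) = bin 32, and the smoothed spectrogram can be fed to any 2-D classifier, a CNN in the paper's case.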