Abstract The Internet of medical things (IoMT) is increasingly used to deliver healthcare services. Because IoMT devices are vulnerable to cyberattacks, healthcare centers and patients face privacy and security challenges. Combining blockchain (BC) technology with artificial intelligence (AI) can create a safer IoMT environment, but such systems remain costly and still suffer from security and privacy problems. Based on a comprehensive literature review, this study summarizes previous IoMT research and discusses the roles of AI, BC, and cybersecurity in the IoMT, along with the problems, opportunities, and research directions in this field. The review describes integration schemes for AI, BC, and cybersecurity technologies that can support the development of new, decentralized systems, especially in healthcare applications. It also identifies the strengths and weaknesses of these technologies, as well as the datasets they use.
Source: Aya Hamid Ameen, M. A. Mohammed, A. N. Rashid, "Dimensions of artificial intelligence techniques, blockchain, and cyber security in the Internet of medical things: Opportunities, challenges, and future directions," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2022-0267.
Abstract Online product reviews are the finest resource consumers have for evaluating products, yet accurate and helpful reviews can be difficult to find: reviews may be corrupted, biased, contradictory, or lacking in detail. This opens the door for customer-focused review analysis methods. The proposed method, "Multi-Domain Keyword Extraction using Word Vectors," streamlines the customer experience by gathering reviews from several websites and providing in-depth assessments of them. Using the product's specific model number, reviews are continuously collected from different e-commerce websites. Machine learning identifies the aspects and key phrases in the reviews, and context-based sentiment analysis computes the average sentiment for each keyword. To discover keywords precisely in massive texts, word-embedding data are analyzed with machine-learning techniques. A methodology for locating trustworthy reviews considers several criteria that determine what makes a review credible. Experiments on real-time datasets showed better results than existing traditional models.
Source: M. Venu Gopalachari, Sangeeta Gupta, Salakapuri Rakesh, Dharmana Jayaram, Pulipati Venkateswara Rao, "Aspect-based sentiment analysis on multi-domain reviews through word embedding," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2023-0001.
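The per-keyword averaging step the abstract describes can be sketched with a toy lexicon in place of the paper's context-based sentiment model; the word lists, reviews, and aspect names below are invented for illustration.

```python
from collections import defaultdict

# Toy polarity lexicon standing in for a trained sentiment model (assumption).
POSITIVE = {"great", "sharp", "fast", "excellent"}
NEGATIVE = {"poor", "slow", "dim", "bad"}

def sentence_sentiment(tokens):
    """Score a sentence in [-1, 1] by counting polar words."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def aspect_sentiment(reviews, aspects):
    """Average the sentiment of every sentence that mentions each aspect keyword."""
    scores = defaultdict(list)
    for review in reviews:
        for sentence in review.lower().split("."):
            tokens = sentence.split()
            for aspect in aspects:
                if aspect in tokens:
                    scores[aspect].append(sentence_sentiment(tokens))
    return {a: sum(v) / len(v) for a, v in scores.items() if v}

reviews = ["The screen is great and sharp. The battery is slow to charge.",
           "Poor screen but the battery life is excellent."]
print(aspect_sentiment(reviews, ["screen", "battery"]))
# {'screen': 0.5, 'battery': -0.5}
```

A real system would replace the lexicon with the paper's word-vector keyword extraction and contextual sentiment scoring, but the aggregation logic is the same.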
Abstract With the advent of the Internet of Things (IoT) era, intelligent devices are applied ever more widely in networks, and monitoring technology is gradually developing toward intelligence and digitization. Face recognition, a hot topic in computer vision, still faces problems such as a low level of intelligence and long processing times. Therefore, with IoT technical support, this research uses internet protocol cameras to collect face information, improves principal component analysis (PCA) into a proposed PLV algorithm, and applies it to a face recognition system for remote monitoring. The results show that on the Olivetti Research Laboratory face database, the accuracy of PLV is relatively stable, ranging from 94 to 98%. On the Yale database, its accuracy is 12% higher than that of the PCA algorithm; on the Georgia Institute of Technology (GT) database, the PLV algorithm takes 0.2–0.3 seconds and runs efficiently. On the selected remote-monitoring face database, the method's accuracy is stable above 90%, peaking at 98%, indicating that it can effectively improve face recognition accuracy and provide a reference technique for further optimization of remote monitoring systems.
Source: Bo Fu, "Face recognition of remote monitoring under the Ipv6 protocol technology of Internet of Things architecture," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2022-0283.
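The PCA baseline the PLV algorithm improves on can be sketched as an eigenface-style pipeline: project flattened images onto the top principal components and classify by nearest neighbour. The synthetic two-class "faces" below are invented for illustration; the paper's PLV refinement is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit(X, k):
    """Return the mean and top-k principal directions of the rows of X."""
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal directions in Vt.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def nearest_neighbour_label(train_proj, labels, query_proj):
    """Classify a projected query by its closest projected training sample."""
    d = np.linalg.norm(train_proj - query_proj, axis=1)
    return labels[int(np.argmin(d))]

# Synthetic "faces": two identities as noisy copies of two random templates.
a, b = rng.normal(0, 1, 64), rng.normal(0, 1, 64)
X = np.vstack([a + rng.normal(0, 0.1, 64) for _ in range(5)] +
              [b + rng.normal(0, 0.1, 64) for _ in range(5)])
y = [0] * 5 + [1] * 5

mean, comps = pca_fit(X, 3)
proj = (X - mean) @ comps.T
query = (a + rng.normal(0, 0.1, 64) - mean) @ comps.T
print(nearest_neighbour_label(proj, y, query))  # expect class 0
```

With templates far apart relative to the noise, the top components capture the between-identity direction, so the nearest neighbour recovers the correct identity.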
Abstract Accurate speech recognition benefits machine translation and intelligent human–computer interaction. After briefly introducing speech recognition algorithms, this study proposes recognizing speech with a recurrent neural network (RNN) and adopts the connectionist temporal classification (CTC) algorithm to force-align input speech sequences with output text sequences. Simulation experiments compared the RNN-CTC algorithm with the Gaussian mixture model–hidden Markov model and convolutional neural network–CTC algorithms. The results showed that the more training samples an algorithm had, the higher its recognition accuracy, though training time grew accordingly; likewise, the more test samples, the lower the recognition accuracy and the longer the testing time. With equal numbers of training and testing samples, the proposed RNN-CTC algorithm always had the highest accuracy and the lowest training and testing times of the three.
Source: Shuyan Wang, "Recognition of English speech – using a deep learning algorithm," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2022-0236.
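The alignment step CTC performs can be illustrated by its standard greedy decoding rule: take the per-frame argmax labels, merge repeats, then drop blanks. The frame labels below are invented; a full RNN-CTC model is not reproduced here.

```python
def ctc_collapse(path, blank=0):
    """Greedy CTC decoding: merge repeated labels, then drop blank symbols."""
    out, prev = [], None
    for p in path:
        if p != prev and p != blank:
            out.append(p)
        prev = p  # track the previous frame label to merge repeats
    return out

# A frame-level argmax path over label ids (0 = blank) -> output label sequence.
# The blank between the two 3s keeps them as distinct output symbols.
print(ctc_collapse([0, 3, 3, 0, 3, 5, 5, 0]))  # [3, 3, 5]
```

This is why CTC needs the blank symbol: without it, genuinely repeated output labels could not be distinguished from one label held across several frames.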
Abstract At present, low-cost Red Green Blue Depth (RGB-D) sensors are the mainstay of indoor robot environment perception, but the depth information they provide suffers from poor accuracy and high noise, so the generated 3D color point-cloud maps have low accuracy. To solve these problems, this article proposes a vision sensor-based point-cloud map generation algorithm for robot indoor navigation, which obtains a more accurate point-cloud map through visual SLAM and a Kalman-filter visual-inertial attitude fusion algorithm. In positioning-speed tests, the fusion algorithm's average camera-tracking time was 23.4 ms, meeting the processing requirement of 42 frames per second. The fusion algorithm had the smallest yaw-angle error, and its ATE test values were smaller than those of the inertial measurement unit and Simultaneous-Localization-and-Mapping algorithms. The algorithm makes the mapping process more stable and robust, enables more accurate route planning from visual sensors, and improves the robot's indoor positioning accuracy. In addition, it can obtain a dense point-cloud map in real time, offering a more comprehensive approach to point-cloud map generation for robot indoor navigation.
Source: Qin Zhang, Xiushan Liu, "Robot indoor navigation point cloud map generation algorithm based on visual sensing," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2022-0258.
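The visual-inertial attitude fusion the abstract mentions can be sketched, in one dimension, as a Kalman filter that predicts yaw from the gyro rate and corrects with the visual yaw estimate. The noise levels, gains, and data below are invented; the paper's full 3D formulation is not reproduced.

```python
import numpy as np

def fuse_yaw(gyro_rates, visual_yaw, dt=0.1, q=0.01, r=0.25):
    """1-D Kalman filter: predict yaw from the gyro rate, correct with vision."""
    x, p = 0.0, 1.0  # state estimate and its variance
    est = []
    for rate, z in zip(gyro_rates, visual_yaw):
        # Predict: integrate the gyro rate, inflate uncertainty.
        x += rate * dt
        p += q
        # Update: blend in the visual yaw measurement z.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1 - k)
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(1)
true_yaw = np.cumsum(np.full(50, 0.05))                # ground-truth trajectory
gyro = np.full(50, 0.5) + rng.normal(0, 0.02, 50)      # rate so that rate*dt = 0.05
vis = true_yaw + rng.normal(0, 0.5, 50)                # noisy visual yaw
fused = fuse_yaw(gyro, vis)
# The fused estimate should track truth better than the raw visual yaw.
print(np.abs(fused - true_yaw).mean() < np.abs(vis - true_yaw).mean())
```

The same predict/correct pattern, with a full attitude state, underlies visual-inertial fusion in SLAM pipelines.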
A. A. Rawi, Murtada K. Elbashir, Awadallah M. Ahmed
Abstract This study addresses a limitation of previous works that treated electrocardiogram (ECG) classification as a multiclass problem, even though many abnormalities are diagnosed simultaneously in real life, making it a multilabel classification problem. The study tests the effectiveness of deep learning (DL)-based methods (Inception, MobileNet, LeNet, AlexNet, VGG16, and ResNet50) on three large 12-lead ECG datasets to overcome this limitation. The define-by-run technique is used to build the most efficient DL model with the tree-structured Parzen estimator (TPE) algorithm. Results show that the proposed methods classify ECG abnormalities in large datasets with high accuracy and precision, the best being 97.89% accuracy and 90.83% precision on the Ningbo dataset (42 classes, Inception model); 96.53% accuracy and 85.67% precision on the PTB-XL dataset (24 classes, AlexNet model); and 95.02% accuracy and 70.71% precision on the Georgia dataset (23 classes, AlexNet model). The optimum model proposed by the define-by-run technique achieved 97.33% accuracy and 97.71% precision on Ningbo (42 classes); 96.60% accuracy and 83.66% precision on PTB-XL (24 classes); and 94.32% accuracy and 66.97% precision on Georgia (23 classes). The proposed TPE-based DL methods provide accurate multilabel classification of ECG abnormalities, improving the diagnostic accuracy of heart conditions.
Source: "Deep learning models for multilabel ECG abnormalities classification: A comparative study using TPE optimization," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2023-0002.
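The "define-by-run" idea the abstract relies on can be sketched in miniature: the objective function samples its own hyperparameters through a trial object, so the search space is constructed as the code executes. Plain random sampling stands in here for the TPE sampler, and the toy objective is invented (a real run would return a model's validation loss).

```python
import random

class Trial:
    """Minimal define-by-run trial object (assumption: random sampling, not TPE)."""
    def __init__(self, rng):
        self.rng, self.params = rng, {}

    def suggest_int(self, name, low, high):
        self.params[name] = self.rng.randint(low, high)  # inclusive bounds
        return self.params[name]

    def suggest_float(self, name, low, high):
        self.params[name] = self.rng.uniform(low, high)
        return self.params[name]

def objective(trial):
    # Toy stand-in for validation loss of a network with these settings;
    # the optimum is layers=4, lr=0.01.
    layers = trial.suggest_int("layers", 1, 8)
    lr = trial.suggest_float("lr", 1e-4, 1e-1)
    return (layers - 4) ** 2 + (lr - 0.01) ** 2

def search(objective, n_trials=200, seed=0):
    """Run trials and keep the best (loss, params) pair."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        t = Trial(rng)
        loss = objective(t)
        if best is None or loss < best[0]:
            best = (loss, t.params)
    return best

loss, params = search(objective)
print(params["layers"])  # 4
```

TPE would replace the uniform sampling with a model of which regions of the space produced good trials, but the define-by-run interface is the same.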
Min Qin, Ravi Kumar, Mohammad Shabaz, Sanjay Agal, Pavitar Parkash Singh, Anooja Ammini
Abstract With the wide popularization of Internet of Things (IoT) technology, the design and implementation of intelligent speech equipment has attracted growing research attention; speech recognition is one of the core technologies for controlling intelligent mechanical equipment. An industrial IoT sensor-based broadcast speech recognition and control system is presented to address the integration of such a system with IoT sensors for smart cities. This work provides a design approach for an intelligent voice control system on the Robot Operating System (ROS). The speech recognition control program is created with the Baidu intelligent voice software development kit, and the experiments are run on a specific robot platform. ROS uses communication modules to connect the system's modules over the network, mostly via topic-based asynchronous data transmission; a point-to-point network structure serves as the communication channel between the many processes that make up a ROS system. The hardware consists mainly of the main controller's motor driver module, a power module, a WiFi module, a Bluetooth module, and a laser-ranging module. Experimental findings show that the control system can identify the captured sound signals, translate them into control instructions, and direct the robot platform to carry out the corresponding actions; over 95% of speech is recognized. The system's high recognition rate and ease of use meet the requirements of most industrial controls, and it can significantly increase efficiency in production and daily life.
Source: "Broadcast speech recognition and control system based on Internet of Things sensors for smart cities," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2023-0067.
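The final "recognized text to control instruction" step can be sketched as a simple dispatch table; the command words and velocity pairs below are invented, and a real system would publish the result on a ROS topic rather than return it.

```python
# Hypothetical phrase-to-command table; (linear, angular) velocity pairs are
# illustrative, not from the paper.
COMMANDS = {"forward": (1, 0), "back": (-1, 0), "left": (0, 1), "right": (0, -1)}

def to_twist(recognized_text):
    """Map a recognized phrase to a (linear, angular) velocity command."""
    for word, twist in COMMANDS.items():
        if word in recognized_text.lower().split():
            return twist
    return (0, 0)  # unknown phrase: stop for safety

print(to_twist("Robot, move forward"))  # (1, 0)
```

Keeping the mapping declarative makes it easy to extend the vocabulary without touching the recognition or motor-control code.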
Sarah Ghanim Mahmood Al-kababchee, Z. Algamal, O. Qasim
Abstract Clustering, a primary data-mining method, has several uses, including gene analysis. In a clustering study, an unsupervised learning problem, a set of unlabeled data is divided into clusters using data features, such that data in a cluster are more similar to one another than to those in other groups. However, the number of clusters has a direct impact on how well the K-means algorithm performs, and finding good solutions to such real-world optimization problems requires techniques that properly explore the search space. This research proposes an enhancement of K-means clustering based on an equilibrium optimization approach: the method adjusts the number of clusters while simultaneously choosing the best attributes to find the optimal answer. On five datasets, the findings establish the usefulness of the suggested method compared with existing algorithms in terms of intra-cluster distances and the Rand index. The suggested technique can therefore be employed successfully for data clustering and can offer significant support.
Source: "Enhancement of K-means clustering in big data based on equilibrium optimizer algorithm," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2022-0230.
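As a baseline for the enhancement the abstract describes, here is a plain Lloyd's K-means sketch on synthetic blobs (data and parameters invented). The paper's equilibrium-optimizer variant additionally searches over the number of clusters and the feature subset; here we only show that intra-cluster distance falls as k is tuned.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k, iters=50):
    """Plain Lloyd's K-means; returns labels, centers, and total inertia."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old one if a cluster empties.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    inertia = ((X - centers[labels]) ** 2).sum()
    return labels, centers, inertia

# Two well-separated synthetic blobs.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
_, _, i1 = kmeans(X, 1)
_, _, i2 = kmeans(X, 2)
print(i2 < i1)  # more clusters -> lower intra-cluster distance
```

An outer optimizer (equilibrium optimizer in the paper) would score candidate (k, feature-subset) pairs with such an inner K-means run and keep the best.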
Abstract In wireless communication, wireless sensor networks usually must collect and process information in very harsh environments, so accurate sensor positioning is key to the technology. In this study, the Davidon–Fletcher–Powell (DFP) algorithm was combined with particle swarm optimization (PSO), exploiting PSO's iterative optimization to reduce the influence of distance-estimation error on positioning accuracy. In the experiments, the average precision (AP) of PSO-DFP reached 0.9972, the highest among the DFP, PSO, and PSO-DFP algorithms, and its maximum node-positioning error was only about 21 mm. The results showed that PSO-DFP performed better, with an average positioning error inversely proportional to the proportion of anchor nodes, the node communication radius, and the node density. In conclusion, the PSO-DFP node localization algorithm locates nodes more accurately and more stably than traditional localization algorithms.
Source: Jingjing Sun, Peng Zhang, Xiaohong Kong, "Wireless sensor node localization algorithm combined with PSO-DFP," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2022-0323.
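The PSO half of PSO-DFP can be sketched as minimizing the squared residuals between measured and candidate ranges to known anchors. The anchor layout, noise level, and swarm parameters below are invented, and the paper's DFP quasi-Newton refinement step is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic anchors and noisy range measurements to one unknown node.
anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
true_pos = np.array([3., 7.])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

def residual(p):
    """Sum of squared differences between measured and candidate ranges."""
    return float(((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2).sum())

def pso(f, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over the 10 m x 10 m deployment area."""
    pos = rng.uniform(0, 10, (n, 2))
    vel = np.zeros((n, 2))
    pbest, pbest_f = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 1)), rng.random((n, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos += vel
        for i, p in enumerate(pos):
            fp = f(p)
            if fp < pbest_f[i]:
                pbest[i], pbest_f[i] = p.copy(), fp
        g = pbest[pbest_f.argmin()].copy()
    return g

est = pso(residual)
print(np.linalg.norm(est - true_pos) < 0.5)  # within 0.5 m of the true node
```

In the paper's scheme, the swarm's best position would then seed DFP, whose quasi-Newton updates polish the estimate to the reported millimetre-level error.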
Jiao Ye, Hemant N. Patel, Sankaranamasivayam Meena, Renato R. Maaliw, Samuel-Soma M. Ajibade, Ismail Keshta
Abstract In order to realize online detection and control of network viruses in robots, the authors propose a data mining-based anti-virus solution for smart robots. First, using an internet of things (IoT) intrusion prevention system design method based on network intrusion signal detection and feedforward modulation filtering, the overall design description and functional analysis are carried out; the intrusion signal detection algorithm is then designed; and finally, the hardware design and software development of the IoT breach-protection solution are completed, realizing the integrated design of the system. The findings demonstrated that, averaged over 10,000 tests, the IoT's packet loss rate is 0. In conclusion, the system has high accuracy, good performance, and strong compatibility and user-friendliness.
Source: "Smart robots' virus defense using data mining technology," Journal of Intelligent Systems, 2023, doi:10.1515/jisys-2023-0065.