Deep learning models for multilabel ECG abnormalities classification: A comparative study using TPE optimization
A. A. Rawi, Murtada K. Elbashir, Awadallah M. Ahmed
DOI: 10.1515/jisys-2023-0002
Abstract This study addresses a limitation of previous work, which treated electrocardiogram (ECG) classification as a multiclass problem even though multiple abnormalities are often diagnosed simultaneously in practice, making it a multilabel classification problem. The aim of the study is to test the effectiveness of deep learning (DL)-based methods (Inception, MobileNet, LeNet, AlexNet, VGG16, and ResNet50) on three large 12-lead ECG datasets to overcome this limitation. The define-by-run technique is used to build the most efficient DL model using the tree-structured Parzen estimator (TPE) algorithm. Results show that the proposed methods achieve high accuracy and precision in classifying ECG abnormalities on large datasets, with the best results being 97.89% accuracy and 90.83% precision on the Ningbo dataset (42 classes) for the Inception model; 96.53% accuracy and 85.67% precision on the PTB-XL dataset (24 classes) for the AlexNet model; and 95.02% accuracy and 70.71% precision on the Georgia dataset (23 classes) for the AlexNet model. The best results achieved by the optimum model proposed through the define-by-run technique were 97.33% accuracy and 97.71% precision on the Ningbo dataset (42 classes); 96.60% accuracy and 83.66% precision on the PTB-XL dataset (24 classes); and 94.32% accuracy and 66.97% precision on the Georgia dataset (23 classes). The proposed DL-based methods using the TPE algorithm provide accurate results for multilabel classification of ECG abnormalities, improving the diagnostic accuracy of heart conditions.
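The abstract does not name the define-by-run framework. As a hedged illustration only, the sketch below shows how a define-by-run search space with a TPE sampler can be written with the open-source Optuna library, which implements this pattern; the layer ranges and the synthetic score are assumptions standing in for the paper's actual search space and validation metric.

```python
# A minimal define-by-run TPE sketch using Optuna (an assumption: the paper's
# framework is not named; Optuna is one library that implements this pattern).
import optuna

def objective(trial):
    # The search space is declared dynamically ("define-by-run"): the number
    # of convolutional blocks decides how many further parameters exist.
    n_blocks = trial.suggest_int("n_blocks", 1, 4)
    layers = []
    for i in range(n_blocks):
        layers.append({
            "filters": trial.suggest_categorical(f"filters_{i}", [16, 32, 64, 128]),
            "kernel": trial.suggest_int(f"kernel_{i}", 3, 9, step=2),
        })
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)

    # Placeholder score: in the real study this would be the validation
    # accuracy of the trained multilabel ECG model. A synthetic value is
    # returned here so the sketch runs standalone.
    return 1.0 - abs(dropout - 0.2) - abs(lr - 1e-3) - 0.01 * n_blocks

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=50)
print(study.best_params)
```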
{"title":"Deep learning models for multilabel ECG abnormalities classification: A comparative study using TPE optimization","authors":"A. A. Rawi, Murtada K. Elbashir, Awadallah M. Ahmed","doi":"10.1515/jisys-2023-0002","DOIUrl":"https://doi.org/10.1515/jisys-2023-0002","url":null,"abstract":"Abstract The problem addressed in this study is the limitations of previous works that considered electrocardiogram (ECG) classification as a multiclass problem, despite many abnormalities being diagnosed simultaneously in real life, making it a multilabel classification problem. The aim of the study is to test the effectiveness of deep learning (DL)-based methods (Inception, MobileNet, LeNet, AlexNet, VGG16, and ResNet50) using three large 12-lead ECG datasets to overcome this limitation. The define-by-run technique is used to build the most efficient DL model using the tree-structured Parzen estimator (TPE) algorithm. Results show that the proposed methods achieve high accuracy and precision in classifying ECG abnormalities for large datasets, with the best results being 97.89% accuracy and 90.83% precision for the Ningbo dataset, classifying 42 classes for the Inception model; 96.53% accuracy and 85.67% precision for the PTB-XL dataset, classifying 24 classes for the Alex net model; and 95.02% accuracy and 70.71% precision for the Georgia dataset, classifying 23 classes for the Alex net model. The best results achieved for the optimum model that was proposed by the define-by-run technique were 97.33% accuracy and 97.71% precision for the Ningbo dataset, classifying 42 classes; 96.60% accuracy and 83.66% precision for the PTB-XL dataset, classifying 24 classes; and 94.32% accuracy and 66.97% precision for the Georgia dataset, classifying 23 classes. The proposed DL-based methods using the TPE algorithm provide accurate results for multilabel classification of ECG abnormalities, improving the diagnostic accuracy of heart conditions.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80459212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot indoor navigation point cloud map generation algorithm based on visual sensing
Qin Zhang, Xiushan Liu
DOI: 10.1515/jisys-2022-0258
Abstract At present, low-cost Red Green Blue Depth (RGB-D) sensors are the main means of indoor robot environment perception, but the depth information obtained by RGB-D cameras suffers from poor accuracy and high noise, and the generated 3D color point cloud maps have low accuracy. To solve these problems, this article proposes a vision sensor-based point cloud map generation algorithm for robot indoor navigation. The aim is to obtain a more accurate point cloud map through visual SLAM and a Kalman-filter visual-inertial attitude fusion algorithm. The results show that in the positioning speed tests of the fusion algorithm, camera tracking takes 23.4 ms per frame on average, which meets the processing speed requirement of 42 frames per second. The fusion algorithm has the smallest yaw angle error, and its absolute trajectory error (ATE) test values are smaller than those of the inertial-measurement-unit-only and SLAM-only algorithms. The algorithm makes the mapping process more stable and robust, enables more accurate route planning from visual sensors, and improves the robot's indoor positioning accuracy. In addition, it can build a dense point cloud map in real time, providing a more comprehensive basis for research on point cloud map generation for robot indoor navigation.
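As a minimal illustration of the Kalman-filter visual-inertial fusion the abstract describes, the sketch below fuses a gyro-integrated yaw prediction with a visual yaw measurement in one dimension; the noise variances and the 42 fps frame interval are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def fuse_yaw(gyro_rates, visual_yaws, dt=1/42, q=1e-4, r=1e-2):
    """One-dimensional Kalman fusion of IMU-predicted yaw with visual yaw.

    gyro_rates: yaw rate from the IMU (rad/s), one value per frame.
    visual_yaws: absolute yaw from visual SLAM (rad), one value per frame.
    q, r: assumed process/measurement noise variances (illustrative).
    """
    yaw, p = 0.0, 1.0
    fused = []
    for rate, z in zip(gyro_rates, visual_yaws):
        # Predict: integrate the gyro rate over one frame interval.
        yaw += rate * dt
        p += q
        # Update: correct the prediction with the visual measurement.
        k = p / (p + r)               # Kalman gain
        yaw += k * (z - yaw)
        p *= (1 - k)
        fused.append(yaw)
    return np.array(fused)

# Synthetic demo: noisy gyro and noisy visual yaw around a slow turn.
t = np.arange(0, 2, 1/42)
true_yaw = 0.3 * t
gyro = np.full_like(t, 0.3) + np.random.normal(0, 0.05, t.size)
vis = true_yaw + np.random.normal(0, 0.1, t.size)
print(fuse_yaw(gyro, vis)[-5:])
```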
{"title":"Robot indoor navigation point cloud map generation algorithm based on visual sensing","authors":"Qin Zhang, Xiushan Liu","doi":"10.1515/jisys-2022-0258","DOIUrl":"https://doi.org/10.1515/jisys-2022-0258","url":null,"abstract":"Abstract At present, low-cost Red Green Blue Depth (RGB-D) sensors are mainly used in indoor robot environment perception, but the depth information obtained by RGB-D cameras has problems such as poor accuracy and high noise, and the generated 3D color point cloud map has low accuracy. In order to solve these problems, this article proposes a vision sensor-based point cloud map generation algorithm for robot indoor navigation. The aim is to obtain a more accurate point cloud map through visual SLAM and Kalman filtering visual-inertial navigation attitude fusion algorithm. The results show that in the positioning speed test data of the fusion algorithm in this study, the average time-consuming of camera tracking is 23.4 ms, which can meet the processing speed requirement of 42 frames per second. The yaw angle error of the fusion algorithm is the smallest, and the ATE test values of the algorithm are smaller than those of the Inertial measurement unit and Simultaneous-Localization-and-Mapping algorithms. This research algorithm can make the mapping process more stable and robust. It can use visual sensors to make more accurate route planning, and this algorithm improves the indoor positioning accuracy of the robot. In addition, the research algorithm can also obtain a dense point cloud map in real time, which provides a more comprehensive idea for the research of robot indoor navigation point cloud map generation.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86257607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Broadcast speech recognition and control system based on Internet of Things sensors for smart cities
Min Qin, Ravi Kumar, Mohammad Shabaz, Sanjay Agal, Pavitar Parkash Singh, Anooja Ammini
DOI: 10.1515/jisys-2023-0067
Abstract With the wide adoption of Internet of Things (IoT) technology, the design and implementation of intelligent speech equipment have attracted increasing research attention. Speech recognition is one of the core technologies for controlling intelligent mechanical equipment. An industrial IoT sensor-based broadcast speech recognition and control system is presented to address the problem of integrating such a system with IoT sensors for smart cities. This work provides a design approach for an intelligent voice control system for the Robot Operating System (ROS). The speech recognition control program is created using the Baidu intelligent voice software development kit, and the experiment is run on a specific robot platform. ROS uses communication modules to implement network connections between system modules, mostly via topic-based asynchronous data transmission; a point-to-point network structure serves as the communication channel for the many processes that make up a ROS system. The hardware consists mainly of the main controller's motor driving module, a power module, a WiFi module, a Bluetooth module, a laser ranging module, etc. According to the experimental findings, the control system can identify the gathered sound signals, translate them into control instructions, and direct the robot platform to carry out the corresponding actions. The speech recognition rate exceeds 95%. The control system has a high recognition rate and is simple to use, which is what most industrial controls require; it has significant implications for the advancement of control technology and may significantly increase production and everyday efficiency.
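The mapping from recognized text to published control instructions can be sketched as follows, assuming ROS 1 with rospy and a /cmd_vel Twist topic; the command vocabulary is hypothetical, and a placeholder string stands in for the Baidu SDK output, whose API the abstract does not specify.

```python
#!/usr/bin/env python
# Minimal ROS 1 sketch: map a recognized speech command to a velocity message.
# Assumes a running ROS master; the speech text is a placeholder for the
# output of the Baidu voice SDK, whose API is not given in the abstract.
import rospy
from geometry_msgs.msg import Twist

COMMANDS = {               # hypothetical command vocabulary
    "forward": (0.2, 0.0),
    "back": (-0.2, 0.0),
    "left": (0.0, 0.5),
    "right": (0.0, -0.5),
    "stop": (0.0, 0.0),
}

def publish_command(pub, text):
    linear, angular = COMMANDS.get(text, (0.0, 0.0))
    msg = Twist()
    msg.linear.x = linear
    msg.angular.z = angular
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("speech_control")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rospy.sleep(1.0)                     # let the publisher register
    publish_command(pub, "forward")      # stand-in for recognized speech
```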
{"title":"Broadcast speech recognition and control system based on Internet of Things sensors for smart cities","authors":"Min Qin, Ravi Kumar, Mohammad Shabaz, Sanjay Agal, Pavitar Parkash Singh, Anooja Ammini","doi":"10.1515/jisys-2023-0067","DOIUrl":"https://doi.org/10.1515/jisys-2023-0067","url":null,"abstract":"Abstract With the wide popularization of Internet of Things (IoT) technology, the design and implementation of intelligent speech equipment have attracted more and more researchers’ attention. Speech recognition is one of the core technologies to control intelligent mechanical equipment. An industrial IoT sensor-based broadcast speech recognition and control system is presented to address the issue of integrating a broadcast speech recognition and control system with an IoT sensor for smart cities. In this work, a design approach for creating an intelligent voice control system for the Robot operating system (ROS) is provided. The speech recognition control program for the ROS is created using the Baidu intelligent voice software development kit, and the experiment is run on a particular robot platform. ROS makes use of communication modules to implement network connections between various system modules, mostly via topic-based asynchronous data transmission. A point-to-point network structure serves as the communication channel for the many operations that make up the ROS. The hardware component is mostly made up of the main controller’s motor driving module, a power module, a WiFi module, a Bluetooth module, a laser ranging module, etc. According to the experimental findings, the control system can identify the gathered sound signals, translate them into control instructions, and then direct the robot platform to carry out the necessary actions in accordance with the control instructions. Over 95% of speech is recognized. The control system has a high recognition rate and is simple to use, which is what most industrial controls require. It has significant implications for the advancement of control technology and may significantly increase production and life efficiency.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135261082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancement of K-means clustering in big data based on equilibrium optimizer algorithm
Sarah Ghanim Mahmood Al-kababchee, Z. Algamal, O. Qasim
DOI: 10.1515/jisys-2022-0230
Abstract Clustering, a primary data mining method, has many uses, including gene analysis. In a clustering study, a set of unlabeled data is divided into clusters using data features, which is an unsupervised learning problem. Data in a cluster are more similar to one another than to those in other clusters. However, the number of clusters has a direct impact on how well the K-means algorithm performs, and finding good solutions to such real-world optimization problems requires techniques that properly explore the search space. This research proposes an enhancement of K-means clustering based on the equilibrium optimizer. The suggested approach adjusts the number of clusters while simultaneously selecting the best attributes. The findings establish the usefulness of the suggested method compared with existing algorithms in terms of intra-cluster distances and Rand index on five datasets: the proposed method performs better both on the intra-cluster distances of elements within the same cluster and on the Rand index. In conclusion, the suggested technique can be successfully employed for data clustering and can offer significant support.
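The evaluation loop the abstract implies, jointly choosing the number of clusters and a feature subset, can be sketched as below. A plain random search stands in for the equilibrium optimizer, whose update equations are not reproduced here, and the silhouette score stands in for the paper's intra-cluster-distance and Rand-index criteria so the example is self-contained.

```python
# Sketch: jointly search over k and a feature mask for K-means. A random
# search stands in for the equilibrium optimizer (an assumption for brevity).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

X = load_iris().data
rng = np.random.default_rng(0)

def fitness(k, mask):
    if mask.sum() == 0:
        return -1.0                      # empty feature subsets are invalid
    Xs = X[:, mask]
    labels = KMeans(n_clusters=int(k), n_init=10, random_state=0).fit_predict(Xs)
    return silhouette_score(Xs, labels)  # stand-in clustering criterion

best = (-1.0, None, None)
for _ in range(100):                     # random-search stand-in for the optimizer
    k = rng.integers(2, 8)
    mask = rng.random(X.shape[1]) > 0.5
    f = fitness(k, mask)
    if f > best[0]:
        best = (f, k, mask)

print("best k:", int(best[1]), "features:", np.flatnonzero(best[2]),
      "silhouette:", round(best[0], 3))
```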
{"title":"Enhancement of K-means clustering in big data based on equilibrium optimizer algorithm","authors":"Sarah Ghanim Mahmood Al-kababchee, Z. Algamal, O. Qasim","doi":"10.1515/jisys-2022-0230","DOIUrl":"https://doi.org/10.1515/jisys-2022-0230","url":null,"abstract":"Abstract Data mining’s primary clustering method has several uses, including gene analysis. A set of unlabeled data is divided into clusters using data features in a clustering study, which is an unsupervised learning problem. Data in a cluster are more comparable to one another than to those in other groups. However, the number of clusters has a direct impact on how well the K-means algorithm performs. In order to find the best solutions for these real-world optimization issues, it is necessary to use techniques that properly explore the search spaces. In this research, an enhancement of K-means clustering is proposed by applying an equilibrium optimization approach. The suggested approach adjusts the number of clusters while simultaneously choosing the best attributes to find the optimal answer. The findings establish the usefulness of the suggested method in comparison to existing algorithms in terms of intra-cluster distances and Rand index based on five datasets. Through the results shown and a comparison of the proposed method with the rest of the traditional methods, it was found that the proposal is better in terms of the internal dimension of the elements within the same cluster, as well as the Rand index. In conclusion, the suggested technique can be successfully employed for data clustering and can offer significant support.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89420203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless sensor node localization algorithm combined with PSO-DFP
Jingjing Sun, Peng Zhang, Xiaohong Kong
DOI: 10.1515/jisys-2022-0323
Abstract In wireless communication, wireless sensor networks usually need to collect and process information in very harsh environments, so accurate positioning of sensor nodes is key. In this study, the Davidon–Fletcher–Powell (DFP) algorithm was combined with particle swarm optimization (PSO) to reduce the influence of distance estimation error on positioning accuracy, exploiting the iterative optimization characteristics of PSO. In the experiments, among the average precision (AP) values of the DFP, PSO, and PSO-DFP algorithms, the AP value of PSO-DFP was 0.9972, and the maximum node positioning error of PSO-DFP was only about 21 mm. The results showed that PSO-DFP performed better, and that its average positioning error was inversely proportional to the proportion of anchor nodes, the node communication radius, and the node density. In conclusion, the wireless sensor node localization algorithm combined with PSO-DFP achieves better localization and higher stability than traditional localization algorithms.
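A minimal sketch of the PSO stage: refine a node position so that its distances to known anchors match noisy range estimates. The DFP quasi-Newton refinement that the paper couples with PSO is omitted, and the anchor layout, noise level, and PSO constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
# Noisy range estimates, standing in for RSSI/TOA-derived distances.
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.1, 4)

def cost(p):
    # Squared mismatch between candidate-to-anchor distances and measurements.
    return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

# Plain PSO; the DFP quasi-Newton refinement step from the paper is omitted.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0, 10, (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("estimated:", gbest, "true:", true_pos)
```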
{"title":"Wireless sensor node localization algorithm combined with PSO-DFP","authors":"Jingjing Sun, Peng Zhang, Xiaohong Kong","doi":"10.1515/jisys-2022-0323","DOIUrl":"https://doi.org/10.1515/jisys-2022-0323","url":null,"abstract":"Abstract In wireless communication technology, wireless sensor networks usually need to collect and process information in very harsh environment. Therefore, accurate positioning of sensors becomes the key to wireless communication technology. In this study, Davidon–Fletcher–Powell (DFP) algorithm was combined with particle swarm optimization (PSO) to reduce the influence of distance estimation error on positioning accuracy by using the characteristics of PSO iterative optimization. From the experimental results, among the average precision (AP) values of DFP, PSO, and PSO-DFP algorithms, the AP value of PSO-DFP was 0.9972. In the analysis of node positioning error, the maximum node positioning error of PSO-DFP was only about 21 mm. The results showed that the PSO-DFP algorithm had better performance, and the average positioning error of the algorithm was inversely proportional to the proportion of anchor nodes, node communication radius, and node density. In conclusion, the wireless sensor node location algorithm combined with PSO-DFP has a better location effect and higher stability than the traditional location algorithm.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135649924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent medical IoT health monitoring system based on VR and wearable devices
Yufei Wang, Xiaofeng An, Weiwei Xu
DOI: 10.1515/jisys-2022-0291
Abstract To address the shortcomings of traditional monitoring equipment, which makes it difficult to measure the daily physical parameters of the elderly, and to improve the accuracy of parameter measurement, this article designs wearable devices based on Internet of Things and virtual reality technology. The device measures four daily physical parameters of the elderly: exercise heart rate, blood pressure, plantar health, and sleep function. The feasibility of the measurement method and equipment is verified experimentally. The results showed that the accuracy of the measurement method based on the reflective photoplethysmography signal was high, with the mean difference between the subjects' estimated and reference heart rates lying around 0 BPM and the two in good agreement. In the blood pressure measurements, the correlation coefficient between the $P_{rs}$ estimate and the reference value was 0.81. The estimation accuracy of the device was high, with a correlation coefficient of up to 0.96 ± 0.02 for subjects' heart rate at rest and an estimation error rate of 0.02 ± 0.01. The $P_{nth}$ value for subject B8 exceeded the threshold of 0.5 before that of subject B21, and subject B8 had more severe symptoms, which was consistent with the actual situation. The wearable device was able to identify the subjects' eye features and provide appropriate videos to help subjects with poor sleep quality fall asleep. The article provides a method and device that lets healthcare professionals make real-time enquiries and give users health advice.
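As an illustration of heart rate estimation from a reflective photoplethysmography (PPG) signal, the sketch below detects pulse peaks in a synthetic waveform and converts inter-beat intervals to BPM; the sampling rate, peak-detection thresholds, and waveform are assumptions, not the device's actual processing pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                                   # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
# Synthetic PPG: a 72 BPM pulse wave plus baseline wander and noise.
ppg = (np.sin(2 * np.pi * 1.2 * t) ** 3
       + 0.2 * np.sin(2 * np.pi * 0.1 * t)
       + 0.05 * np.random.randn(t.size))

def heart_rate_bpm(signal, fs):
    # Peaks at least 0.4 s apart (below 150 BPM) and above mid-amplitude.
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs), height=0.3)
    intervals = np.diff(peaks) / fs        # inter-beat intervals in seconds
    return 60.0 / intervals.mean()

print(f"estimated HR: {heart_rate_bpm(ppg, fs):.1f} BPM")  # ~72 expected
```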
{"title":"Intelligent medical IoT health monitoring system based on VR and wearable devices","authors":"Yufei Wang, Xiaofeng An, Weiwei Xu","doi":"10.1515/jisys-2022-0291","DOIUrl":"https://doi.org/10.1515/jisys-2022-0291","url":null,"abstract":"Abstract In order to improve the shortcomings of the traditional monitoring equipment that is difficult to measure the daily physical parameters of the elderly and improve the accuracy of parameter measurement, this article designs wearable devices through the Internet of Things technology and virtual reality technology. With this device, four daily physical parameters of the elderly, such as exercise heart rate, blood pressure, plantar health, and sleep function, are measured. The feasibility of the measurement method and equipment is verified by experiments. The experimental results showed that the accuracy of the measurement method based on the reflective photoplethysmography signal was high, with the mean and difference values of the subjects’ heart rate basically lying around 0 BPM and in good agreement between the estimated heart rate and the reference value. In the blood pressure measurements, the correlation coefficient between the <m:math xmlns:m=\"http://www.w3.org/1998/Math/MathML\"> <m:msub> <m:mrow> <m:mi>P</m:mi> </m:mrow> <m:mrow> <m:mo>r</m:mo> <m:mo>s</m:mo> </m:mrow> </m:msub> </m:math> {P}_{rs} estimate and the reference value was 0.81. The estimation accuracy of the device used in the article was high, with the highest correlation coefficient of 0.96 ± 0.02 for subjects’ heart rate at rest, and its estimation error rate was 0.02 ± 0.01. The <m:math xmlns:m=\"http://www.w3.org/1998/Math/MathML\"> <m:msub> <m:mrow> <m:mi>P</m:mi> </m:mrow> <m:mrow> <m:mi mathvariant=\"italic\">n</m:mi> <m:mi>t</m:mi> <m:mi>h</m:mi> </m:mrow> </m:msub> </m:math> {P}_{{n}th} value for subject B8 exceeded the threshold of 0.5 before subject B21, and subject B8 had more severe symptoms, which was consistent with the actual situation. The wearable device was able to identify the subject’s eye features and provide appropriate videos to help subjects with poor sleep quality to fall asleep. The article provides a method and device that facilitates healthcare professionals to make real-time enquiries and receive user health advice.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135952787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a digital employee rating evaluation system (DERES) based on machine learning algorithms and 360-degree method
Gulnar Balakayeva, Mukhit Zhanuzakov, Gaukhar Kalmenova
DOI: 10.1515/jisys-2023-0008
Abstract Increasing the efficiency of an enterprise largely depends on the productivity of its employees, so correctly assessing the contribution of each employee is important. This article is devoted to the authors' development of a digital employee rating evaluation system (DERES), built on machine learning and modern assessment methods, that allows companies to evaluate the performance of their departments, analyze employee competencies, and predict future employee ratings. The authors developed a 360-degree employee rating model and a rating prediction model based on regression machine learning algorithms. The article also analyzes results obtained with the employee evaluation model, which showed that the performance of the tested employees decreased due to remote work. Using DERES, a rating analysis of a real company was carried out, with recommendations for improving employee efficiency. Analysis of the forecasting results obtained with the authors' rating prediction model showed that personal development and relationships are the key parameters for predicting future employee ratings. In addition, the authors provide a detailed description of the developed DERES information system, its main components, and its architecture.
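The two-step idea (aggregating 360-degree scores into a rating, then fitting a regressor to predict the next-period rating) might be sketched as follows. The rater weights, feature names, synthetic data, and choice of gradient boosting are all illustrative assumptions; the article does not specify which regression algorithm it uses.

```python
# Sketch: weighted 360-degree aggregation, then a regressor for the future
# rating. All data, weights, and feature names here are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500
# Hypothetical 360-degree inputs: self, peer, manager, subordinate scores (1-5).
scores = rng.uniform(1, 5, (n, 4))
weights = np.array([0.15, 0.30, 0.40, 0.15])     # assumed rater weights
rating = scores @ weights

# Hypothetical drivers of the future rating, echoing the paper's finding that
# personal development and relationships matter most.
personal_dev = rng.uniform(0, 1, n)
relationship = rng.uniform(0, 1, n)
future = (0.5 * rating + 1.5 * personal_dev + 1.2 * relationship
          + rng.normal(0, 0.1, n))

X = np.column_stack([rating, personal_dev, relationship])
X_tr, X_te, y_tr, y_te = train_test_split(X, future, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2:", model.score(X_te, y_te))
print("feature importances:", model.feature_importances_)
```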
{"title":"Development of a digital employee rating evaluation system (DERES) based on machine learning algorithms and 360-degree method","authors":"Gulnar Balakayeva, Mukhit Zhanuzakov, Gaukhar Kalmenova","doi":"10.1515/jisys-2023-0008","DOIUrl":"https://doi.org/10.1515/jisys-2023-0008","url":null,"abstract":"Abstract Increasing the efficiency of an enterprise largely depends on the productivity of its employees, which must be properly assessed and the correct assessment of the contribution of each employee is important. In this regard, this article is devoted to a study conducted by the authors on the development of a digital employee rating system (DERES). The study was conducted on the basis of machine learning technologies and modern assessment methods that will allow companies to evaluate the performance of their departments, analyze the competencies of the employees and predict the rating of employees in the future. The authors developed a 360-degree employee rating model and a rating prediction model using regression machine learning algorithms. The article also analyzed the results obtained using the employee evaluation model, which showed that the performance of the tested employees is reduced due to remote work. Using DERES, a rating analysis of a real business company was carried out with recommendations for improving the efficiency of employees. An analysis of the forecasting results obtained using the rating prediction model developed by the authors showed that personal development and relationship are key parameters in predicting the future rating of employees. In addition, the authors provide a detailed description of the developed DERES information system, main components, and architecture.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135953974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart robots' virus defense using data mining technology
Jiao Ye, Hemant N. Patel, Sankaranamasivayam Meena, Renato R. Maaliw, Samuel-Soma M. Ajibade, Ismail Keshta
DOI: 10.1515/jisys-2023-0065
Abstract To realize online detection and control of network viruses in robots, the authors propose a data mining-based anti-virus solution for smart robots. First, using an Internet of Things (IoT) intrusion prevention system design method based on network intrusion signal detection and feedforward modulation filtering, the overall design is described and its functions are analyzed. The intrusion signal detection algorithm is then designed, and finally the hardware design and software development for the IoT breach protection solution are completed, realizing the integrated design of the system. The findings demonstrate that, based on the mean value of 10,000 tests, the average packet loss rate of the IoT system is 0. In conclusion, the system has high accuracy, good performance, and strong compatibility and usability.
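As a loose sketch of the detection idea named in the abstract (signal filtering followed by detection), the code below smooths a noisy traffic feature with a simple feedforward FIR filter and flags windows whose energy exceeds a threshold; the signal, filter length, and threshold are synthetic assumptions, not the paper's detector.

```python
# Minimal filter-then-threshold detection sketch; all values are synthetic.
import numpy as np

rng = np.random.default_rng(3)
traffic = rng.normal(0, 1, 2000)          # baseline traffic feature
traffic[800:900] += 6                     # injected burst standing in for an attack

def detect(signal, win=50, k=3.0):
    kernel = np.ones(win) / win           # simple feedforward (FIR) smoothing
    smooth = np.convolve(np.abs(signal), kernel, mode="same")
    thresh = smooth.mean() + k * smooth.std()
    return np.flatnonzero(smooth > thresh)

hits = detect(traffic)
print("flagged samples:", hits.min(), "to", hits.max())  # roughly the burst
```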
{"title":"Smart robots’ virus defense using data mining technology","authors":"Jiao Ye, Hemant N. Patel, Sankaranamasivayam Meena, Renato R. Maaliw, Samuel-Soma M. Ajibade, Ismail Keshta","doi":"10.1515/jisys-2023-0065","DOIUrl":"https://doi.org/10.1515/jisys-2023-0065","url":null,"abstract":"Abstract In order to realize online detection and control of network viruses in robots, the authors propose a data mining-based anti-virus solution for smart robots. First, using internet of things (IoT) intrusion prevention system design method based on network intrusion signal detection and feedforward modulation filtering design, the overall design description and function analysis are carried out, and then the intrusion signal detection algorithm is designed, and finally, the hardware design and software development for a breach protection solution for the IoT are completed, and the integrated design of the system is realized. The findings demonstrated that based on the mean value of 10,000 tests, the IoT’s average packet loss rate is 0. Conclusion: This system has high accuracy, good performance, and strong compatibility and friendliness.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135650263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion vector steganography algorithm of sports training video integrating with artificial bee colony algorithm and human-centered AI for web applications
Jinmao Tong, Zhongwang Cao, Wenjiang J. Fu
DOI: 10.1515/jisys-2022-0093
Abstract Steganography schemes are commonly applied in multimedia communication. To reduce storage requirements, multimedia files, including images, are usually compressed, so most video steganography schemes are not compression tolerant. Across frame sequences, video offers extra hiding space. Artificial intelligence (AI) creates a digital world of real-time information for athletes, sponsors, and broadcasters; AI is reshaping business, and although it has already had a significant impact on other sectors, the sports industry is among the newest and most receptive. Human-centered AI for web applications has substantially influenced audience participation, strategic plan execution, and other aspects of the sports industry that have traditionally relied heavily on statistics. This study therefore presents a motion vector steganography scheme for sports training video integrated with the artificial bee colony algorithm (MVS-ABC). The motion vector steganography hides and detects information in the motion vectors of sports training video bitstreams, while the artificial bee colony (ABC) algorithm optimizes the block assignment for embedding a hidden message into a host video, treating block assignment as a combinatorial optimization problem. The experimental analysis evaluates the data embedding performance against existing embedding technologies and compares the ABC algorithm with other genetic algorithms. The findings show that the proposed model achieves the highest embedding capacity and the lowest error rate of video steganography compared with existing models.
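A simplified sketch of the ABC-optimized block assignment: select which motion-vector blocks carry the payload while minimizing an assumed per-block distortion cost. The employed and onlooker phases are merged and the probabilistic onlooker selection is dropped for brevity, so this is a stand-in for the full ABC algorithm, and the costs and payload size are synthetic.

```python
# Simplified artificial-bee-colony-style loop for the combinatorial block
# assignment the abstract describes. This optimizes which blocks carry the
# payload; it does not perform any actual video embedding.
import numpy as np

rng = np.random.default_rng(5)
n_blocks, payload = 64, 16
distortion = rng.uniform(0.1, 1.0, n_blocks)   # assumed per-block cost

def fitness(sol):
    # Penalize deviation from the payload size, then total distortion.
    return abs(sol.sum() - payload) * 10 + distortion[sol.astype(bool)].sum()

def neighbor(sol):
    s = sol.copy()
    i = rng.integers(n_blocks)
    s[i] = 1 - s[i]                            # flip one block in/out
    return s

colony = rng.integers(0, 2, (20, n_blocks))
trials = np.zeros(20)
for _ in range(300):
    for i in range(len(colony)):               # employed + onlooker phases (merged)
        cand = neighbor(colony[i])
        if fitness(cand) < fitness(colony[i]):
            colony[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if trials[i] > 30:                     # scout phase: abandon stale source
            colony[i] = rng.integers(0, 2, n_blocks)
            trials[i] = 0

best = min(colony, key=fitness)
print("blocks used:", int(best.sum()), "cost:", round(fitness(best), 3))
```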
{"title":"Motion vector steganography algorithm of sports training video integrating with artificial bee colony algorithm and human-centered AI for web applications","authors":"Jinmao Tong, Zhongwang Cao, Wenjiang J. Fu","doi":"10.1515/jisys-2022-0093","DOIUrl":"https://doi.org/10.1515/jisys-2022-0093","url":null,"abstract":"Abstract In multimedia correspondence, steganography schemes are commonly applied. To reduce storage capacity, multimedia files, including images, are always compressed. Most steganographic video schemes are, therefore, not compression tolerant. In the frame sequences, the video includes extra hidden space. Artificial intelligence (AI) creates a digital world of real-time information for athletes, sponsors, and broadcasters. AI is reshaping business, and although it has already produced a significant impact on other sectors, the sports industry is the newest and most receptive one. Human-centered AI for web applications has substantially influenced audience participation, strategic plan execution, and other aspects of the sports industry that have traditionally relied heavily on statistics. Thus, this study presents the motion vector steganography of sports training video integrating with the artificial bee colony algorithm (MVS-ABC). The motion vector stenography detects the hidden information from the motion vectors in the sports training video bitstreams. Artificial bee colony (ABC) algorithm optimizes the block assignment to inject a hidden message into a host video, in which the block assignment is considered a combinatorial optimization problem. The experimental analysis evaluates the data embedding performance using steganographic technology compared with existing embedding technologies, using the ABC algorithm compared with other genetic algorithms. The findings show that the proposed model can give the highest performance in terms of embedding capacity and the least error rate of video steganography compared with the existing models.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77697577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lattice-transformer-graph deep learning model for Chinese named entity recognition
Min Lin, Yanyan Xu, Chenghao Cai, Dengfeng Ke, Kaile Su
DOI: 10.1515/jisys-2022-2014
Abstract Named entity recognition (NER) is the localization and classification of entities with specific meanings in text data, usually used in applications such as relation extraction and question answering. Chinese takes the character as its basic unit, but a Chinese named entity is normally a word comprising several characters, so both the relationships between words and those between characters play an important role in Chinese NER. A large number of studies have demonstrated that suitable word information can effectively improve deep learning models for Chinese NER, and graph convolution can further help such models with sequence labeling. Therefore, in this article, we combine word information and graph convolution and propose the Lattice-Transformer-Graph (LTG) deep learning model for Chinese NER. The proposed model pays more attention to additional word information through position attention and can therefore learn relationships between characters by using the lattice transformer. Moreover, the adapted graph convolutional layer enables the model to learn both richer character relationships and word relationships, and hence helps recognize Chinese named entities better. Our experiments show that, compared with 12 other state-of-the-art models, LTG achieves the best results on the public datasets of Microsoft Research Asia, Resume, and WeiboNER, with F1 scores of 95.89%, 96.81%, and 72.32%, respectively.
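The adapted graph convolutional layer can be illustrated with a generic GCN layer of the standard form $H' = \mathrm{ReLU}(\hat{A}HW)$ applied to character nodes whose edges come from a word lattice; this is a textbook sketch in PyTorch, not the authors' LTG architecture, and the toy adjacency is made up for illustration.

```python
# Minimal graph-convolution layer: symmetrically normalized adjacency with
# self-loops, then a linear map and ReLU. A generic GCN sketch, not LTG.
import torch

class GraphConv(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # Normalize the adjacency (with self-loops) and propagate features.
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1).rsqrt()
        a_hat = d.unsqueeze(1) * a * d.unsqueeze(0)
        return torch.relu(self.linear(a_hat @ h))

# Toy example: 5 characters; an edge links characters inside the same
# lattice word (this adjacency is hypothetical).
h = torch.randn(5, 16)                       # character embeddings
adj = torch.zeros(5, 5)
adj[0, 1] = adj[1, 0] = 1.0                  # chars 0-1 form a lattice word
adj[2, 3] = adj[3, 2] = adj[3, 4] = adj[4, 3] = 1.0  # chars 2-4 form another
layer = GraphConv(16, 32)
print(layer(h, adj).shape)                   # torch.Size([5, 32])
```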
{"title":"A lattice-transformer-graph deep learning model for Chinese named entity recognition","authors":"Min Lin, Yanyan Xu, Chenghao Cai, Dengfeng Ke, Kaile Su","doi":"10.1515/jisys-2022-2014","DOIUrl":"https://doi.org/10.1515/jisys-2022-2014","url":null,"abstract":"Abstract Named entity recognition (NER) is the localization and classification of entities with specific meanings in text data, usually used for applications such as relation extraction, question answering, etc. Chinese is a language with Chinese characters as the basic unit, but a Chinese named entity is normally a word containing several characters, so both the relationships between words and those between characters play an important role in Chinese NER. At present, a large number of studies have demonstrated that reasonable word information can effectively improve deep learning models for Chinese NER. Besides, graph convolution can help deep learning models perform better for sequence labeling. Therefore, in this article, we combine word information and graph convolution and propose our Lattice-Transformer-Graph (LTG) deep learning model for Chinese NER. The proposed model pays more attention to additional word information through position-attention, and therefore can learn relationships between characters by using lattice-transformer. Moreover, the adapted graph convolutional layer enables the model to learn both richer character relationships and word relationships and hence helps to recognize Chinese named entities better. Our experiments show that compared with 12 other state-of-the-art models, LTG achieves the best results on the public datasets of Microsoft Research Asia, Resume, and WeiboNER, with the F1 score of 95.89%, 96.81%, and 72.32%, respectively.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88834255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}