Abstract To address the limitations of traditional monitoring equipment, which struggles to measure the daily physical parameters of the elderly, and to improve the accuracy of parameter measurement, this article designs a wearable device using Internet of Things and virtual reality technology. The device measures four daily physical parameters of the elderly: exercise heart rate, blood pressure, plantar health, and sleep function. The feasibility of the measurement method and equipment is verified experimentally. The results showed that the measurement method based on the reflective photoplethysmography signal was highly accurate, with the mean difference between the subjects' estimated and reference heart rates lying around 0 BPM and good agreement between the estimated heart rate and the reference value. In the blood pressure measurements, the correlation coefficient between the P_rs estimate and the reference value was 0.81. The estimation accuracy of the device was high, with the highest correlation coefficient of 0.96 ± 0.02 for subjects' heart rate at rest and an estimation error rate of 0.02 ± 0.01. The P_nth value for subject B8 exceeded the threshold of 0.5 before that of subject B21, and subject B8 had more severe symptoms, consistent with the actual situation. The wearable device was able to identify the subjects' eye features and play suitable videos to help subjects with poor sleep quality fall asleep. The article provides a method and device that allow healthcare professionals to make real-time enquiries and users to receive health advice.
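As a rough illustration of heart-rate estimation from a reflective photoplethysmography (PPG) signal, the dominant frequency within the physiological band of a short window can be converted to BPM. This is a generic frequency-domain sketch, not the authors' implementation; the synthetic signal, sampling rate, and band limits are all assumptions.

```python
import math

def estimate_heart_rate(signal, fs, f_min=0.7, f_max=3.0):
    """Estimate heart rate (BPM) as the dominant frequency of a PPG
    segment within the physiological band [f_min, f_max] Hz."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_f, best_power = 0.0, -1.0
    # Naive DFT scan over candidate frequencies on a 0.01 Hz grid.
    f = f_min
    while f <= f_max:
        re = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_f, best_power = f, power
        f += 0.01
    return best_f * 60.0  # Hz -> beats per minute

# Synthetic 8-second PPG-like signal at 50 Hz: a 1.2 Hz (72 BPM) pulse
# plus slow baseline drift outside the search band.
fs = 50
sig = [math.sin(2 * math.pi * 1.2 * i / fs) + 0.3 * math.sin(2 * math.pi * 0.25 * i / fs)
       for i in range(8 * fs)]
print(round(estimate_heart_rate(sig, fs)))
```

In practice the band limits bound the search to plausible heart rates and keep motion artifacts and baseline drift from winning the peak search.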
"Intelligent medical IoT health monitoring system based on VR and wearable devices" by Yufei Wang, Xiaofeng An, Weiwei Xu. Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0291 (https://doi.org/10.1515/jisys-2022-0291).
Abstract Enterprise efficiency largely depends on employee productivity, so the contribution of each employee must be assessed properly. To that end, this article describes the authors' development of a digital employee rating evaluation system (DERES). The study was based on machine learning technologies and modern assessment methods that allow companies to evaluate the performance of their departments, analyze employee competencies, and predict employees' future ratings. The authors developed a 360-degree employee rating model and a rating prediction model using regression machine learning algorithms. Analysis of the results obtained with the employee evaluation model showed that the performance of the tested employees declines under remote work. Using DERES, a rating analysis of a real business company was carried out, with recommendations for improving employee efficiency. Analysis of the forecasts produced by the authors' rating prediction model showed that personal development and relationships are the key parameters in predicting employees' future ratings. In addition, the authors provide a detailed description of the developed DERES information system, its main components, and its architecture.
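A 360-degree rating of the kind described can be sketched as a weighted aggregation of scores from several rater groups. The groups, weights, competency scores, and 0-5 scale below are invented for illustration and are not DERES's actual scheme.

```python
# Hypothetical rater-group weights for a 360-degree evaluation.
WEIGHTS = {"self": 0.10, "peer": 0.25, "subordinate": 0.25, "manager": 0.40}

def rating_360(scores):
    """Aggregate per-group competency scores (0-5 scale) into one rating:
    mean score within each group, then a weighted sum across groups."""
    total = 0.0
    for source, vals in scores.items():
        total += WEIGHTS[source] * (sum(vals) / len(vals))
    return round(total, 2)

employee = {
    "self": [4.5, 4.0],
    "peer": [4.0, 3.5],
    "subordinate": [4.2, 4.4],
    "manager": [3.8, 4.0],
}
print(rating_360(employee))  # 4.0
```

A prediction model would then regress future ratings on features such as these aggregated scores; the paper reports that personal development and relationships carry the most predictive weight.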
"Development of a digital employee rating evaluation system (DERES) based on machine learning algorithms and 360-degree method" by Gulnar Balakayeva, Mukhit Zhanuzakov, Gaukhar Kalmenova. Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2023-0008 (https://doi.org/10.1515/jisys-2023-0008).
Abstract Steganography schemes are commonly applied in multimedia correspondence. To reduce storage requirements, multimedia files, including images, are usually compressed, so most video steganography schemes are not compression tolerant. Within frame sequences, video offers extra space for hiding data. Artificial intelligence (AI) creates a digital world of real-time information for athletes, sponsors, and broadcasters. AI is reshaping business, and although it has already had a significant impact on other sectors, the sports industry is the newest and most receptive one. Human-centered AI for web applications has substantially influenced audience participation, strategic plan execution, and other aspects of the sports industry that have traditionally relied heavily on statistics. This study therefore presents a motion vector steganography method for sports training video integrated with the artificial bee colony algorithm (MVS-ABC). The motion vector steganography detects the hidden information from the motion vectors in the sports training video bitstreams. The artificial bee colony (ABC) algorithm optimizes the block assignment for injecting a hidden message into a host video, where block assignment is treated as a combinatorial optimization problem. The experimental analysis evaluates the data embedding performance of the proposed steganographic technology against existing embedding technologies, and of the ABC algorithm against genetic algorithms. The findings show that the proposed model achieves the highest embedding capacity and the lowest error rate of video steganography compared with existing models.
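The optimization role the ABC algorithm plays here can be illustrated with a minimal continuous artificial bee colony optimizer showing the employed/scout structure. This is a generic sketch on a stand-in objective (the sphere function), not the paper's block-assignment formulation; all parameters are arbitrary.

```python
import random

def abc_minimize(f, dim, bounds, n_bees=20, limit=10, iters=100, seed=1):
    """Minimal artificial bee colony: each food source is perturbed toward a
    random partner source (employed/onlooker move); sources that stall for
    `limit` trials are abandoned and re-scouted at a random position.
    Greedy selection keeps only improvements."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bees)]
    fits = [f(x) for x in foods]
    trials = [0] * n_bees
    best_x, best_f = min(zip(foods, fits), key=lambda p: p[1])
    for _ in range(iters):
        for i in range(n_bees):
            k = rng.randrange(n_bees)          # random partner source
            j = rng.randrange(dim)             # one dimension to perturb
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            cand[j] = max(lo, min(hi, cand[j]))
            cf = f(cand)
            if cf < fits[i]:
                foods[i], fits[i], trials[i] = cand, cf, 0
                if cf < best_f:
                    best_x, best_f = cand[:], cf
            else:
                trials[i] += 1
                if trials[i] > limit:          # scout phase: abandon the source
                    foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                    fits[i] = f(foods[i])
                    trials[i] = 0
    return best_x, best_f

# Minimize the sphere function as a stand-in objective.
sol, val = abc_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
print(val)
```

For the paper's discrete block-assignment problem, the candidate move would instead swap or reassign blocks, with the fitness measuring embedding distortion and capacity.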
"Motion vector steganography algorithm of sports training video integrating with artificial bee colony algorithm and human-centered AI for web applications" by Jinmao Tong, Zhongwang Cao, Wenjiang J. Fu. Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0093 (https://doi.org/10.1515/jisys-2022-0093).
Abstract Image processing is crucial in magnetic resonance imaging (MRI). In the medical industry, MRI images are commonly used to analyze and diagnose tumor growth in the body. Various experts have developed a number of successful brain tumor identification and classification procedures, but existing approaches face obstacles involving detection time, accuracy, and tumor size. Early detection of brain tumors improves treatment options and patient survival rates. Manually segmenting brain tumors from large volumes of MRI data for diagnosis is a tough and time-consuming task, so automatic segmentation of brain tumor images is required. The objective of this study is to evaluate the degree of accuracy and simplify the medical image segmentation procedure used to identify the type of brain tumor from MRI results. Additionally, this work proposes a novel method for identifying brain malignancies using a Bagging Ensemble with K-Nearest Neighbor (BKNN) to raise the accuracy and quality rate of k-NN. A U-Net architecture is used first for image segmentation, followed by a bagging-based k-NN prediction algorithm for classification. The goal of employing U-Net is to improve the accuracy and uniformity of parameter distribution across the layers. During classification, each ensemble member is fitted on a slightly different training dataset; the bagged models are effective because each differs slightly from the others and therefore generates slightly different predictions. The overall classification accuracy reached 97.7%, higher than that of existing methods, confirming the efficiency of the suggested strategy for distinguishing normal and pathological tissues in brain MR images.
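The bagging-plus-k-NN classification step can be sketched from scratch: each ensemble member votes using a bootstrap resample of the training data, and the ensemble takes the majority. The toy 2-D feature vectors and labels below are invented; the actual pipeline classifies features derived from U-Net-segmented MRI, not raw coordinates.

```python
import random
from collections import Counter

def knn_predict(train, x, k=3):
    """Plain k-NN majority vote over (features, label) pairs."""
    neigh = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    return Counter(lbl for _, lbl in neigh).most_common(1)[0][0]

def bagged_knn_predict(train, x, n_models=5, k=3, seed=0):
    """Bagging ensemble: each k-NN votes on a bootstrap resample of the
    training set; the final label is the majority of the model votes."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in range(len(train))]
        votes.append(knn_predict(sample, x, k))
    return Counter(votes).most_common(1)[0][0]

# Two toy 2-D clusters standing in for "tumor" / "normal" feature vectors.
data = [((0.0, 0.1), "normal"), ((0.2, 0.0), "normal"), ((0.1, 0.2), "normal"),
        ((3.0, 3.1), "tumor"), ((2.9, 3.0), "tumor"), ((3.2, 2.8), "tumor")]
print(bagged_knn_predict(data, (3.0, 3.0)))
```

Because each resample differs slightly, the members make slightly different errors, and the majority vote smooths out the variance of any single k-NN, which is the effect the abstract attributes to bagging.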
"A novel deep learning-based brain tumor detection using the Bagging ensemble with K-nearest neighbor" by K. Archana, G. Komarasamy. Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0206 (https://doi.org/10.1515/jisys-2022-0206).
Abstract Environmental landscaping builds, plans, and manages landscapes in a way that considers the ecology of a site and produces gardens that benefit both people and the rest of the ecosystem. Landscape design planning combines landscaping and the environment to provide holistic answers to complex issues. Seeding native species and eradicating alien species are just two of the ways humans influence a region's ecosystem. Landscape architecture is the design and modification of landscapes, urban areas, and gardens. It comprises the construction of urban and rural landscapes by coordinating the creation and management of open spaces, while working within economic constraints and a confined project budget. Global warming and water shortages are widely discussed, yet there is much hope to be found even in the face of seemingly insurmountable obstacles. With the advent of Web 4.0 and human-centered computing, AI is becoming more significant in many elements of urban landscape planning and design. This work creates a virtual reality-based landscape environment for building deep neural networks (DNNs), making deep learning (DL) more user-friendly and efficient: users simply manipulate physical items in this environment to construct neural networks manually. These setups are automatically converted into a model, the real-time testing results are reported, and users remain aware of the DNN models they are producing. This research presents a novel strategy for combining DL-DNNs with landscape architecture, providing a long-term solution to the problem of environmental pollution. Carbon dioxide levels are constantly monitored when green plants are in and around the house; plants, in turn, remove toxins from the air, making it easier to maintain a healthy environment. Human-centered AI based on Web 4.0 may be used to assess and evaluate the data model, and the study findings can be fed back into the design process for further modification and optimization.
"Environmental landscape design and planning system based on computer vision and deep learning" by Xiubo Chen. Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0092 (https://doi.org/10.1515/jisys-2022-0092).
Min Lin, Yanyan Xu, Chenghao Cai, Dengfeng Ke, Kaile Su
Abstract Named entity recognition (NER) is the localization and classification of entities with specific meanings in text data, typically used in applications such as relation extraction and question answering. Chinese takes the character as its basic unit, but a Chinese named entity is normally a word containing several characters, so both the relationships between words and those between characters play an important role in Chinese NER. A large number of studies have demonstrated that well-chosen word information can effectively improve deep learning models for Chinese NER. In addition, graph convolution can help deep learning models perform better on sequence labeling. Therefore, in this article, we combine word information and graph convolution and propose a Lattice-Transformer-Graph (LTG) deep learning model for Chinese NER. The proposed model attends to additional word information through position attention, and can therefore learn relationships between characters using the lattice transformer. Moreover, the adapted graph convolutional layer enables the model to learn both richer character relationships and word relationships, helping it recognize Chinese named entities better. Our experiments show that, compared with 12 other state-of-the-art models, LTG achieves the best results on the public Microsoft Research Asia, Resume, and WeiboNER datasets, with F1 scores of 95.89%, 96.81%, and 72.32%, respectively.
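The character/word interaction that graph convolution contributes can be illustrated with one aggregation pass over a character graph whose extra edges come from a word lattice. The embeddings, the 3-character sentence, and the single word edge below are invented; the real LTG layer uses learned weights, not a plain average.

```python
def graph_conv(features, edges):
    """One unweighted graph-convolution pass: each node's new feature is the
    mean over itself and its neighbors, h_i' = mean over {i} ∪ N(i)."""
    n = len(features)
    dim = len(features[0])
    neighbors = {i: {i} for i in range(n)}      # self-loops
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    out = []
    for i in range(n):
        group = neighbors[i]
        out.append([sum(features[j][d] for j in group) / len(group)
                    for d in range(dim)])
    return out

chars = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # per-character embeddings
word_edges = [(0, 1)]                            # chars 0-1 form one lexicon word
print(graph_conv(chars, word_edges))
```

After the pass, characters 0 and 1 share information through their word edge while character 2 keeps its own feature, which is the mechanism by which lattice words enrich character representations.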
"A lattice-transformer-graph deep learning model for Chinese named entity recognition" by Min Lin, Yanyan Xu, Chenghao Cai, Dengfeng Ke, Kaile Su. Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-2014 (https://doi.org/10.1515/jisys-2022-2014).
Abstract Privacy is the main concern in cyberspace because every single click a user makes on the Internet is recorded and analyzed for different purposes: credit card purchase records, healthcare records, business, personalized shopping experiences, marketing strategy, and so on. Processing the user's personal information is therefore a risky activity. Although data mining applications focus on statistically useful patterns rather than the personal data of individuals, there is a threat of unrestricted access to individual records. It is also necessary to maintain the secrecy of data while retaining the accuracy and quality of data classification, and for real-time applications the data analytics must be time efficient. The proposed Convolution-based Privacy Preserving Algorithm (C-PPA) transforms the input into lower dimensions while preserving privacy, which leads to better mining accuracy. The proposed algorithm is evaluated on metrics such as accuracy, precision, recall, and F1-measure. Simulations show that the average increment in accuracy with C-PPA is 14.15 for a Convolutional Neural Network (CNN) classifier compared with results without C-PPA. An overlap-add variant of C-PPA, based on overlap-add convolution, is proposed for parallel processing; it shows an average accuracy increment of 12.49 for CNN. The analysis shows that the algorithm benefits privacy preservation, data utility, and performance. Since the algorithm lowers the dimensionality of the data, the communication cost over the Internet is also reduced.
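The overlap-add convolution underlying the parallel variant can be sketched in a few lines: the input is split into fixed-size blocks, each block is convolved independently (the parallelizable part), and the overlapping tails are summed. The signals and block size are arbitrary; this shows the generic scheme, not C-PPA's privacy transform itself.

```python
def convolve(x, h):
    """Direct linear convolution, used as the reference result."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def overlap_add(x, h, block=4):
    """Overlap-add: convolve each block of x with h independently, then add
    each partial result into the output at its block offset. The per-block
    convolutions are independent, so they can run in parallel."""
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = convolve(x[start:start + block], h)
        for k, v in enumerate(seg):
            y[start + k] += v
    return y

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
h = [0.5, 0.25]
print(overlap_add(x, h) == convolve(x, h))  # True
```

Because each block result is len(block) + len(h) - 1 long, consecutive blocks overlap by len(h) - 1 samples; summing those overlaps reproduces the direct convolution exactly.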
"Data analysis with performance and privacy enhanced classification" by R. Tajanpure, A. Muddana. Journal of Intelligent Systems, 2023. DOI: 10.1515/jisys-2022-0215 (https://doi.org/10.1515/jisys-2022-0215).
Abstract Real-time object detection is an integral part of Internet of Things (IoT) applications and an important research field of computer vision. Existing lightweight algorithms cannot handle target occlusions well in detection tasks in narrow indoor scenes, resulting in many missed detections and misclassifications. To this end, an accurate real-time multi-scale detection method is proposed that integrates the density-based spatial clustering of applications with noise (DBSCAN) algorithm and an improved You Only Look Once (YOLO)v4-tiny network. First, the neck network of the YOLOv4-tiny model is improved so that the detailed information of the shallow network boosts the model's average precision on dense small objects, and the Cross mini-Batch Normalization strategy is adopted to improve the accuracy of statistical information. Second, the DBSCAN clustering algorithm is fused with the modified network to achieve better clustering effects. Finally, the Mosaic data enrichment technique is adopted during model training to improve the model's ability to recognize occluded targets. Experimental results show that, compared to the original YOLOv4-tiny algorithm, the mAP values of the improved algorithm on the self-constructed dataset are significantly improved, and the processing speed meets the requirements of real-time applications on embedded devices. The proposed model also outperforms other advanced lightweight algorithms on the public PASCAL VOC07 and PASCAL VOC12 datasets, with significantly improved detection of occluded objects, meeting the requirements of mobile terminals for real-time detection in crowded indoor environments.
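The DBSCAN component can be sketched in its textbook form: points with at least min_pts neighbors within eps become cluster cores, clusters grow through core points, and everything unreachable is labeled noise. The toy 2-D points are invented; the paper fuses DBSCAN with network outputs, not raw coordinates.

```python
def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN returning one label per point (-1 = noise)."""
    def neighbors(i):
        return [j for j in range(len(points))
                if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= eps * eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                   # provisionally noise
            continue
        cluster += 1                         # i is a core point: new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster          # noise reclaimed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pts:           # j is also core: expand from it
                queue.extend(nj)
    return labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.5, 5), (5, 5.5), (10, 0)]
print(dbscan(pts))  # [0, 0, 0, 1, 1, 1, -1]
```

The eps and min_pts parameters control the density threshold; in a detection pipeline they would be tuned to the spatial scale of box centers or features rather than these toy distances.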
Jianing Shen, Yang Zhou, "Accurate and real-time object detection in crowded indoor spaces based on the fusion of DBSCAN algorithm and improved YOLOv4-tiny network," Journal of Intelligent Systems, 2023. doi:10.1515/jisys-2022-0268
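The abstract does not specify how DBSCAN is fused with the detector, but the clustering step itself can be illustrated. The sketch below is a minimal, self-contained DBSCAN implementation applied to hypothetical 2D detection-box centers: nearby boxes form clusters (e.g., a dense group of occluded objects), while an isolated box is marked as noise. All data values here are illustrative, not from the paper.

```python
from math import hypot

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster id per point, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        # All points within eps of point i (including i itself).
        return [j for j, q in enumerate(points)
                if hypot(points[i][0] - q[0], points[i][1] - q[1]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # provisionally noise; may be reclaimed as border
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                seeds.extend(jn)  # core point: expand the cluster
    return labels

# Hypothetical detection-box centers: two dense groups plus one outlier.
centers = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11), (50, 50)]
labels = dbscan(centers, eps=2.0, min_pts=2)
```

In a fusion scheme of this kind, cluster membership could then inform post-processing of the network's raw detections (for instance, per-cluster non-maximum suppression), though the exact mechanism used by the authors is not described in the abstract.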
Abstract The current technology for foundation pit deformation measurement is inefficient, and its accuracy is not ideal. Therefore, an intelligent prediction model of foundation pit deformation based on a back propagation neural network (BPNN) is proposed to predict foundation pit deformation with high accuracy and efficiency, so as to improve project safety. First, to address the shortcomings of BPNNs, which depend on initial parameter settings and tend to fall into local optima with unstable performance, this study adopts modified particle swarm optimization (MPSO) to optimize the parameters of the BPNN and constructs a pit deformation prediction model based on the MPSO–BP algorithm to achieve predictive measurement of pit deformation. After training and testing on the data samples, the results show that the prediction accuracy of the MPSO–BP pit deformation prediction model is 99.76%, which is 2.25% higher than that of the particle swarm optimization–back propagation (PSO–BP) model and 3.01% higher than that of the BP model. These results show that the MPSO–BP pit deformation prediction model proposed in this study can effectively predict the pit deformation variables of construction projects and provide data support for protective measures by on-site staff, supporting construction projects in China.
Yong Wu, Xiaoli Zhou, "Construction pit deformation measurement technology based on neural network algorithm," Journal of Intelligent Systems, 2023. doi:10.1515/jisys-2022-0292
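The abstract does not detail the particular modifications in MPSO, but the core idea of using particle swarm optimization to find good initial BPNN weights can be sketched. Below is a plain PSO minimizer in pure Python; in an MPSO–BP scheme, the objective would be the network's training error as a function of its flattened weight vector. Here a simple quadratic stand-in objective is used, so the hyperparameters and objective are assumptions, not the authors' settings.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bound=1.0):
    """Plain particle swarm optimization; returns (best position, best value).
    In MPSO-BP, `objective` would be the BPNN's training loss over its
    flattened weights (a hypothetical stand-in is used in the demo below)."""
    random.seed(0)
    pos = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Demo: minimize a simple quadratic (stand-in for a network's training loss).
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

The optimized position would then seed the BP network's weights before gradient-based fine-tuning, which is how PSO–BP hybrids typically avoid poor local optima caused by random initialization.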
Abstract In this study, we propose a fast line-structured light stripe center extraction algorithm based on an improved barycenter algorithm, addressing the problem that conventional stripe center extraction algorithms cannot meet the speed and accuracy requirements of a structured light 3D measurement system. First, the algorithm preprocesses the structured light image and obtains the approximate position of the stripe center through skeleton extraction. Next, the normal direction of each pixel on the skeleton is computed using the gray gradient method. Then, the weighted gray center-of-gravity method is used to solve the stripe center coordinates along the normal direction. Finally, a smooth stripe centerline is fitted using the least squares method. The experimental results show that the improved algorithm achieves a significant improvement in speed, sub-pixel accuracy, and good stripe center extraction; its repeated measurement accuracy is within 0.01 mm, demonstrating good repeatability.
Jun Wang, Jingjing Wu, Xiang Jiao, Yue Ding, "Research on the center extraction algorithm of structured light fringe based on an improved gray gravity center method," Journal of Intelligent Systems, 2023. doi:10.1515/jisys-2022-0195
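The gray center-of-gravity step can be illustrated on a 1-D intensity profile sampled along the normal direction: the sub-pixel center is the intensity-weighted mean of the sample positions. The sketch below shows this basic centroid computation; the paper's full method additionally weights samples and fits the centerline by least squares, details not reproduced here. The example profile values are illustrative.

```python
def gray_centroid(profile):
    """Sub-pixel stripe center of a 1-D intensity profile sampled along
    the stripe's normal direction: intensity-weighted mean of positions."""
    total = sum(profile)
    if total == 0:
        return None  # no stripe energy along this normal
    return sum(i * v for i, v in enumerate(profile)) / total

# A symmetric stripe profile centers exactly on the peak sample...
c1 = gray_centroid([0, 10, 50, 10, 0])   # -> 2.0
# ...while an asymmetric profile shifts the center sub-pixel toward
# the brighter flank, which integer-peak detection cannot resolve.
c2 = gray_centroid([0, 10, 50, 30, 0])
```

Applying this along the gradient-derived normal at each skeleton pixel, rather than along image rows or columns, is what keeps the estimate accurate when the stripe is curved or inclined.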