
Latest Publications in Engineering Applications of Artificial Intelligence

Picking point localization method of table grape picking robot based on you only look once version 8 nano
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-15 | DOI: 10.1016/j.engappai.2025.110266
Yanjun Zhu, Shunshun Sui, Wensheng Du, Xiang Li, Ping Liu
Automatic localization of the picking point for table grapes is the key to achieving intelligent harvesting. To address this problem, the You Only Look Once version 8 nano-Deformable Convolutional Networks-Wise Intersection over Union-Fourth Detection Layer (YOLO v8n-DWF) network was designed for a table grape picking robot to detect table grapes and localize picking points. The Deformable Convolutional Networks (DCN) module was used to enhance the robustness of table grape detection and improve the detection precision of grape stems. To improve the detection precision of grapes and stems and to reduce the impact of low-quality data on the model's generalization ability, Wise-Intersection over Union version 3 (WIoU v3) was applied. In addition, because grape stems are small targets occupying few pixels and are therefore difficult to identify, a fourth detection layer for small-target detection was added to improve the network's ability to recognize stems. Further, a more accurate geometric localization method for picking points was proposed to achieve fast picking of table grapes. The results showed that the detection precision, recall, mean Average Precision50 (mAP50), and mean Average Precision50-95 (mAP50-95) of the YOLO v8n-DWF model were 97.9%, 95.3%, 97.6%, and 85.4%, respectively. In addition, the success rate of the geometric method based on the YOLO v8n-DWF identification results was 88.24%, and the average picking success rate of table grapes in field experiments was 87.40%. These results fully meet the requirements of intelligent table grape picking.
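The abstract does not spell out the geometric rule used to locate the picking point. As a rough illustration of the general idea only, the sketch below assumes the picking point is the center of the detected stem box, kept above the grape-cluster box; the coordinates and the rule itself are hypothetical, not the paper's method.

```python
# Hypothetical sketch: the paper's geometric localization rule is not given in
# the abstract. Here we assume the picking point is the center of the detected
# stem box, constrained to lie above the grape-cluster box.

def picking_point(stem_box, grape_box):
    """Boxes are (x1, y1, x2, y2) in image pixels, with y increasing downward."""
    px = (stem_box[0] + stem_box[2]) / 2.0   # horizontal center of the stem
    py = (stem_box[1] + stem_box[3]) / 2.0   # vertical center of the stem
    py = min(py, grape_box[1])               # never cut below the cluster's top edge
    return px, py

# Example with made-up detections: a stem box sitting above a grape cluster.
print(picking_point(stem_box=(310, 80, 330, 140), grape_box=(250, 150, 400, 320)))
# -> (320.0, 110.0)
```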
Citations: 0
Self-supervised deep contrastive and auto-regressive domain adaptation for time-series based on channel recalibration
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-15 | DOI: 10.1016/j.engappai.2025.110280
Guangju Yang, Tian-jian Luo, Xiaochen Zhang
Time-series-based unsupervised domain adaptation (UDA) techniques have been widely adopted in intelligent-system applications such as sleep staging, fault diagnosis, and human activity recognition. However, recent methods have overlooked the importance of temporal feature representations and of the distribution discrepancies across domains, which deteriorates UDA performance. To address these challenges, we propose a novel Self-supervised Deep Contrastive and Auto-regressive Domain Adaptation (SDCADA) model for cross-domain time-series classification. Specifically, a cross-domain mixup preprocessing strategy is applied to reduce sample-level distribution discrepancy, and a channel recalibration module is introduced to adaptively select discriminative representations. Afterwards, an auto-regressive discriminator and a teacher model are proposed to reduce the distribution discrepancies of feature representations. Finally, a total of six losses, including contrastive and adversarial learning losses, are weighted and jointly optimized to train the SDCADA model. The proposed model was systematically evaluated on four cross-domain time-series benchmark datasets, where its classification performance surpasses several recently proposed state-of-the-art models. Moreover, it effectively captures discriminative and comprehensive cross-domain time-series feature representations while remaining insensitive to parameter choices.
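The abstract gives no formula for the cross-domain mixup step; one plausible reading, sketched here with an assumed Beta prior on the mixing coefficient, interpolates source- and target-domain windows:

```python
import numpy as np

# Hypothetical sketch of cross-domain mixup for time series: interpolate source
# and target windows to reduce sample-level distribution discrepancy. The Beta
# prior and source-dominant clipping are assumptions; the paper gives no formula.

def cross_domain_mixup(x_src, x_tgt, alpha=0.2, rng=np.random.default_rng(0)):
    """x_src, x_tgt: arrays of shape (batch, channels, length)."""
    lam = rng.beta(alpha, alpha, size=(x_src.shape[0], 1, 1))
    lam = np.maximum(lam, 1.0 - lam)           # keep the source domain dominant
    return lam * x_src + (1.0 - lam) * x_tgt   # mixed windows, same shape

x_s = np.random.randn(8, 3, 128)   # e.g., 3-channel sensor windows (source domain)
x_t = np.random.randn(8, 3, 128)   # target-domain windows
print(cross_domain_mixup(x_s, x_t).shape)      # -> (8, 3, 128)
```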
Citations: 0
Intelligent exogenous networks with Bayesian distributed backpropagation for nonlinear single delay brain electrical activity rhythms in Parkinson's disease system
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-15 | DOI: 10.1016/j.engappai.2025.110281
Roshana Mukhtar, Chuan-Yu Chang, Muhammad Asif Zahoor Raja, Naveed Ishtiaq Chaudhary, Nabeela Anwar, Iftikhar Ahmad, Chi-Min Shu
In this study, a novel intelligent adaptive exogenous network backpropagated with a Bayesian distributive scheme is introduced for nonlinear Parkinson's disease systems (NPDS), represented by three differential classes governing the brain's electrical activity rhythms (BEAR) at different cerebral cortex positions, considering single and multiple delays in one continuing response. The computing structure is formulated as a multi-layer architecture of nonlinear autoregressive exogenous networks (NARX) with backpropagation through a Bayesian distributed algorithm (BDA). The synthetic datasets for executing NARX-BDA are acquired through the Adams numerical solver for NPDS involving single and multiple delays in one BEAR variable for different sensor locations on the cerebral cortex. The designed NARX-BDA structure is operated arbitrarily on the acquired datasets; training samples are used for network formulation in the mean square error (MSE) sense, while testing samples validate performance on unbiased inputs. Exhaustive numerical experiments are conducted for NARX-BDA in solving delayed variants of NPDS, and comparative studies against numerical solutions certify the performance by means of MSE convergence curves for training and testing instances, absolute error, error histograms, and statistical studies of regression and correlation measurements.
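For readers unfamiliar with NARX models, the next output is regressed on lagged outputs and lagged exogenous inputs. The sketch below shows only this input construction, with illustrative lag orders; the paper's multi-layer network and Bayesian-regularized training are not reproduced.

```python
import numpy as np

# Hypothetical sketch of the NARX input construction: the next value of a
# signal y is predicted from lagged outputs and lagged exogenous inputs u.
# Lag orders (3 output lags, 2 input lags) are illustrative assumptions.

def narx_features(y, u, ny=3, nu=2):
    """Build (features, targets) for one-step-ahead NARX regression."""
    start = max(ny, nu)
    X, t = [], []
    for k in range(start, len(y)):
        X.append(np.concatenate([y[k - ny:k], u[k - nu:k]]))
        t.append(y[k])
    return np.array(X), np.array(t)

y = np.sin(np.linspace(0, 8 * np.pi, 200))   # stand-in rhythm signal
u = np.cos(np.linspace(0, 8 * np.pi, 200))   # stand-in exogenous input
X, t = narx_features(y, u)
print(X.shape, t.shape)                       # -> (197, 5) (197,)
```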
Citations: 0
Deep learning based methodological approach for prediction of dynamic modulus and phase angle of asphalt concrete
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-14 | DOI: 10.1016/j.engappai.2025.110269
Nishigandha Rajeshwar Jukte, Aravind Krishna Swamy
The present work proposes a deep learning approach to predict the dynamic modulus and phase angle of asphalt mixtures. The dynamic modulus and phase angle data reported in National Cooperative Highway Research Program project 9-19 were used to validate the proposed approach; 201 distinct asphalt mixtures were selected from this database. Dynamic modulus and phase angle mastercurves were constructed for individual mixtures using a combination of two sigmoidal functions and two temperature-shift-factor determination approaches. The input variables for the deep learning model comprised reduced frequency, binder properties, aggregate gradation, and mixture volumetrics. Compared with the other input variables, reduced frequency and binder properties were most highly correlated with dynamic modulus and phase angle. Recursive feature elimination was then used to rank all input variables, and deep-learning-based models for predicting dynamic modulus and phase angle were developed from the ranked variables. The deep learning architecture, finalized through exhaustive optimization, depended on the parameter under consideration and on the mastercurve construction approach adopted. Detailed statistical analysis indicated that the dynamic modulus predictive models performed better than the phase angle predictive models. The goodness-of-fit indicators showed that the accuracy of the deep-learning-based model depended on the mastercurve construction approach adopted. Overall, the results indicated that the deep-learning-based models can predict dynamic modulus and phase angle with good accuracy. The output of the proposed models can be used as direct input to a pavement design framework, enabling accurate prediction of pavement performance.
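The abstract names but does not state the sigmoidal mastercurve functions; the widely used MEPDG-style sigmoid, shown here as an assumed stand-in, illustrates how reduced frequency maps to dynamic modulus.

```python
import numpy as np

# The abstract does not give the two sigmoidal functions used; as an assumed
# stand-in, this is the standard MEPDG-style sigmoidal mastercurve:
#   log10|E*| = delta + alpha / (1 + exp(beta + gamma * log10(f_r)))
# where f_r is reduced frequency and delta, alpha, beta, gamma are fit parameters.

def dynamic_modulus(f_reduced, delta=0.5, alpha=3.5, beta=-1.0, gamma=-0.5):
    """Return |E*| from reduced frequency; parameter values are illustrative."""
    log_e = delta + alpha / (1.0 + np.exp(beta + gamma * np.log10(f_reduced)))
    return 10.0 ** log_e

print(dynamic_modulus(np.array([0.01, 1.0, 100.0])))  # modulus rises with frequency
```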
Citations: 0
Robust nonlinear control of permanent magnet synchronous motor drives: An evolutionary algorithm optimized passivity-based control approach with a high-order sliding mode observer
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-14 | DOI: 10.1016/j.engappai.2025.110256
Youcef Belkhier, Siham Fredj, Haroon Rashid, Mohamed Benbouzid
Permanent Magnet Synchronous Machines (PMSMs) have revolutionized motor design by replacing traditional components such as rotor windings, brushes, and sliding contacts with permanent magnets. This innovation has significantly improved operational efficiency and reduced maintenance needs. However, controlling PMSMs remains challenging because the machine's dynamics change over time and are sensitive to environmental conditions.
To tackle these challenges, this study presents a novel nonlinear control approach called passivity-based control (PBC). Unlike conventional methods, PBC manages both the electrical and mechanical dynamics of the system, focusing on energy flow and dissipation to maintain stability. To make the control more robust, the approach combines a nonlinear observer with a high-order sliding mode controller (HSMC), which enhances the system's ability to handle disturbances and parameter changes. Additionally, the study uses Genetic Algorithm (GA) optimization to fine-tune the parameters of the PBC, the observer, and the HSMC, improving the motor's tracking accuracy and robustness against external disruptions.
The result is a control framework that preserves the natural dynamics of PMSMs while improving their stability and performance. Experimental validation on the OPAL-RT real-time simulation platform and in real-world tests on a PMSM with a dSPACE DS1202 board demonstrates that the method outperforms existing techniques under a variety of operating conditions, highlighting its effectiveness and reliability.
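As a sketch of the GA tuning stage only, the loop below evolves a three-gene vector of controller gains against a toy quadratic cost that stands in for the closed-loop PMSM simulation; the GA settings and gain bounds are assumptions.

```python
import numpy as np

# Hypothetical sketch of the GA tuning stage: evolve a vector of controller
# gains to minimize a tracking cost. The quadratic toy cost stands in for the
# closed-loop PMSM simulation the study would actually evaluate.

rng = np.random.default_rng(1)

def tracking_cost(gains):
    # Toy surrogate with a known optimum; replace with a closed-loop simulation.
    return float(np.sum((gains - np.array([2.0, 0.5, 10.0])) ** 2))

def ga_tune(pop_size=30, generations=50, bounds=(0.0, 20.0), n_genes=3):
    population = rng.uniform(*bounds, size=(pop_size, n_genes))
    for _ in range(generations):
        cost = np.array([tracking_cost(p) for p in population])
        parents = population[np.argsort(cost)[: pop_size // 2]]       # truncation selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0.0, 0.3, children.shape)    # Gaussian mutation
        population = np.vstack([parents, np.clip(children, *bounds)])
    return min(population, key=tracking_cost)

print(ga_tune())  # converges toward the assumed optimum [2.0, 0.5, 10.0]
```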
Citations: 0
Energy performance prediction of centrifugal pumps based on adaptive support vector regression
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-14 | DOI: 10.1016/j.engappai.2025.110247
Huican Luo, Peijian Zhou, Jiayi Cui, Yang Wang, Haisheng Zheng, Yantian Wang
It is of great significance to speed up the development and optimization of pumps with energy performance prediction methods. Machine learning is widely used for performance prediction of centrifugal pumps because of its fast and accurate predictions. However, prediction performance varies markedly with the geometry and performance parameters considered. This paper proposes an adaptive support vector regression (SVR) model for predicting centrifugal pump energy performance, which incorporates input-output correlation analysis and differential evolution to automatically adjust the input parameter weights. The model's performance was validated against experimental data, yielding mean absolute residuals (MAR) of 0.174 for head, 0.113 for power, and 1.658 for efficiency. Additionally, the model achieved an R² of 0.995 and a mean square error (MSE) of 2.99. Under multiple operating conditions, by adjusting the parameter vector, the adaptive SVR reduced the mean absolute relative error (MARE) of head, power, and efficiency to 0.443%, 1.07%, and 6.63%, respectively, improvements of 79.6%, 86.2%, and 31.6% over the original SVR model. The proposed model also outperformed adaptive least squares support vector regression (LSSVR).
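One plausible reading of the adaptive scheme, sketched under assumptions (toy data, a cross-validated mean-squared-error objective, weight bounds of [0.01, 2]): differential evolution searches per-feature input weights for the SVR.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Hypothetical sketch of the adaptive scheme: differential evolution searches
# per-feature weights that rescale the inputs before SVR fitting. The toy data,
# weight bounds, and cross-validated MSE objective are assumptions.

rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 4))                                   # e.g., 4 geometry/operating features
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0.0, 0.05, 120)   # toy "head" target

def cv_mse(w):
    """Mean cross-validated MSE of an SVR trained on weighted inputs."""
    scores = cross_val_score(SVR(C=10.0, epsilon=0.01), X * w, y,
                             cv=3, scoring="neg_mean_squared_error")
    return -scores.mean()

result = differential_evolution(cv_mse, bounds=[(0.01, 2.0)] * 4,
                                popsize=8, maxiter=10, seed=0, polish=False)
print(result.x)  # informative features tend to receive larger weights
```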
Citations: 0
Next-generation healthcare: Digital twin technology and Monkeypox Skin Lesion Detector network enhancing monkeypox detection - Comparison with pre-trained models
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-14 | DOI: 10.1016/j.engappai.2025.110257
Vikas Sharma, Akshi Kumar, Kapil Sharma
The rise of digital healthcare has led to the adoption of various technologies aimed at enhancing health operations, improving patient well-being, and reducing healthcare costs. Digital Twin (DT) technology is a pivotal innovation in this domain. The monkeypox virus (MPXV), a zoonotic virus, poses a significant public health risk, particularly in remote regions of Central and West Africa. Early diagnosis of monkeypox lesions is crucial but challenging because they resemble other skin conditions. Many studies have employed deep-learning models to detect the monkeypox virus, but those models often require substantial storage space. This research introduces the Monkeypox Skin Lesion Detector Network (MxSLDNet), an automated digital twin framework designed to enhance digital healthcare operations by enabling early detection and classification of monkeypox and non-monkeypox lesions. MxSLDNet significantly advances monkeypox lesion identification, outperforming conventional models such as Visual Geometry Group 19 (VGG-19), Densely Connected Network 121 (DenseNet-121), Efficient Network B4 (EfficientNet-B4), and Residual Network 101 (ResNet-101) in precision, recall, F1-score, and accuracy, while requiring less storage. This addresses the critical issue of storage demands, making MxSLDNet a viable solution for early monkeypox lesion detection in resource-limited healthcare settings. Using the "Monkeypox Skin Lesion Dataset", with 1428 monkeypox and 1764 non-monkeypox images, MxSLDNet achieves recall, precision, and F1-scores of 0.96, 0.95, and 0.95, respectively. Integrating digital twins into healthcare promises a scalable, intelligent, and comprehensive health ecosystem that enhances treatment by connecting patients and healthcare providers.
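For reference, the reported precision, recall, and F1 are standard binary-classification metrics; the sketch below computes them from confusion counts that are made up, chosen only to land near the reported 0.95–0.96 values.

```python
# Standard binary-classification metrics from confusion counts. The counts
# below are hypothetical, chosen to approximate the paper's reported figures.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(precision_recall_f1(tp=274, fp=14, fn=11))
# -> approximately (0.951, 0.961, 0.956)
```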
Citations: 0
Hybrid pathfinding optimization for the Lightning Network with Reinforcement Learning
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-13 | DOI: 10.1016/j.engappai.2025.110225
Danila Valko, Daniel Kudenko
Payment channel networks, such as Bitcoin's Lightning Network, have emerged to address blockchain scalability issues, enabling rapid transactions. Despite their potential, these networks often experience payment failures due to delays in pathfinding, unreliable routes, and infrastructure issues, resulting in excessive carbon emissions. Current reinforcement learning solutions for payment channel networks mainly address issues such as payment channel balance and routing fees but often overlook the infrastructure-related causes of payment failure. This paper introduces a novel reinforcement-learning-based architecture that combines a reinforcement learning agent with native deterministic pathfinding algorithms. This hybrid approach leverages the fast, complete solutions of deterministic algorithms while adapting to the network's dynamic and probabilistic payment behavior, significantly enhancing payment success rates. Experiments on real network snapshots show that this approach outperforms native pathfinding algorithms and state-of-the-art static optimization methods, providing improved reliability and efficiency under dynamic network conditions. In scenarios with payment failure rates greater than 5%, the proposed approach achieves a 10% higher payment success rate than existing methods, while maintaining balanced performance on key economic metrics such as payment fee and throughput, and on key sustainability metrics such as payment path length, number of inter-country/continental hops, and average carbon intensity.
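A minimal sketch of the hybrid idea, with toy success probabilities standing in for a real network: deterministic pathfinders propose candidate routes, and an epsilon-greedy learner (a simple stand-in for the paper's RL agent) learns which proposer to trust.

```python
import random

# Hypothetical sketch: deterministic pathfinders propose candidate routes, and
# an epsilon-greedy learner picks which candidate to attempt. The success
# probabilities and candidate generators are toy stand-ins, not the paper's setup.

random.seed(0)
CANDIDATES = ["shortest_fee", "fewest_hops", "highest_capacity"]
TRUE_SUCCESS = {"shortest_fee": 0.70, "fewest_hops": 0.85, "highest_capacity": 0.80}

wins = {c: 0 for c in CANDIDATES}
tries = {c: 1 for c in CANDIDATES}   # start at 1 to avoid division by zero

for payment in range(2000):
    if random.random() < 0.1:        # explore a random pathfinder
        choice = random.choice(CANDIDATES)
    else:                            # exploit the best success estimate
        choice = max(CANDIDATES, key=lambda c: wins[c] / tries[c])
    tries[choice] += 1
    wins[choice] += random.random() < TRUE_SUCCESS[choice]

print({c: round(wins[c] / tries[c], 2) for c in CANDIDATES})
# The learner concentrates attempts on the most reliable pathfinder.
```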
Citations: 0
A hierarchical deep reinforcement learning method for coupled transportation and power distribution system dispatching
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-13 | DOI: 10.1016/j.engappai.2025.110264
Qi Han, Xueping Li, Liangce He
The randomness and dimensionality growth of variables in coupled transportation and power distribution systems (CTPS) pose challenges for effectively solving CTPS dispatching tasks. This paper presents a hierarchical deep reinforcement learning (HDRL) method that disperses the action and state spaces of the CTPS onto a decision-making layer and an autonomous optimization layer. The cloud DRL model in the decision-making layer is responsible for the load assignment task of charging stations, while the distribution network (DN) and transportation network (TN) DRL models in the autonomous optimization layer optimize the DN and TN, respectively. A layer-wise training method is adopted to alleviate the asynchronous convergence problem of HDRL. First, the Gurobi solver assists in training the cloud DRL model efficiently by ensuring the reward effectiveness of the autonomous optimization layer. Meanwhile, during the pre-sampling and training stages, the differential evolution (DE) algorithm helps optimize the diversity and focalization of the transitions by controlling the distribution patterns of species initialization. Then, the trained cloud DRL model is frozen to train the DN and TN DRL models. The method is tested on CTPS of two different sizes, and simulation analysis shows that it improves the training performance of the HDRL model.
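The layer-wise schedule can be read as two stages, sketched below with stub classes standing in for the actual DRL models and updates, which the abstract does not detail.

```python
# Hypothetical sketch of the layer-wise training schedule: first train the
# cloud (decision-making) model, then freeze it and train the distribution-
# network (DN) and transportation-network (TN) models. The stub class stands
# in for real DRL models; epoch counts are placeholders.

class ModelStub:
    def __init__(self, name):
        self.name, self.frozen = name, False

    def train_step(self):
        assert not self.frozen, f"{self.name} is frozen"
        print(f"update {self.name}")

    def freeze(self):
        self.frozen = True

cloud, dn, tn = ModelStub("cloud"), ModelStub("DN"), ModelStub("TN")

for epoch in range(2):      # stage 1: cloud model, with solver-assisted rewards
    cloud.train_step()
cloud.freeze()              # stage 2: freeze the cloud model...
for epoch in range(2):      # ...then train DN and TN against it
    dn.train_step()
    tn.train_step()
```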
Citations: 0
Automated classification of thyroid disease using deep learning with neuroevolution model training
IF 7.5 | CAS Tier 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2025-02-13 | DOI: 10.1016/j.engappai.2025.110209
Mohammad Rashid Dubayan, Sara Ershadi-Nasab, Mariam Zomorodi, Pawel Plawiak, Ryszard Tadeusiewicz, Mohammad Beheshti Roui

Background:

Thyroid disease is a common endocrine disorder whose timely and accurate diagnosis is important. Using clinical and laboratory data as input, we developed an artificial neural network (ANN) for thyroid disease classification, incorporating an evolutionary algorithm for network optimization.

Methods:

The proposed model combined an ANN with a genetic algorithm (GA), which iteratively modified the weights and biases of the ANN architecture. The weights, encoded as genes in a chromosome and represented as one-dimensional vectors, were updated in each iteration. Binary cross-entropy loss was used as the fitness function to evaluate the suitability of solutions in the genetic algorithm. The model was trained and tested on an open-access hypothyroid disease dataset comprising multiparametric variables for 3772 samples (291 thyroid patients and 3481 controls).
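A minimal neuroevolution sketch of the described method, under assumptions noted in the comments (tiny 5-2-1 network, truncation selection, Gaussian mutation; the paper's actual architecture and GA operators may differ):

```python
import numpy as np

# Minimal neuroevolution sketch: each chromosome is the flattened weight vector
# of a tiny 5-2-1 network, and binary cross-entropy on the training data is the
# GA fitness. Network size, selection, and mutation settings are assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # toy stand-in for clinical features
y = (X @ rng.normal(size=5) > 0).astype(float)   # toy binary labels

def forward(chrom, X):
    W1, b1, W2 = chrom[:10].reshape(5, 2), chrom[10:12], chrom[12:14]
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))       # sigmoid output probability

def bce(chrom):
    p = np.clip(forward(chrom, X), 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

population = rng.normal(size=(40, 14))           # 40 chromosomes of 14 weights
for generation in range(100):
    fitness = np.array([bce(c) for c in population])
    elite = population[np.argsort(fitness)[:20]]                  # keep the fittest half
    population = np.vstack([elite, elite + rng.normal(0.0, 0.1, elite.shape)])

print("best BCE:", min(bce(c) for c in population))
```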

Results:

Our model attains 99.14% accuracy for binary classification (thyroid disease vs. normal), outperforming published models.

Conclusion:

Incorporating GA optimization into the ANN enabled the model to explore diverse solutions and escape local optima more effectively, leading to better performance and generalizability. The excellent results support the feasibility of implementing the proposed model for thyroid disease screening in clinical settings.
Citations: 0