{"title":"Automatic optimization model of transmission line based on GIS and genetic algorithm","authors":"Yuan Qin, Zhao Li, Jieyu Ding, Fei Zhao, Mingmeng Meng","doi":"10.2139/ssrn.4220612","DOIUrl":"https://doi.org/10.2139/ssrn.4220612","url":null,"abstract":"","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47928015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonlinear anisotropic diffusion methods for image denoising problems: Challenges and future research opportunities
Pub Date : 2023-03-01 | DOI: 10.1016/j.array.2022.100265
Baraka Maiseli
Nonlinear anisotropic diffusion has attracted a great deal of attention for its ability to simultaneously remove noise and preserve semantic image features. This ability favors several image processing and computer vision applications, including noise removal in medical and scientific images that contain critical features (textures, edges, and contours). Despite their promising performance, methods based on nonlinear anisotropic diffusion suffer from practical limitations that have been lightly discussed in the literature. Our work surfaces these limitations as an attempt to create future research opportunities. In addition, we have proposed a diffusion-driven method that generates superior results compared with classical methods, including the popular Perona–Malik formulation. The proposed method embeds a kernel that properly guides the diffusion process across image regions. Experimental results show that our kernel encourages effective noise removal and ensures preservation of significant image features. We have provided potential research problems to further expand the current results.
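The abstract does not detail the proposed kernel, so purely as an illustration of the diffusion framework it builds on, here is a minimal Python sketch of the classical Perona–Malik scheme that the paper uses as a baseline. The function name, parameters (num_iter, kappa, lam), and the simplified periodic boundary handling are illustrative assumptions, not the authors' method.

```python
import numpy as np

def perona_malik(image, num_iter=50, kappa=20.0, lam=0.2):
    """Classical Perona-Malik diffusion (baseline sketch, not the paper's kernel).

    kappa controls edge sensitivity; lam is the explicit time step (<= 0.25
    keeps the 4-neighbour scheme stable).
    """
    u = image.astype(np.float64).copy()
    for _ in range(num_iter):
        # Finite differences to the four nearest neighbours (periodic borders for brevity).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping function g(|grad u|) = exp(-(|grad u| / kappa)^2):
        # diffuse strongly in flat regions, weakly across edges.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = np.clip(0.5 + rng.normal(0, 0.1, (64, 64)), 0, 1)
    denoised = perona_malik(noisy, num_iter=30)
    print(denoised.shape)
```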
{"title":"Nonlinear anisotropic diffusion methods for image denoising problems: Challenges and future research opportunities","authors":"Baraka Maiseli","doi":"10.1016/j.array.2022.100265","DOIUrl":"https://doi.org/10.1016/j.array.2022.100265","url":null,"abstract":"<div><p>Nonlinear anisotropic diffusion has attracted a great deal of attention for its ability to simultaneously remove noise and preserve semantic image features. This ability favors several image processing and computer vision applications, including noise removal in medical and scientific images that contain critical features (textures, edges, and contours). Despite their promising performance, methods based on nonlinear anisotropic diffusion suffer from practical limitations that have been lightly discussed in the literature. Our work surfaces these limitations as an attempt to create future research opportunities. In addition, we have proposed a diffusion-driven method that generates superior results compared with classical methods, including the popular Perona–Malik formulation. The proposed method embeds a kernel that properly guides the diffusion process across image regions. Experimental results show that our kernel encourages effective noise removal and ensures preservation of significant image features. We have provided potential research problems to further expand the current results.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49752695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparative study of supervised machine learning approaches to predict patient triage outcomes in hospital emergency departments
Pub Date : 2023-03-01 | DOI: 10.1016/j.array.2023.100281
Hamza Elhaj, Nebil Achour, Marzia Hoque Tania, Kurtulus Aciksari
Background
The inconsistency of triage evaluation in emergency departments (EDs) and the practical limitations of standard triage tools used by triage nurses have led researchers to seek more accurate and robust triage evaluation that prioritizes patients better according to their medical conditions. This study aspires to establish the best methodological practices for applying machine learning (ML) techniques to build an automated triage model for more accurate evaluation.
Methods
A comparative study of selected supervised ML models was conducted to determine the best-performing approach for evaluating patient triage outcomes in hospital emergency departments. A retrospective dataset of 2688 patients who visited the ED between April 1, 2020 and June 9, 2020 was collected. Data included patient demographics (age and gender), vital signs (body temperature, respiratory rate, heart rate, blood pressure, and oxygen saturation), chief complaints, and chronic illness. Nine supervised ML techniques were investigated. Models were trained on patient disposition outcomes and then validated to evaluate their performance.
Findings
ML models show high capability in predicting patient disposition outcomes in ED settings. Four models (KNN, GBDT, XGBoost, and RF) performed better than the rest. RF was selected as the optimal model, as it demonstrated a slight advantage over the others with 89.1% micro accuracy, 89.0% precision, 89.1% recall, and 89.0% F1-score, and it performed particularly well at differentiating patients with critical outcomes (e.g., mortality and ICU admission) from those with less critical outcomes (e.g., discharged and hospitalized).
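The dataset and exact modelling pipeline are described in the paper rather than in this abstract; the following is a hedged sketch of the kind of comparison reported above, run on synthetic stand-in data with scikit-learn and xgboost. The synthetic features, hyperparameters, and micro-averaged metric computation are illustrative assumptions, not the study's code.

```python
# Sketch of the model-comparison protocol described above, on synthetic data
# (the study's ED dataset is not public; features and labels here are stand-ins).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier  # assumes the xgboost package is installed

X, y = make_classification(n_samples=2688, n_features=12, n_informative=8,
                           n_classes=4, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=42)

models = {
    "KNN": KNeighborsClassifier(),
    "GBDT": GradientBoostingClassifier(),
    "XGBoost": XGBClassifier(eval_metric="mlogloss"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="micro")
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f} "
          f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```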
Conclusion
Machine learning techniques show high promise for improving predictive abilities in emergency medicine and providing robust decision-making tools that can enhance the patient triage process, assist triage personnel in their decisions, and thus reduce the effects of ED overcrowding and improve patient outcomes.
{"title":"A comparative study of supervised machine learning approaches to predict patient triage outcomes in hospital emergency departments","authors":"Hamza Elhaj , Nebil Achour , Marzia Hoque Tania , Kurtulus Aciksari","doi":"10.1016/j.array.2023.100281","DOIUrl":"10.1016/j.array.2023.100281","url":null,"abstract":"<div><h3>Background</h3><p>The inconsistency in triage evaluation in emergency departments (EDs) and the limitations in practice within the standard triage tools among triage nurses have led researchers to seek more accurate and robust triage evaluation that provides better patient prioritization based on their medical conditions. This study aspires to establish the best methodological practices for applying machine learning (ML) techniques to build an automated triage model for more accurate evaluation.</p></div><div><h3>Methods</h3><p>A comparative study of selected supervised ML models was conducted to determine the best-performing approach to evaluate patient triage outcomes in hospital emergency departments. A retrospective dataset of 2688 patients who visited the ED between April 1, 2020 and June 9, 2020 was collected. Data included patient demographics (age and gender), Vital signs (body temperature, respiratory rate, heart rate, blood pressure and oxygen saturation), chief complaints, and chronic illness. Nine supervised ML techniques were investigated in this study. Models were trained based on patient disposition outcomes and then validated to evaluate their performance.</p></div><div><h3>Findings</h3><p>ML models show high capabilities in predicting patient disposition outcomes in ED settings. Four models (KNN, GBDT, XGBoost, and RF) performed better than the rest. RF was selected as the optimal model as it demonstrated a slight advantage over the other models with 89.1% micro accuracy, 89.0% precision, 89.1% recall, and 89.0% F1-score, exhibiting outstanding performance in differentiation between patients with critical outcomes (e.g., Mortality and ICU admission) from those patients with less critical outcomes (e.g., discharged and hospitalized) in ED settings.</p></div><div><h3>Conclusion</h3><p>Machine learning techniques demonstrate high promise in improving predictive abilities in emergency medicine and providing robust decision-making tools that can enhance the patient triage process, assist triage personnel in their decision and thus reduce the effects of ED overcrowding and enhance patient outcomes.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44500110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Random projection tree similarity metric for SpectralNet
Pub Date : 2023-03-01 | DOI: 10.1016/j.array.2022.100274
Mashaan Alshammari, John Stavrakakis, Adel F. Ahmed, Masahiro Takatsuka
SpectralNet is a graph clustering method that uses a neural network to find an embedding that separates the data. So far it has only been used with k-nn graphs, which are usually constructed using a distance metric (e.g., Euclidean distance). k-nn graphs restrict the points to a fixed number of neighbors regardless of the local statistics around them. We proposed a new SpectralNet similarity metric based on random projection trees (rpTrees). Our experiments revealed that SpectralNet produces better clustering accuracy with the rpTree similarity metric than with a k-nn graph built on a distance metric. We also found that the rpTree parameters, namely the leaf size and the choice of projection direction, do not affect the clustering accuracy. It is computationally efficient to keep the leaf size on the order of log(n) and to project the points onto a random direction instead of searching for the direction with the maximum dispersion.
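The abstract does not give the exact rule for turning rpTrees into similarities, so the sketch below only illustrates the underlying data structure: it builds a random projection tree in Python and links points that share a leaf in a binary affinity matrix. The leaf-size default of order log(n) follows the abstract; the function names, median split, and single-tree affinity rule are assumptions, not necessarily the paper's construction.

```python
import numpy as np

def build_rptree(points, indices, leaf_size, rng):
    """Recursively split points by projecting onto a random direction.

    Returns a list of leaves, each an array of point indices.
    """
    if len(indices) <= leaf_size:
        return [indices]
    direction = rng.normal(size=points.shape[1])
    direction /= np.linalg.norm(direction)
    projections = points[indices] @ direction
    split = np.median(projections)            # split at the median projection
    left = indices[projections <= split]
    right = indices[projections > split]
    if len(left) == 0 or len(right) == 0:     # degenerate split: stop recursing
        return [indices]
    return (build_rptree(points, left, leaf_size, rng)
            + build_rptree(points, right, leaf_size, rng))

def rptree_affinity(points, leaf_size=None, seed=0):
    """Binary affinity matrix: A[i, j] = 1 when i and j fall in the same leaf."""
    n = len(points)
    leaf_size = leaf_size or max(2, int(np.log(n)))  # leaf size of order log(n)
    rng = np.random.default_rng(seed)
    leaves = build_rptree(points, np.arange(n), leaf_size, rng)
    A = np.zeros((n, n))
    for leaf in leaves:
        A[np.ix_(leaf, leaf)] = 1.0
    np.fill_diagonal(A, 0.0)
    return A

if __name__ == "__main__":
    pts = np.random.default_rng(1).normal(size=(200, 5))
    print(rptree_affinity(pts).sum())
```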
{"title":"Random projection tree similarity metric for SpectralNet","authors":"Mashaan Alshammari , John Stavrakakis , Adel F. Ahmed , Masahiro Takatsuka","doi":"10.1016/j.array.2022.100274","DOIUrl":"10.1016/j.array.2022.100274","url":null,"abstract":"<div><p>SpectralNet is a graph clustering method that uses neural network to find an embedding that separates the data. So far it was only used with <span><math><mi>k</mi></math></span>-nn graphs, which are usually constructed using a distance metric (e.g., Euclidean distance). <span><math><mi>k</mi></math></span>-nn graphs restrict the points to have a fixed number of neighbors regardless of the local statistics around them. We proposed a new SpectralNet similarity metric based on random projection trees (rpTrees). Our experiments revealed that SpectralNet produces better clustering accuracy using rpTree similarity metric compared to <span><math><mi>k</mi></math></span>-nn graph with a distance metric. Also, we found out that rpTree parameters do not affect the clustering accuracy. These parameters include the leaf size and the selection of projection direction. It is computationally efficient to keep the leaf size in order of <span><math><mrow><mo>log</mo><mrow><mo>(</mo><mi>n</mi><mo>)</mo></mrow></mrow></math></span>, and project the points onto a random direction instead of trying to find the direction with the maximum dispersion.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44984319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic optimization model of transmission line based on GIS and genetic algorithm
Pub Date : 2023-03-01 | DOI: 10.1016/j.array.2022.100266
Yuancun Qin, Zhaozheng Li, Jieyu Ding, Fei Zhao, Ming Meng
At present, the planning of transmission lines relies mainly on human decision-making and lacks intelligence. This paper combines the strengths of GIS in processing spatial data with those of the genetic algorithm to explore an optimization method for transmission line planning. Combining GIS with the genetic algorithm minimizes the interference of human factors and quickly solves the path planning problem for transmission lines. Based on the theoretical model of the genetic algorithm, this study constructs a transmission line optimization model and implements it as an Add-ins plug-in for the transmission line planning model in C#. Taking a 500 kV overhead transmission line of about 150 km from Jiantang Substation (starting point) in Shangri-La County to Tai'an Substation (ending point) in Lijiang as an example, two groups of experiments are designed: one considering traffic as a single factor and one considering multiple factors comprehensively. The results show that the genetic algorithm achieves the best path optimization under the comprehensive multi-factor condition, which demonstrates the rationality and superiority of the model constructed in this study.
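The paper's implementation is a C# Add-ins plug-in whose chromosome encoding and cost layers are not given in the abstract; the sketch below only illustrates the general GIS-plus-genetic-algorithm idea in Python, encoding a route as intermediate waypoints over a toy cost raster. The grid, cost terms, and GA parameters are all placeholders, not the paper's model.

```python
import random

# Toy cost raster standing in for GIS layers (terrain, crossings, restricted zones).
GRID = 20
random.seed(0)
COST = [[random.uniform(1.0, 5.0) for _ in range(GRID)] for _ in range(GRID)]
START, END = (0, 0), (GRID - 1, GRID - 1)
N_WAYPOINTS = 6   # chromosome = column index of each intermediate waypoint

def decode(chrom):
    """Turn a chromosome into a list of grid cells from START to END."""
    rows = [round((i + 1) * (GRID - 1) / (N_WAYPOINTS + 1)) for i in range(N_WAYPOINTS)]
    return [START] + list(zip(rows, chrom)) + [END]

def route_cost(chrom):
    """Sum cell costs plus a length penalty between consecutive waypoints."""
    path = decode(chrom)
    total = 0.0
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        total += COST[r2][c2] + 0.5 * (abs(r2 - r1) + abs(c2 - c1))
    return total

def evolve(pop_size=60, generations=200, mut_rate=0.2):
    pop = [[random.randrange(GRID) for _ in range(N_WAYPOINTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=route_cost)                    # elitist ranking by route cost
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_WAYPOINTS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:          # point mutation of one waypoint
                child[random.randrange(N_WAYPOINTS)] = random.randrange(GRID)
            children.append(child)
        pop = parents + children
    return min(pop, key=route_cost)

best = evolve()
print("best route cost:", round(route_cost(best), 2), "waypoints:", decode(best))
```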
{"title":"Automatic optimization model of transmission line based on GIS and genetic algorithm","authors":"Yuancun Qin , Zhaozheng Li , Jieyu Ding , Fei Zhao , Ming Meng","doi":"10.1016/j.array.2022.100266","DOIUrl":"https://doi.org/10.1016/j.array.2022.100266","url":null,"abstract":"<div><p>At present, the planning of transmission lines mainly relies on human decision-making and lacks intelligence. This paper combines the advantages of GIS in processing spatial data with the advantages of genetic algorithm to explore the optimization method of transmission line planning. The combination of GIS and genetic algorithm can minimize the interference of human factors and quickly solve the path planning problem of transmission lines. According to the theoretical model of genetic algorithm, this study constructs the transmission line optimization model based on genetic algorithm, and realizes the Add-ins plug-in development of the transmission line planning model based on genetic algorithm with the help of C # language. Taking 500 kV overhead transmission line about 150 km from Jiantang Substation (starting point) in Shangri-La County to Tai’ an Substation (ending point) in Lijiang as an example, two groups of experiments are designed under the conditions of considering traffic single factor and comprehensive multi-factor respectively. It is obtained that the path optimization effect of genetic algorithm is the best under the condition of comprehensive multi-factor, which proves the rationality and superiority of the model constructed in this study.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49766026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimum number of scans for collagen fibre direction estimation using Magic Angle Directional Imaging (MADI) with a priori information","authors":"Harry Lanz, M. Ristic, K. Chappell, J. McGinley","doi":"10.2139/ssrn.4252154","DOIUrl":"https://doi.org/10.2139/ssrn.4252154","url":null,"abstract":"Graphical Abstract Minimum Number of Scans for Collagen Fibre Direction Estimation Using Magic Angle Directional Imaging (MADI) with a priori Information","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47321285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient ACO-based algorithm for task scheduling in heterogeneous multiprocessing environments
Pub Date : 2023-03-01 | DOI: 10.1016/j.array.2023.100280
Jeffrey Elcock, Nekiesha Edward
In heterogeneous computing environments, finding optimized solutions remains one of the most challenging problems as we continuously seek better performance. Task scheduling in such environments is NP-hard, so it is imperative that we tackle this critical issue with the aim of producing effective and efficient solutions. The task scheduling problem is crucial for several types of applications, and the literature offers a plethora of algorithms based on different techniques and approaches. Ant Colony Optimization (ACO) is one such technique. This popular optimization technique is based on the cooperative behavior of ants seeking the shortest path between their nest and food sources. With this in mind, we propose an ACO-based algorithm, called ACO-RNK, as an efficient solution to the task scheduling problem. Our algorithm utilizes pheromone information, a priority-based heuristic known as the upward rank value, an insertion-based policy, and a pheromone aging mechanism that avoids premature convergence, to guide the ants to good-quality solutions. To evaluate the performance of our algorithm, we compared it with the HEFT and MGACO algorithms on randomly generated directed acyclic graphs (DAGs). The simulation results indicate that our algorithm achieves comparable or better performance than the selected algorithms.
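ACO-RNK's full pheromone update and insertion policy are described in the paper; as a small illustration of the priority heuristic named above, here is a Python sketch that computes upward rank values on a toy task DAG and orders tasks by decreasing rank (as HEFT does). The DAG, costs, and function names are illustrative assumptions.

```python
from functools import lru_cache

# Toy task DAG: task -> list of (successor, average communication cost).
successors = {
    0: [(1, 9), (2, 12)],
    1: [(3, 7)],
    2: [(3, 5)],
    3: [],
}
# Average computation cost of each task across the heterogeneous processors.
avg_cost = {0: 14.0, 1: 13.0, 2: 11.0, 3: 7.0}

@lru_cache(maxsize=None)
def upward_rank(task):
    """rank_u(t) = avg_cost(t) + max over successors s of (comm(t, s) + rank_u(s)).

    Exit tasks (no successors) have rank equal to their average cost.
    """
    succ = successors[task]
    if not succ:
        return avg_cost[task]
    return avg_cost[task] + max(comm + upward_rank(s) for s, comm in succ)

# Scheduling priority: tasks in decreasing upward-rank order (the heuristic
# that guides the ants in an ACO scheduler of this kind).
priority = sorted(successors, key=upward_rank, reverse=True)
print({t: round(upward_rank(t), 1) for t in successors})
print("priority order:", priority)
```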
{"title":"An efficient ACO-based algorithm for task scheduling in heterogeneous multiprocessing environments","authors":"Jeffrey Elcock, Nekiesha Edward","doi":"10.1016/j.array.2023.100280","DOIUrl":"10.1016/j.array.2023.100280","url":null,"abstract":"<div><p>In heterogeneous computing environments, finding optimized solutions continues to be one of the most challenging problems as we continuously seek better and improved performances. Task scheduling in such environments is <em>N</em>P-hard, so it is imperative that we tackle this critical issue with a desire of producing effective and efficient solutions. For several types of applications, the task scheduling problem is crucial, and throughout the literature, there are a plethora of different algorithms using several different techniques and varying approaches. Ant Colony Optimization (ACO) is one such technique used to address the problem. This popular optimization technique is based on the cooperative behavior of ants seeking to identify the shortest path between their nest and food sources. It is with this in mind that we propose an ACO-based algorithm, called ACO-RNK, as an efficient solution to the task scheduling problem. Our algorithm utilizes pheromone and a priority-based heuristic, known as the upward rank value, as well as an insertion-based policy, along with a pheromone aging mechanism which aims to avoid premature convergence to guide the ants to good quality solutions. To evaluate the performance of our algorithm, we compared our algorithm with the HEFT algorithm and the MGACO algorithm using randomly generated directed acyclic graphs (DAGs). The simulation results indicated that our algorithm experienced comparable or even better performance, than the selected algorithms.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45009678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of next-generation high-order compressible fluid dynamic solver on cloud computing for complex industrial flows
Pub Date : 2023-03-01 | DOI: 10.1016/j.array.2022.100268
R. Al Jahdali, S. Kortas, M. Shaikh, L. Dalcin, M. Parsani
Industrially relevant computational fluid dynamics simulations frequently require vast computational resources that are only available to governments, wealthy corporations, and well-funded institutions. Thus, in many contexts, high-performance computing grids and on-demand cloud resources should be evaluated as viable alternatives to conventional computing clusters. In this work, we analyze the time-to-solution and cost of an entropy-stable collocated discontinuous Galerkin (SSDC) compressible computational fluid dynamics framework for complex compressible flows on Ibex, the on-premises cluster at KAUST, and on the Amazon Web Services Elastic Compute Cloud. SSDC is a prototype of the next generation of computational fluid dynamics frameworks developed along the road map established by the NASA CFD Vision 2030. We simulate complex flow problems using high-order accurate, fully discrete entropy-stable algorithms. In terms of time-to-solution, the Amazon Elastic Compute Cloud delivers the best performance, with the Graviton2 processors based on the Arm architecture being the fastest. However, the results also indicate that the Ibex nodes based on the AMD Rome architecture deliver good performance, close to that observed for the Amazon Elastic Compute Cloud. Furthermore, we observed that computations performed on the Ibex on-premises cluster are currently less expensive than those performed in the cloud. Our findings could be used to develop guidelines for selecting high-performance computing cloud resources to simulate realistic fluid flow problems.
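The measured runtimes and instance prices are reported in the paper and are not reproduced here; the snippet below only sketches the cost and time-to-solution bookkeeping that such a comparison relies on, with placeholder numbers.

```python
# Placeholder numbers only -- the paper's measured runtimes and prices are not
# reproduced here; this just shows the cost/time-to-solution bookkeeping.
runs = [
    # (platform, nodes used, wall-clock hours, price per node-hour in USD)
    ("AWS Graviton2 (Arm)",        16, 2.0, 2.2),
    ("AWS x86 instance",           16, 2.4, 2.7),
    ("Ibex AMD Rome (on-premises)", 16, 2.1, 1.5),
]

for platform, nodes, hours, price in runs:
    node_hours = nodes * hours
    cost = node_hours * price
    print(f"{platform:30s} time-to-solution={hours:4.1f} h  "
          f"node-hours={node_hours:5.1f}  cost=${cost:7.2f}")
```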
{"title":"Evaluation of next-generation high-order compressible fluid dynamic solver on cloud computing for complex industrial flows","authors":"R. Al Jahdali , S. Kortas , M. Shaikh , L. Dalcin , M. Parsani","doi":"10.1016/j.array.2022.100268","DOIUrl":"10.1016/j.array.2022.100268","url":null,"abstract":"<div><p>Industrially relevant computational fluid dynamics simulations frequently require vast computational resources that are only available to governments, wealthy corporations, and wealthy institutions. Thus, in many contexts and realities, high-performance computing grids and cloud resources on demand should be evaluated as viable alternatives to conventional computing clusters. In this work, we present the analysis of the time-to-solution and cost of an entropy stable collocated discontinuous Galerkin (SSDC) compressible computational fluid dynamics framework on Ibex, the on-premises cluster at KAUST, and the Amazon Web Services Elastic Compute Cloud for complex compressible flows. SSDC is a prototype of the next generation computational fluid dynamics frameworks developed following the road map established by the NASA CFD vision 2030. We simulate complex flow problems using high-order accurate fully-discrete entropy stable algorithms. In terms of time-to-solution, the Amazon Elastic Compute Cloud delivers the best performance, with the Graviton2 processors based on the Arm architecture being the fastest. However, the results also indicate that the Ibex nodes based on the AMD Rome architecture deliver good performance, close to those observed for the Amazon Elastic Compute Cloud. Furthermore, we observed that computations performed on the Ibex on-premises cluster are currently less expensive than those performed in the cloud. Our findings could be used to develop guidelines for selecting high-performance computing cloud resources to simulate realistic fluid flow problems.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44776260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Volitional control of upper-limb exoskeleton empowered by EMG sensors and machine learning computing
Pub Date : 2023-03-01 | DOI: 10.1016/j.array.2023.100277
Biao Chen, Yang Zhou, Chaoyang Chen, Zain Sayeed, Jie Hu, Jin Qi, Todd Frush, Henry Goitz, John Hovorka, Mark Cheng, Carlos Palacio
Processing multiple channels of bioelectrical signals for volitional motion control of bionic assistive robots is still a challenging task due to systematic noise, artifacts, individual bio-variability, and other factors. Emerging machine learning (ML) provides an enabling technology for the next generation of smart devices, assistive systems, and edge computing. However, integrating ML into a robotic control system faces major challenges. This paper presents ML computing that processes twelve channels of shoulder and upper-limb myoelectrical signals for shoulder motion pattern recognition and real-time volitional control of an upper-arm exoskeleton. Shoulder motion patterns included drinking, opening a door, abducting, and resting. The ML algorithms included support vector machine (SVM), artificial neural network (ANN), and logistic regression (LR). The accuracy of the three ML algorithms was evaluated and compared to determine the optimal algorithm. Results showed that, overall, the SVM yielded better accuracy than the LR and ANN algorithms. The offline accuracy was 96 ± 3.8% for SVM, 96 ± 3.8% for ANN, and 93 ± 6.3% for LR, while the online accuracy was 90 ± 9.1% for SVM, 86 ± 12.0% for ANN, and 85 ± 11.3% for LR. Offline pattern recognition was more accurate than real-time exoskeleton motion control. This study demonstrated that ML computing provides a reliable approach for shoulder motion pattern recognition and real-time volitional control of an exoskeleton.
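The abstract does not specify the feature extraction or classifier settings; below is a hedged Python sketch of a 12-channel EMG window-classification pipeline using common time-domain features (mean absolute value and RMS) and the three classifiers compared in the study, run on synthetic signals. Window sizes, features, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_WINDOWS, N_CHANNELS, WINDOW_LEN = 400, 12, 200   # illustrative sizes
CLASSES = ["drinking", "opening_door", "abducting", "resting"]

# Synthetic stand-in for segmented EMG windows: (windows, channels, samples).
labels = rng.integers(0, len(CLASSES), N_WINDOWS)
emg = rng.normal(size=(N_WINDOWS, N_CHANNELS, WINDOW_LEN))
emg *= (1.0 + 0.5 * labels)[:, None, None]          # make the toy classes separable

def time_domain_features(windows):
    """Mean absolute value and RMS per channel -> (windows, 2 * channels)."""
    mav = np.mean(np.abs(windows), axis=2)
    rms = np.sqrt(np.mean(windows ** 2, axis=2))
    return np.hstack([mav, rms])

X = time_domain_features(emg)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
                  ("LR", LogisticRegression(max_iter=1000))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```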
{"title":"Volitional control of upper-limb exoskeleton empowered by EMG sensors and machine learning computing","authors":"Biao Chen , Yang Zhou , Chaoyang Chen , Zain Sayeed , Jie Hu , Jin Qi , Todd Frush , Henry Goitz , John Hovorka , Mark Cheng , Carlos Palacio","doi":"10.1016/j.array.2023.100277","DOIUrl":"10.1016/j.array.2023.100277","url":null,"abstract":"<div><p>Processing multiple channels of bioelectrical signals for bionic assistive robot volitional motion control is still a challenging task due to the interference of systematic noise, artifacts, individual bio-variability, and other factors. Emerging machine learning (ML) provides an enabling technology for the development of the next generation of smart devices and assistive systems and edging computing. However, the integration of ML into a robotic control system faces major challenges. This paper presents ML computing to process twelve channels of shoulder and upper limb myoelectrical signals for shoulder motion pattern recognition and real-time upper arm exoskeleton volitional control. Shoulder motion patterns included drinking, opening a door, abducting, and resting. ML algorithms included support vector machine (SVM), artificial neural network (ANN), and Logistic regression (LR). The accuracy of the three ML algorithms was evaluated respectively and compared to determine the optimal ML algorithm. Results showed that overall SVM algorithms yielded better accuracy than the LR and ANN algorithms. The offline accuracy was 96 ± 3.8% for SVM, 96 ± 3.8% for ANN, and 93 ± 6.3% for LR, while the online accuracy was 90 ± 9.1% for SVM, 86 ± 12.0% for ANN, and 85 ± 11.3% for LR respectively. The offline pattern recognition had a higher accuracy than the accuracy of real-time exoskeleton motion control. This study demonstrated that ML computing provides a reliable approach for shoulder motion pattern recognition and real-time exoskeleton volitional motion control.</p></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46610832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}