Shenyilang Zhang, Yinfeng Fang, Jiacheng Wan, Guozhang Jiang, Gongfa Li
Transfer Learning Enhanced Cross-Subject Hand Gesture Recognition with sEMG
Journal of Medical and Biological Engineering · DOI: 10.1007/s40846-023-00837-5 · Published 2023-11-21
Abstract
Purpose
This study explores the emerging field of human physical action classification within human–machine interaction (HMI), with potential applications in assisting individuals with disabilities and robotics. The research focuses on addressing the challenges posed by diverse sEMG signals, aiming for improved cross-subject hand gesture recognition.
Methods
The proposed approach utilizes deep transfer learning technology, employing multi-feature images (MFI) generated through grayscale conversion and RGB mapping of numerical matrices. These MFIs are fed as input into a fine-tuned AlexNet model. Two databases, ISRMyo-I and Ninapro DB1, are employed for experimentation. Rigorous testing is conducted to identify optimal parameters and feature combinations. Data augmentation techniques are applied, doubling the MFI dataset. Cross-subject experiments encompass six wrist gestures from Ninapro DB1 and thirteen gestures from ISRMyo-I.
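The abstract does not specify the exact grayscale conversion or RGB mapping used to build the multi-feature images, so the following is only a minimal sketch of one plausible construction: each of three sEMG feature matrices is min-max normalized to an 8-bit grayscale plane, and the three planes are stacked as the R, G, and B channels of an MFI. The function name `features_to_mfi` and the choice of three feature types per image are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def features_to_mfi(feature_maps):
    """Stack three per-feature sEMG matrices into one RGB multi-feature
    image (MFI). Hypothetical sketch: each matrix is min-max scaled to
    [0, 255] (grayscale conversion), then the three grayscale planes are
    stacked along the last axis (RGB mapping)."""
    assert len(feature_maps) == 3, "one feature type per RGB plane"
    planes = []
    for m in feature_maps:
        m = np.asarray(m, dtype=np.float64)
        lo, hi = m.min(), m.max()
        # Guard against a constant matrix to avoid division by zero.
        gray = np.zeros_like(m) if hi == lo else (m - lo) / (hi - lo)
        planes.append((gray * 255.0).astype(np.uint8))
    return np.stack(planes, axis=-1)  # shape: (H, W, 3), dtype uint8

# Synthetic example: three feature matrices (e.g. three time-domain
# features computed over 10 channels x 20 windows).
rng = np.random.default_rng(0)
mfi = features_to_mfi([rng.random((10, 20)) for _ in range(3)])
```

An image built this way has the `(H, W, 3)` uint8 layout that a fine-tuned AlexNet-style classifier expects after the usual resize and normalization; the paper's augmentation step that doubles the MFI dataset would operate on such images.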
Results
The study demonstrates substantial performance gains. On Ninapro DB1, the mean accuracy reaches 86.16%, a 13.25% improvement over the best-performing traditional decoding method. Similarly, on ISRMyo-I, a mean accuracy of 70.41% is attained, a 7.4% increase over traditional methods.
Conclusion
This research establishes a robust framework capable of mitigating cross-user differences in hand gesture recognition based on sEMG signals. By employing deep transfer learning techniques and multi-feature image processing, the study significantly enhances the accuracy of cross-subject hand gesture recognition. This advancement holds promise for enriching human–machine interaction and extending the practical applications of this technology in assisting disabled individuals and robotics.
Journal Description:
The Journal of Medical and Biological Engineering (JMBE) is committed to encouraging and advancing the standards of biomedical engineering. The journal is devoted to publishing papers on clinical engineering, biomedical signals, medical imaging, bio-informatics, tissue engineering, and related topics. Beyond these areas, contributions addressing current issues and technological developments that serve this purpose are also welcome.