{"title":"Hand gesture recognition using multi-sensor information fusion","authors":"Aiguo Wang, Huancheng Liu, Jingyu Yan","doi":"10.1117/12.2671270","DOIUrl":null,"url":null,"abstract":"Accurately recognizing hand gestures has great significance in assisting human-computer interaction, enhancing user experience, and developing a human-centered ubiquitous system. Due to the inherent complexity of hand gestures, however, how to capture discriminant features of hand motions and build a gesture recognition model remains crucial. To this end, we herein propose a gesture recognition method based on multi-sensor information fusion. Specifically, we first use the accelerometer and surface electromyography (sEMG) sensor to capture the kinematic and physiological signals of hand motions. Afterward, we utilize the sliding window technique to segment the streaming sensor data and extract various features from each segment to return a feature vector. We then optimize a gesture recognition model with the feature vectors. Finally, comparative experiments are conducted on the collected dataset in terms of different machine learning models, different sensors, as well as different types of features. Results show the joint use of sEMG sensor and accelerometer achieves the average accuracy of 97.88% compared to the 90.38% of using sEMG sensor and 84.03% of using accelerometer among four classifiers, which indicates the effectiveness of multi-sensor fusion. 
Besides, we quantitatively investigate the impact of null gesture on a gesture recognizer.","PeriodicalId":227528,"journal":{"name":"International Conference on Artificial Intelligence and Computer Engineering (ICAICE 2022)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Artificial Intelligence and Computer Engineering (ICAICE 2022)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2671270","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Accurately recognizing hand gestures is of great significance for assisting human-computer interaction, enhancing user experience, and developing human-centered ubiquitous systems. Due to the inherent complexity of hand gestures, however, capturing discriminative features of hand motions and building an accurate gesture recognition model remain challenging. To this end, we propose a gesture recognition method based on multi-sensor information fusion. Specifically, we first use an accelerometer and a surface electromyography (sEMG) sensor to capture the kinematic and physiological signals of hand motions. We then apply the sliding window technique to segment the streaming sensor data and extract various features from each segment to form a feature vector, with which we train and optimize a gesture recognition model. Finally, comparative experiments are conducted on the collected dataset across different machine learning models, different sensors, and different types of features. Results show that the joint use of the sEMG sensor and the accelerometer achieves an average accuracy of 97.88% over four classifiers, compared with 90.38% using the sEMG sensor alone and 84.03% using the accelerometer alone, which indicates the effectiveness of multi-sensor fusion. In addition, we quantitatively investigate the impact of the null gesture on a gesture recognizer.
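The pipeline described in the abstract (sliding-window segmentation, per-window feature extraction, feature-level fusion of sEMG and accelerometer channels, then classifier training) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the window length, step size, time-domain features (mean absolute value, RMS, variance, waveform length), channel counts, and the random-forest classifier are all assumptions for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def sliding_windows(stream, win, step):
    # Segment a (samples, channels) stream into fixed-length windows.
    return [stream[i:i + win] for i in range(0, len(stream) - win + 1, step)]

def extract_features(window):
    # Illustrative time-domain features per channel (assumed, not from the paper):
    # mean absolute value, root mean square, variance, waveform length.
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    var = np.var(window, axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, rms, var, wl])

rng = np.random.default_rng(0)
X, y = [], []
# Synthetic streams for 2 gesture classes over 6 channels
# (e.g., 3 sEMG + 3 accelerometer axes); amplitude depends on the class.
for label in (0, 1):
    stream = rng.normal(scale=1.0 + label, size=(2000, 6))
    for w in sliding_windows(stream, win=200, step=100):
        # Feature-level fusion: concatenate features from all channels.
        X.append(extract_features(w))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"fused-feature cross-validated accuracy: {acc:.2f}")
```

In this sketch, fusion happens at the feature level by concatenating per-channel features into one vector; an alternative design would train one classifier per sensor and fuse their decisions.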