{"title":"Lightweight Deep Learning Model for Hand Gesture Recognition Based on ㎜Wave Radar Point Cloud","authors":"Soojin Lee, Jiheon Kang","doi":"10.5302/j.icros.2023.23.0096","DOIUrl":null,"url":null,"abstract":"This paper introduces a lightweight deep learning model for human-hand-gesture recognition, leveraging point cloud data acquired from a mmWave radar. The proposed 2D projection method can be applied for the preprocessing of input data for lightweight deep learning models by effectively preserving the spatial and coordinate information of each point within the 3D voxel point cloud. In addition, we proposed a 2D-CNN-TCN deep learning model that significantly reduces the number of learnable parameters while maintaining or improving the accuracy of hand-gesture recognition. The mmWave radar sensor module used in this study was IWR6843AoPEVM from Texas Instruments, and a comprehensive dataset consisting of nine distinct hand gestures was collected, with each gesture captured over a duration of 20–25 min, resulting in a total collection time of 190 min. The proposed model was trained and evaluated on a general-purpose PC. The proposed 2D-CNN-TCN model was compared to the 3D-CNN-LSTM model to reflect the 3D voxel input and time-series characteristics. The performance evaluation demonstrated that the performance of the proposed model was 1.3% enhanced with respect to the 3D-CNN-LSTM model, resulting in a recognition accuracy of 95.06% for the proposed model. Moreover, the proposed model achieved a 5.5% reduction in the number of model parameters with respect to the 3D-CNN-LSTM model. Furthermore, the lightweight deep learning model was successfully deployed as an Android application, and the usability of the model was verified through real-time hand-gesture recognition.","PeriodicalId":38644,"journal":{"name":"Journal of Institute of Control, Robotics and Systems","volume":"107 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Institute of Control, Robotics and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5302/j.icros.2023.23.0096","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Mathematics","Score":null,"Total":0}
Abstract
This paper introduces a lightweight deep learning model for human hand-gesture recognition that leverages point cloud data acquired from a mmWave radar. The proposed 2D projection method preprocesses the input to lightweight deep learning models while effectively preserving the spatial coordinate information of each point in the 3D voxel point cloud. In addition, we propose a 2D-CNN-TCN deep learning model that significantly reduces the number of learnable parameters while maintaining or improving hand-gesture recognition accuracy. The mmWave radar sensor module used in this study was the Texas Instruments IWR6843AoPEVM, and a comprehensive dataset of nine distinct hand gestures was collected, with each gesture captured for 20–25 min, for a total collection time of 190 min. The proposed model was trained and evaluated on a general-purpose PC and compared against a 3D-CNN-LSTM model, which reflects the 3D voxel input and time-series characteristics of the task. In the performance evaluation, the proposed model improved recognition accuracy by 1.3% over the 3D-CNN-LSTM model, reaching 95.06%, while using 5.5% fewer model parameters. Furthermore, the lightweight model was deployed as an Android application, and its usability was verified through real-time hand-gesture recognition.
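
The abstract does not include code, but the 2D projection step can be illustrated with a minimal sketch. The example below assumes a binary occupancy voxel grid and max-projects it onto the three orthogonal planes, stacking the results as image channels; the function name, grid size, and choice of max-projection are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def project_voxels_2d(voxels: np.ndarray) -> np.ndarray:
    """Project a 3D occupancy voxel grid onto its three orthogonal
    planes (XY, XZ, YZ) and stack the projections as 2D channels.

    Hypothetical reading of the paper's 2D projection preprocessing;
    the authors' exact projection scheme is not specified here.
    """
    assert voxels.ndim == 3  # (X, Y, Z) occupancy grid
    xy = voxels.max(axis=2)  # collapse Z -> XY plane
    xz = voxels.max(axis=1)  # collapse Y -> XZ plane
    yz = voxels.max(axis=0)  # collapse X -> YZ plane
    # Stack as channels; assumes a cubic grid (same size per axis).
    return np.stack([xy, xz, yz], axis=0)  # (3, N, N)

# Example: one 32x32x32 voxelized radar point cloud frame (sparse occupancy)
frame = (np.random.rand(32, 32, 32) > 0.99).astype(np.float32)
proj = project_voxels_2d(frame)
print(proj.shape)  # (3, 32, 32)
```

Collapsing each axis this way keeps every occupied point visible in at least one plane, which is one plausible way the projection can preserve per-point spatial coordinate information while reducing the input from 3D volumes to 2D images.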
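Likewise, a minimal PyTorch sketch of a 2D-CNN-TCN pipeline: a shared 2D CNN extracts per-frame features from the projected frames, and dilated 1D convolutions (a simplified, non-causal stand-in for a full TCN with residual blocks) model the temporal dynamics before classification. All layer widths, kernel sizes, and the 16-frame clip length are illustrative assumptions; only the nine-class output follows from the paper.

```python
import torch
import torch.nn as nn

class CNN2D_TCN(nn.Module):
    """Illustrative 2D-CNN-TCN: per-frame 2D CNN features followed by
    dilated temporal convolutions. Layer sizes are assumptions, not
    the authors' configuration."""

    def __init__(self, in_ch: int = 3, n_classes: int = 9):
        super().__init__()
        self.cnn = nn.Sequential(  # applied to each frame independently
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B*T, 32, 1, 1)
        )
        self.tcn = nn.Sequential(  # dilated 1D convolutions over time
            nn.Conv1d(32, 64, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(64, 64, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) -- a clip of T projected frames
        b, t, c, h, w = x.shape
        f = self.cnn(x.reshape(b * t, c, h, w)).reshape(b, t, -1)
        f = self.tcn(f.transpose(1, 2))   # (B, 64, T)
        return self.head(f.mean(dim=2))   # temporal average -> logits

model = CNN2D_TCN()
logits = model(torch.randn(2, 16, 3, 32, 32))  # 2 clips, 16 frames each
print(logits.shape)  # torch.Size([2, 9])
```

The parameter savings over a 3D-CNN-LSTM come from this factoring: 2D kernels replace 3D ones, and fixed-weight temporal convolutions replace the gated recurrence of an LSTM.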