S. P. K. Arachchi, Noorkholis Luthfil Hakim, Hui-Huang Hsu, S. Klimenko, T. Shih
{"title":"Real-Time Static and Dynamic Gesture Recognition Using Mixed Space Features for 3D Virtual World's Interactions","authors":"S. P. K. Arachchi, Noorkholis Luthfil Hakim, Hui-Huang Hsu, S. Klimenko, T. Shih","doi":"10.1109/WAINA.2018.00157","DOIUrl":null,"url":null,"abstract":"Gesture Recognition is a technology that makes devices such as a computer capable of recognizing and responding to different gestures produced by the human body. With the recent growth of 3D virtual world applications, the demand to improve the gesture recognition method, especially hand gesture recognition, has increased. In this paper, we propose a novel vision-based gesture recognition system for controlling the 3D virtual world based on depth images obtained from the 3D camera device. For the proposed system, we used mix spatial space features consisting of 3D and 2D space features. The finger position in the point cloud represents the 3D space feature and the contour of hand from the images as 2D space feature. To investigate the robustness of our system, we designed 9 gestures including 6 static and 3 dynamic varieties. During experiments, we instruct people to display those gestures and calculate the recognition rate. Our results demonstrate that the proposed system was able to recognize the 9 gestures very well with the average accuracy of 95% for static gestures and 81.34% for dynamic ones.","PeriodicalId":296466,"journal":{"name":"2018 32nd International Conference on Advanced Information Networking and Applications Workshops (WAINA)","volume":"204 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 32nd International Conference on Advanced Information Networking and Applications Workshops (WAINA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WAINA.2018.00157","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 11
Abstract
Gesture recognition is a technology that enables devices such as computers to recognize and respond to different gestures produced by the human body. With the recent growth of 3D virtual world applications, the demand for improved gesture recognition methods, especially hand gesture recognition, has increased. In this paper, we propose a novel vision-based gesture recognition system for controlling a 3D virtual world using depth images obtained from a 3D camera. The proposed system uses mixed space features consisting of 3D and 2D space features: fingertip positions in the point cloud serve as the 3D space feature, and the hand contour extracted from the images serves as the 2D space feature. To investigate the robustness of our system, we designed 9 gestures, comprising 6 static and 3 dynamic varieties. During the experiments, we asked participants to perform these gestures and measured the recognition rate. The results demonstrate that the proposed system recognized the 9 gestures well, with an average accuracy of 95% for static gestures and 81.34% for dynamic ones.
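To make the mixed-space feature idea concrete, below is a minimal Python/OpenCV sketch of combining a 2D hand-contour descriptor from a depth image with 3D fingertip positions from the point cloud into a single feature vector. The specific choices here (binary thresholding for hand segmentation, Hu moments as the contour descriptor, five fingertips, OpenCV 4.x API) are illustrative assumptions, not the descriptors used in the paper.

```python
import numpy as np
import cv2  # assumes OpenCV 4.x

def extract_mixed_features(depth_image, fingertip_points_3d):
    """Concatenate a 2D contour descriptor (from the depth image) with
    flattened 3D fingertip coordinates (from the point cloud).
    Illustrative only; not the paper's exact feature definition."""
    # 2D space feature: segment the hand region (any nonzero depth here),
    # keep the largest contour, and summarize it with 7 Hu moments.
    mask = (depth_image > 0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        contour_feat = cv2.HuMoments(cv2.moments(hand)).flatten()
    else:
        contour_feat = np.zeros(7)

    # 3D space feature: fingertip positions in camera coordinates (meters).
    finger_feat = np.asarray(fingertip_points_3d, dtype=np.float32).flatten()

    # Mixed space feature: one vector handed to the gesture classifier.
    return np.concatenate([contour_feat.astype(np.float32), finger_feat])

# Usage with synthetic data: a fake depth frame containing a circular "hand"
# blob at ~0.8 m and five made-up fingertip coordinates.
depth = np.zeros((240, 320), dtype=np.uint16)
cv2.circle(depth, (160, 120), 40, 800, -1)
fingertips = [[0.01 * i, 0.02, 0.8] for i in range(5)]
features = extract_mixed_features(depth, fingertips)
print(features.shape)  # (22,) = 7 contour values + 5 fingertips x 3 coordinates
```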