Hand Sign Recognition using Infrared Imagery Provided by Leap Motion Controller and Computer Vision

Tathagat Banerjee, K. Srikar, S. Reddy, Krishna Sai Biradar, Rithika Reddy Koripally, Gummadi Varshith

2021 International Conference on Innovative Practices in Technology and Management (ICIPTM), 17 February 2021. DOI: 10.1109/ICIPTM52218.2021.9388334
Abstract: Speech impairment, and the conversion of sign language into re-engineered audio signals, has long been of interest to computer science. However, architectural robustness and the extraction of features from very small regions of change have posed decades-long obstacles to realizing this idea. This paper proposes a convolutional neural network, informed by a deep belief model, for hand sign recognition on imagery collected by the Leap Motion controller. The dataset comprises 10 different hand gestures performed by 10 subjects (5 men and 5 women), captured as a set of near-infrared images acquired by the Leap Motion sensor. The paper aims for high accuracy on the corresponding training set in order to build a robust model, as a first step toward image understanding of human signs and toward assisting specially-abled people. We implemented and tested the algorithm on 2,000 images per class, achieving an accuracy of 99.4% and a precision of 99.68%. The study is intended to improve the understanding of infrared imagery for feature detection in small localized regions and to help revive the idea of human audio re-engineering using the same.
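The abstract describes a CNN trained on near-infrared Leap Motion frames covering 10 gesture classes with 2,000 images per class, and reports accuracy and precision as its metrics. The sketch below is a minimal, hypothetical Keras classifier for that setup; it is not the authors' deep-belief-based architecture, and the 64x64 input size, layer widths, and synthetic placeholder data are assumptions made only for illustration.

```python
# Minimal sketch of a 10-class CNN for grayscale near-infrared gesture frames.
# Assumptions (not from the paper): 64x64 single-channel inputs, a generic
# two-block CNN, and random placeholder data standing in for the real corpus.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import precision_score

NUM_CLASSES = 10          # 10 hand gestures in the dataset
IMG_SHAPE = (64, 64, 1)   # assumed resize of the near-infrared frames

def build_model():
    model = models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Placeholder arrays in place of the 2,000-images-per-class corpus.
    x = np.random.rand(100, 64, 64, 1).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=100)

    model = build_model()
    model.fit(x, y, epochs=1, batch_size=32)

    # Macro-averaged per-class precision is one way to report a single
    # precision figure alongside accuracy, as the abstract does.
    preds = model.predict(x).argmax(axis=1)
    print("precision (macro):", precision_score(y, preds, average="macro", zero_division=0))
```

In practice the placeholder arrays would be replaced by the Leap Motion near-infrared images, and splitting train and test data by subject (rather than randomly) would give a better indication of how such a model generalizes across the 10 performers.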