Hand Sign Recognition using Infrared Imagery Provided by Leap Motion Controller and Computer Vision

Tathagat Banerjee, K. Srikar, S. Reddy, Krishna Sai Biradar, Rithika Reddy Koripally, Gummadi. Varshith
{"title":"使用Leap运动控制器和计算机视觉提供的红外图像进行手势识别","authors":"Tathagat Banerjee, K. Srikar, S. Reddy, Krishna Sai Biradar, Rithika Reddy Koripally, Gummadi. Varshith","doi":"10.1109/ICIPTM52218.2021.9388334","DOIUrl":null,"url":null,"abstract":"Speech Impairment and conversion of sign language to human re-engineered audio signals is something computer science has always been interested in. However, the architectural robustness and extraction of features on a very insignificant area of change have posed decade long problems to achieve this idea. The paper proposes a Convolutional Neural network based on a deep belief model on Data imagery collected by leap motion controllers on hand sign recognition. The database is composed of 10 different hand-gestures that were performed by 10 different subjects (5 men and 5 women) which is presented, composed by a set of near-infrared images acquired by the Leap Motion sensor. The paper tries to achieve high accuracy on the pertaining training set inorder to create and form a robust model. It embraces the first step towards image understanding of human signs and aid specially-abled people. We have implemented and tested the algorithm for 2000 images each class. The paper achieves the accuracy and precision of 99.4% and 99.68% respectively. The implications of the study intend to enhance understanding of infrared imagery for small areas of localization feature detection and intend to help the idea of human audio re-engineering a resurgence by using the same.","PeriodicalId":315265,"journal":{"name":"2021 International Conference on Innovative Practices in Technology and Management (ICIPTM)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Hand Sign Recognition using Infrared Imagery Provided by Leap Motion Controller and Computer Vision\",\"authors\":\"Tathagat Banerjee, K. Srikar, S. Reddy, Krishna Sai Biradar, Rithika Reddy Koripally, Gummadi. Varshith\",\"doi\":\"10.1109/ICIPTM52218.2021.9388334\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Speech Impairment and conversion of sign language to human re-engineered audio signals is something computer science has always been interested in. However, the architectural robustness and extraction of features on a very insignificant area of change have posed decade long problems to achieve this idea. The paper proposes a Convolutional Neural network based on a deep belief model on Data imagery collected by leap motion controllers on hand sign recognition. The database is composed of 10 different hand-gestures that were performed by 10 different subjects (5 men and 5 women) which is presented, composed by a set of near-infrared images acquired by the Leap Motion sensor. The paper tries to achieve high accuracy on the pertaining training set inorder to create and form a robust model. It embraces the first step towards image understanding of human signs and aid specially-abled people. We have implemented and tested the algorithm for 2000 images each class. The paper achieves the accuracy and precision of 99.4% and 99.68% respectively. 
The implications of the study intend to enhance understanding of infrared imagery for small areas of localization feature detection and intend to help the idea of human audio re-engineering a resurgence by using the same.\",\"PeriodicalId\":315265,\"journal\":{\"name\":\"2021 International Conference on Innovative Practices in Technology and Management (ICIPTM)\",\"volume\":\"54 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-02-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Innovative Practices in Technology and Management (ICIPTM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIPTM52218.2021.9388334\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Innovative Practices in Technology and Management (ICIPTM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIPTM52218.2021.9388334","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 1

Abstract

Speech impairment and the conversion of sign language into re-engineered audio signals have long been of interest to computer science. However, architectural robustness and feature extraction over very small regions of change have posed decade-long obstacles to realizing this idea. This paper proposes a convolutional neural network, built on a deep belief model, for hand sign recognition on imagery collected by a Leap Motion controller. The dataset comprises 10 different hand gestures performed by 10 subjects (5 men and 5 women), captured as a set of near-infrared images acquired by the Leap Motion sensor. The paper aims for high accuracy on the corresponding training set in order to build a robust model, as a first step toward image understanding of human signs and toward aiding specially-abled people. The algorithm was implemented and tested on 2,000 images per class, achieving an accuracy of 99.4% and a precision of 99.68%. The study intends to improve the understanding of infrared imagery for feature detection over small localized regions and to support a resurgence of human audio re-engineering built on it.
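The abstract describes a CNN classifier over single-channel near-infrared gesture frames with 10 output classes. As a rough illustration only, the sketch below shows what such a classifier could look like in PyTorch; the layer widths, the 128x128 input resolution, and the GestureCNN name are assumptions for this example and are not the authors' architecture, which the abstract does not detail.

```python
# Minimal sketch of a 10-class CNN for near-infrared hand-gesture frames.
# All layer sizes and the 128x128 grayscale input are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10  # 10 hand gestures, per the dataset description


class GestureCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        # Single input channel: near-infrared frames are effectively grayscale.
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        # A 128x128 input halved three times leaves 16x16 feature maps.
        self.fc1 = nn.Linear(128 * 16 * 16, 256)
        self.fc2 = nn.Linear(256, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = torch.flatten(x, start_dim=1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)  # raw logits; CrossEntropyLoss applies softmax


if __name__ == "__main__":
    model = GestureCNN()
    dummy = torch.randn(4, 1, 128, 128)  # batch of 4 fake IR frames
    print(model(dummy).shape)  # torch.Size([4, 10])
```

In such a pipeline, the logits would be trained with cross-entropy over the labelled gesture set and evaluated with accuracy and per-class precision, the two metrics the abstract reports.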