Gesture Recognition Model with Multi-Tracking Capture System for Human-Robot Interaction

K. H. Nguyen, Anh-Duy Pham, Tri Bien Minh, Thi-Thu-Thao Phan, X. Do
{"title":"Gesture Recognition Model with Multi-Tracking Capture System for Human-Robot Interaction","authors":"K. H. Nguyen, Anh-Duy Pham, Tri Bien Minh, Thi-Thu-Thao Phan, X. Do","doi":"10.1109/ICSSE58758.2023.10227183","DOIUrl":null,"url":null,"abstract":"This study develops a wireless gesture recognition system for human-robot interaction using a high-speed marker-based motion capture system that requires no hardware development and no onboard power source. A novel gesture recognition model recognizes four gestures performed while holding a rigid-body object and translates them into robot control signals. The gestures are Flick Back to Front, Flick Front to Back, Rotate Clockwise, and Rotate Counter-clockwise. The system has four main components: a host PC with Vicon Tracker software, a set of Vicon Vantage V8 infrared cameras, a client PC that receives motion capture data and translates gestures to instruct a simulated KUKA youBot robot in the Gazebo simulation environment, and a gesture input unit. The poses of the gesture input unit are used to train the model using a surrogate deep neural network and the XGBoost ensemble method in a semi-supervised setting. The algorithm’s decision-making process is explicated through the implementation of the Layer-wise Relevance Propagation methodology in PyTorch. The control approach is similar to the way trainers teach domestic pets to perform specific actions in response to different gestures. The proposed method offers an alternative to commanding the robot through typing or using joysticks. 
The current gesture recognition rate is around 60%, but performance will improve over time as new training samples are collected and event detection algorithms are improved to avoid misinterpreting unrelated movements as classified gestures.","PeriodicalId":280745,"journal":{"name":"2023 International Conference on System Science and Engineering (ICSSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on System Science and Engineering (ICSSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSSE58758.2023.10227183","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This study develops a wireless gesture recognition system for human-robot interaction using a high-speed marker-based motion capture system that requires no hardware development and no onboard power source. A novel gesture recognition model recognizes four gestures performed while holding a rigid-body object and translates them into robot control signals. The gestures are Flick Back to Front, Flick Front to Back, Rotate Clockwise, and Rotate Counter-clockwise. The system has four main components: a host PC running Vicon Tracker software, a set of Vicon Vantage V8 infrared cameras, a client PC that receives motion capture data and translates gestures into instructions for a simulated KUKA youBot in the Gazebo simulation environment, and a gesture input unit. The poses of the gesture input unit are used to train the model with a surrogate deep neural network and the XGBoost ensemble method in a semi-supervised setting. The algorithm's decisions are explained using Layer-wise Relevance Propagation, implemented in PyTorch. The control approach resembles the way trainers teach domestic pets to perform specific actions in response to different gestures, and it offers an alternative to commanding the robot by typing or with joysticks. The current gesture recognition rate is around 60%; performance is expected to improve as new training samples are collected and as event detection algorithms are refined to avoid misinterpreting unrelated movements as classified gestures.
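To make the pipeline concrete, the sketch below shows one plausible shape for the client-side step that turns a short window of rigid-body poses into one of the four gesture labels and then into a robot command. The thresholds, feature choices (pitch/yaw deltas), and command vocabulary are illustrative assumptions only; the paper's actual classifier is a surrogate deep neural network plus XGBoost trained on Vicon pose streams, not these hand-set rules.

```python
# Hypothetical sketch of gesture-event classification and gesture-to-command
# mapping for the four gestures in the paper. All thresholds, field names,
# and commands here are assumptions for illustration, not the authors' code.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Pose:
    """One motion-capture sample of the hand-held rigid body."""
    t: float      # timestamp in seconds
    pitch: float  # rotation about the lateral axis, radians (flicks)
    yaw: float    # rotation about the vertical axis, radians (rotations)

# Assumed minimum rotation over a window for an event to count as a gesture.
FLICK_THRESHOLD = 0.8   # rad of pitch change
ROTATE_THRESHOLD = 0.8  # rad of yaw change

def classify_gesture(window: list[Pose]) -> str | None:
    """Return one of the four gesture labels, or None when the window
    contains no gesture (guarding against unrelated movements)."""
    d_pitch = window[-1].pitch - window[0].pitch
    d_yaw = window[-1].yaw - window[0].yaw
    if abs(d_pitch) >= FLICK_THRESHOLD and abs(d_pitch) >= abs(d_yaw):
        return "flick_back_to_front" if d_pitch > 0 else "flick_front_to_back"
    if abs(d_yaw) >= ROTATE_THRESHOLD:
        return "rotate_cw" if d_yaw < 0 else "rotate_ccw"
    return None

# Assumed gesture -> youBot command mapping, in the spirit of cueing a pet
# with distinct gestures for distinct actions.
COMMANDS = {
    "flick_back_to_front": "move_forward",
    "flick_front_to_back": "move_backward",
    "rotate_cw": "turn_right",
    "rotate_ccw": "turn_left",
}

def gesture_to_command(window: list[Pose]) -> str:
    """Map a pose window to a robot command; idle when no gesture fires."""
    gesture = classify_gesture(window)
    return COMMANDS.get(gesture, "idle")
```

In the real system this classification step would run on the client PC over the live Vicon data stream, with the resulting command forwarded to the simulated youBot in Gazebo.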