Spatiotemporal 2D Skeleton-based Image for Dynamic Gesture Recognition Using Convolutional Neural Networks

J. Paulo, L. Garrote, P. Peixoto, U. Nunes
{"title":"基于时空的二维骨架图像的卷积神经网络动态手势识别","authors":"J. Paulo, L. Garrote, P. Peixoto, U. Nunes","doi":"10.1109/RO-MAN50785.2021.9515418","DOIUrl":null,"url":null,"abstract":"This paper presents a dynamic gesture recognition approach using a novel spatiotemporal 2D skeleton image representation that can be fed to computationally efficient deep convolutional neural networks, for applications on human-robot interaction. Gestures are a seamless modality of human interaction and represent a potentially natural way to interact with the smart devices around us, like robots. The contribution of this paper is the proposal of a visually interpretable representation of dynamic gestures, which has a two-fold advantage: (i) conveys both spatial and temporal characteristics relying on a technique inspired in computer graphics, (ii) and can be used with simple and efficient architectures of convolutional neural networks. In our representation, a 3D skeleton model is projected to a 2D camera’s point-of-view, preserving spatial relations, and through a sliding window the temporal domain is encoded in a fused image of consecutive frames, through a shading motion effect achieved by manipulating a transparency coefficient. The result is a 2D image that when fed to simple custom-designed convolutional neural networks, it is achieved accurate classification of dynamic gestures. Experimmental reuslts obtained with a purposely captured 6 gesture dataset of 11 subjects, and also 2 public datasets, give evidence of a strong performance of our approach, when compared to other methods.","PeriodicalId":6854,"journal":{"name":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"1138-1144"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Spatiotemporal 2D Skeleton-based Image for Dynamic Gesture Recognition Using Convolutional Neural Networks\",\"authors\":\"J. Paulo, L. Garrote, P. Peixoto, U. Nunes\",\"doi\":\"10.1109/RO-MAN50785.2021.9515418\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a dynamic gesture recognition approach using a novel spatiotemporal 2D skeleton image representation that can be fed to computationally efficient deep convolutional neural networks, for applications on human-robot interaction. Gestures are a seamless modality of human interaction and represent a potentially natural way to interact with the smart devices around us, like robots. The contribution of this paper is the proposal of a visually interpretable representation of dynamic gestures, which has a two-fold advantage: (i) conveys both spatial and temporal characteristics relying on a technique inspired in computer graphics, (ii) and can be used with simple and efficient architectures of convolutional neural networks. In our representation, a 3D skeleton model is projected to a 2D camera’s point-of-view, preserving spatial relations, and through a sliding window the temporal domain is encoded in a fused image of consecutive frames, through a shading motion effect achieved by manipulating a transparency coefficient. The result is a 2D image that when fed to simple custom-designed convolutional neural networks, it is achieved accurate classification of dynamic gestures. 
Experimmental reuslts obtained with a purposely captured 6 gesture dataset of 11 subjects, and also 2 public datasets, give evidence of a strong performance of our approach, when compared to other methods.\",\"PeriodicalId\":6854,\"journal\":{\"name\":\"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)\",\"volume\":\"1 1\",\"pages\":\"1138-1144\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RO-MAN50785.2021.9515418\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN50785.2021.9515418","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper presents a dynamic gesture recognition approach using a novel spatiotemporal 2D skeleton image representation that can be fed to computationally efficient deep convolutional neural networks, for applications in human-robot interaction. Gestures are a seamless modality of human interaction and represent a potentially natural way to interact with the smart devices around us, like robots. The contribution of this paper is the proposal of a visually interpretable representation of dynamic gestures, which has a two-fold advantage: (i) it conveys both spatial and temporal characteristics, relying on a technique inspired by computer graphics, and (ii) it can be used with simple and efficient convolutional neural network architectures. In our representation, a 3D skeleton model is projected to a 2D camera's point of view, preserving spatial relations, and through a sliding window the temporal domain is encoded in a fused image of consecutive frames, using a shading motion effect achieved by manipulating a transparency coefficient. The result is a 2D image that, when fed to simple custom-designed convolutional neural networks, yields accurate classification of dynamic gestures. Experimental results obtained with a purposely captured 6-gesture dataset of 11 subjects, as well as 2 public datasets, give evidence of the strong performance of our approach when compared to other methods.
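To make the fused-image idea more concrete, below is a minimal sketch, under assumptions of our own (a placeholder bone topology, a fixed canvas size, and a linear transparency schedule, none of which are taken from the paper), of how a sliding window of already-projected 2D skeleton frames could be rendered into a single grayscale image whose fading intensity encodes temporal order.

```python
import numpy as np

# Hypothetical bone list: pairs of joint indices connected by line segments.
# The actual skeleton topology depends on the pose-estimation model used.
BONES = [(0, 1), (1, 2), (2, 3)]

def draw_skeleton(canvas, joints_2d, alpha):
    """Draw one projected skeleton frame onto `canvas` with opacity `alpha`.

    joints_2d : (J, 2) array of pixel coordinates, already projected to the
                2D camera point of view.
    alpha     : transparency coefficient in [0, 1]; later frames receive
                higher values, producing a motion-shading trail.
    """
    h, w = canvas.shape
    for a, b in BONES:
        # Sample points along the bone segment and accumulate intensity.
        for t in np.linspace(0.0, 1.0, num=50):
            x, y = (1 - t) * joints_2d[a] + t * joints_2d[b]
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w and 0 <= yi < h:
                canvas[yi, xi] = max(canvas[yi, xi], alpha)
    return canvas

def fuse_window(frames, size=(128, 128)):
    """Fuse a sliding window of skeleton frames into one 2D image.

    frames : list of (J, 2) arrays, oldest first. Older frames are drawn
             with lower alpha, so the fused image visually encodes the
             temporal progression of the gesture.
    """
    canvas = np.zeros(size, dtype=np.float32)
    n = len(frames)
    for i, joints in enumerate(frames):
        alpha = (i + 1) / n  # linear schedule: an assumption, not from the paper
        canvas = draw_skeleton(canvas, joints, alpha)
    return canvas  # this image can then be fed to a small custom CNN
```

The resulting single-channel image can be stacked or resized as needed before being passed to a lightweight convolutional classifier; the exact alpha schedule and rendering details in the paper may differ from this sketch.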