Cross-Scene Sign Language Gesture Recognition Based on Frequency-Modulated Continuous Wave Radar

Signals · Published: 2022-12-06 · DOI: 10.3390/signals3040052
Xiao-chao Dang, Kefeng Wei, Zhanjun Hao, Zhongyu Ma

Abstract

This paper uses millimeter-wave FMCW radar to recognize sign-language gestures across four scene domains: the experimental environment, the experimental location, the experimental direction, and the experimental personnel. In each scene domain, part of the data is used as the training set and the remainder as a validation set, so that recognition results from known scenes can be extended to unknown scenes once raw gesture data have been collected across domains. Three scene-independent gesture features are then extracted (the range-time spectrum, the range-Doppler spectrum, and the range-angle spectrum) and fused to represent a complete, comprehensive gesture action. A three-dimensional convolutional neural network (3D CNN) is then trained to recognize the fused features. Experimental results show that the 3D CNN can fuse the different gesture feature sets: the average recognition rate of the fused gesture features is 87% in the same scene domain and 83.1% in an unknown scene domain, which verifies the feasibility of gesture recognition across scene domains.
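The range-time and range-Doppler spectra named in the abstract are standard FMCW products: an FFT along fast time (within each chirp) yields a range profile per chirp, and a second FFT along slow time (across chirps) resolves Doppler. The sketch below illustrates that two-stage FFT on a simulated beat signal; the target's range and Doppler bin positions, the frame dimensions, and the noise level are all hypothetical assumptions for illustration, not values from the paper.

```python
import numpy as np

# Simulated FMCW beat signal for one frame: chirps x fast-time samples.
# A point target at fixed range produces one beat frequency on every
# chirp; its phase progression across chirps encodes Doppler.
n_chirps, n_samples = 64, 128          # assumed frame geometry
range_bin, doppler_bin = 20, 10        # hypothetical target bins
rng = np.random.default_rng(0)

fast = np.arange(n_samples)            # fast-time sample index
slow = np.arange(n_chirps)             # slow-time (chirp) index
beat = np.exp(2j * np.pi * (range_bin * fast[None, :] / n_samples
                            + doppler_bin * slow[:, None] / n_chirps))
beat = beat + 0.01 * rng.standard_normal((n_chirps, n_samples))

# Stage 1: FFT along fast time -> one range profile per chirp.
# Stacking these profiles over chirps gives the range-time spectrum.
range_time = np.abs(np.fft.fft(beat, axis=1))

# Stage 2: FFT along slow time -> range-Doppler spectrum.
range_doppler = np.abs(np.fft.fft(np.fft.fft(beat, axis=1), axis=0))

# The target peak lands at the expected (Doppler, range) cell.
d, r = np.unravel_index(np.argmax(range_doppler), range_doppler.shape)
print(d, r)  # 10 20
```

A range-angle spectrum would follow the same pattern with a third FFT across receive antennas; stacking such maps over time gives the 3D tensors that a 3D CNN, as used in the paper, can consume.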