Evaluating the Pertinence of Pose Estimation model for Sign Language Translation

K. Amrutha, P. Prabu
{"title":"Evaluating the Pertinence of Pose Estimation model for Sign Language Translation","authors":"K. Amrutha, P. Prabu","doi":"10.1142/s1469026823410092","DOIUrl":null,"url":null,"abstract":"Sign Language is the natural language used by a community that is hearing impaired. It is necessary to convert this language to a commonly understandable form as it is used by a comparatively small part of society. The automatic Sign Language interpreters can convert the signs into text or audio by interpreting the hand movements and the corresponding facial expression. These two modalities work in tandem to give complete meaning to each word. In verbal communication, emotions can be conveyed by changing the tone and pitch of the voice, but in sign language, emotions are expressed using nonmanual movements that include body posture and facial muscle movements. Each such subtle moment should be considered as a feature and extracted using different models. This paper proposes three different models that can be used for varying levels of sign language. The first test was carried out using the Convex Hull-based Sign Language Recognition (SLR) finger spelling sign language, next using a Convolution Neural Network-based Sign Language Recognition (CNN-SLR) for fingerspelling sign language, and finally pose-based SLR for word-level sign language. The experiments show that the pose-based SLR model that captures features using landmark or key points has better SLR accuracy than Convex Hull and CNN-based SLR models.","PeriodicalId":422521,"journal":{"name":"Int. J. Comput. Intell. Appl.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Comput. Intell. 
Appl.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s1469026823410092","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Sign language is the natural language used by the hearing-impaired community. Because it is used by a comparatively small part of society, it must be converted into a commonly understandable form. Automatic sign language interpreters can convert signs into text or audio by interpreting hand movements and the corresponding facial expressions. These two modalities work in tandem to give complete meaning to each word. In verbal communication, emotion is conveyed by changing the tone and pitch of the voice; in sign language, it is expressed through nonmanual movements, including body posture and facial muscle movements. Each such subtle movement should be treated as a feature and extracted using a suitable model. This paper proposes three models for different levels of sign language. The first experiment used Convex Hull-based Sign Language Recognition (SLR) for fingerspelling, the next a Convolutional Neural Network-based model (CNN-SLR) for fingerspelling, and the last pose-based SLR for word-level sign language. The experiments show that the pose-based SLR model, which captures features as landmarks or key points, achieves better recognition accuracy than the Convex Hull- and CNN-based models.
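The abstract does not detail how the pose-based model turns landmarks into features. A minimal sketch of the general idea, assuming landmarks arrive as (x, y) coordinates from some pose or hand-tracking detector: translate the points so a reference landmark (e.g. the wrist) is the origin, then divide by the largest distance, giving a translation- and scale-invariant vector a classifier could consume. The function name and the choice of reference point are illustrative assumptions, not the paper's method.

```python
import numpy as np

def pose_features(landmarks):
    """Turn an (N, 2) array of landmark coordinates into a
    translation- and scale-invariant feature vector.

    Illustrative sketch: the first landmark is assumed to be a
    stable reference point such as the wrist."""
    pts = np.asarray(landmarks, dtype=float)
    centered = pts - pts[0]                       # translation invariance
    scale = np.linalg.norm(centered, axis=1).max()
    if scale > 0:
        centered = centered / scale               # scale invariance
    return centered.flatten()

# Two hands of different size and position making the same shape
# map to the same feature vector.
a = [(0, 0), (1, 0), (1, 1)]
b = [(5, 5), (7, 5), (7, 7)]  # same shape, shifted and scaled by 2
fa, fb = pose_features(a), pose_features(b)
```

Normalizing this way means the classifier sees the hand's shape rather than where it sits in the frame, which is one plausible reason landmark-based features generalize better than raw pixels.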