Cross-Attention Based Influence Model for Manual and Nonmanual Sign Language Analysis

Lipisha Chaudhary, Fei Xu, Ifeoma Nwogu
{"title":"手动和非手动手语分析中基于交叉注意力的影响模型","authors":"Lipisha Chaudhary, Fei Xu, Ifeoma Nwogu","doi":"arxiv-2409.08162","DOIUrl":null,"url":null,"abstract":"Both manual (relating to the use of hands) and non-manual markers (NMM), such\nas facial expressions or mouthing cues, are important for providing the\ncomplete meaning of phrases in American Sign Language (ASL). Efforts have been\nmade in advancing sign language to spoken/written language understanding, but\nmost of these have primarily focused on manual features only. In this work,\nusing advanced neural machine translation methods, we examine and report on the\nextent to which facial expressions contribute to understanding sign language\nphrases. We present a sign language translation architecture consisting of\ntwo-stream encoders, with one encoder handling the face and the other handling\nthe upper body (with hands). We propose a new parallel cross-attention decoding\nmechanism that is useful for quantifying the influence of each input modality\non the output. The two streams from the encoder are directed simultaneously to\ndifferent attention stacks in the decoder. Examining the properties of the\nparallel cross-attention weights allows us to analyze the importance of facial\nmarkers compared to body and hand features during a translating task.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":"44 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-Attention Based Influence Model for Manual and Nonmanual Sign Language Analysis\",\"authors\":\"Lipisha Chaudhary, Fei Xu, Ifeoma Nwogu\",\"doi\":\"arxiv-2409.08162\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Both manual (relating to the use of hands) and non-manual markers (NMM), such\\nas facial expressions or mouthing cues, are important for providing the\\ncomplete meaning of phrases in American Sign Language (ASL). Efforts have been\\nmade in advancing sign language to spoken/written language understanding, but\\nmost of these have primarily focused on manual features only. In this work,\\nusing advanced neural machine translation methods, we examine and report on the\\nextent to which facial expressions contribute to understanding sign language\\nphrases. We present a sign language translation architecture consisting of\\ntwo-stream encoders, with one encoder handling the face and the other handling\\nthe upper body (with hands). We propose a new parallel cross-attention decoding\\nmechanism that is useful for quantifying the influence of each input modality\\non the output. The two streams from the encoder are directed simultaneously to\\ndifferent attention stacks in the decoder. 
Examining the properties of the\\nparallel cross-attention weights allows us to analyze the importance of facial\\nmarkers compared to body and hand features during a translating task.\",\"PeriodicalId\":501130,\"journal\":{\"name\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"volume\":\"44 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08162\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Both manual (relating to the use of hands) and non-manual markers (NMM), such as facial expressions or mouthing cues, are important for providing the complete meaning of phrases in American Sign Language (ASL). Efforts have been made in advancing sign language to spoken/written language understanding, but most of these have primarily focused on manual features only. In this work, using advanced neural machine translation methods, we examine and report on the extent to which facial expressions contribute to understanding sign language phrases. We present a sign language translation architecture consisting of two-stream encoders, with one encoder handling the face and the other handling the upper body (with hands). We propose a new parallel cross-attention decoding mechanism that is useful for quantifying the influence of each input modality on the output. The two streams from the encoder are directed simultaneously to different attention stacks in the decoder. Examining the properties of the parallel cross-attention weights allows us to analyze the importance of facial markers compared to body and hand features during a translating task.
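The two-stream design can be pictured with a short sketch. The PyTorch layer below is a minimal, hypothetical rendering of the parallel cross-attention decoding described above: one cross-attention module attends to the face encoder's output, another to the upper-body encoder's output, and both operate on the same decoder queries. All module names, dimensions, and the simple additive fusion are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a decoder layer with parallel cross-attention over two
# encoder streams (face and upper body). Names, sizes, and the sum-based
# fusion are illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn


class ParallelCrossAttentionDecoderLayer(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Separate cross-attention stacks, one per input modality.
        self.cross_attn_face = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn_body = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, tgt, face_mem, body_mem):
        # Self-attention over the target (translation) sequence; causal mask omitted.
        x = self.norm1(tgt + self.self_attn(tgt, tgt, tgt)[0])
        # Parallel cross-attention: the same queries attend to each encoder stream.
        face_out, face_w = self.cross_attn_face(x, face_mem, face_mem)
        body_out, body_w = self.cross_attn_body(x, body_mem, body_mem)
        # Fuse the two streams (a plain sum here; the paper's fusion may differ).
        x = self.norm2(x + face_out + body_out)
        x = self.norm3(x + self.ffn(x))
        # Returning the two attention maps exposes the per-modality weights
        # for the kind of influence analysis the abstract describes.
        return x, face_w, body_w
```

In a full model, layers like this would be stacked in the decoder and fed by two separate encoders, one over face crops and one over upper-body (hand) features.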
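As a toy illustration of the kind of weight inspection the abstract mentions, the snippet below computes the per-token entropy of each stream's attention map (lower entropy means the decoder concentrates on fewer encoder positions for that stream). It assumes the `face_w` and `body_w` maps returned by the layer sketched above; this is only one possible property to examine, and the paper's actual influence measure may differ.

```python
# Toy inspection of parallel cross-attention weights (illustrative only).
# face_w, body_w: (batch, tgt_len, src_len) attention maps where each row
# sums to 1 over the corresponding encoder positions.
import torch


def attention_entropy(weights: torch.Tensor) -> torch.Tensor:
    """Per-token entropy of an attention distribution; lower = more focused."""
    return -(weights * (weights + 1e-8).log()).sum(dim=-1)  # (batch, tgt_len)


def compare_streams(face_w: torch.Tensor, body_w: torch.Tensor) -> dict:
    """Summarize how concentrated the decoder's attention is on each stream."""
    return {
        "face_entropy": attention_entropy(face_w).mean().item(),
        "body_entropy": attention_entropy(body_w).mean().item(),
    }
```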