Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations

Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, Matt Huenerfauth
{"title":"模拟美国手语的速度和时间产生逼真的动画","authors":"Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, Matt Huenerfauth","doi":"10.1145/3234695.3236356","DOIUrl":null,"url":null,"abstract":"To enable more websites to provide content in the form of sign language, we investigate software to partially automate the synthesis of animations of American Sign Language (ASL), based on a human-authored message specification. We automatically select: where prosodic pauses should be inserted (based on the syntax or other features), the time-duration of these pauses, and the variations of the speed at which individual words are performed (e.g. slower at the end of phrases). Based on an analysis of a corpus of multi-sentence ASL recordings with motion-capture data, we trained machine-learning models, which were evaluated in a cross-validation study. The best model out-performed a prior state-of-the-art ASL timing model. In a study with native ASL signers evaluating animations generated from either our new model or from a simple baseline (uniform speed and no pauses), participants indicated a preference for speed and pausing in ASL animations from our model.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"102 4 Pt 1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"28","resultStr":"{\"title\":\"Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations\",\"authors\":\"Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, Matt Huenerfauth\",\"doi\":\"10.1145/3234695.3236356\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To enable more websites to provide content in the form of sign language, we investigate software to partially automate the synthesis of animations of American Sign Language (ASL), based on a human-authored message specification. We automatically select: where prosodic pauses should be inserted (based on the syntax or other features), the time-duration of these pauses, and the variations of the speed at which individual words are performed (e.g. slower at the end of phrases). Based on an analysis of a corpus of multi-sentence ASL recordings with motion-capture data, we trained machine-learning models, which were evaluated in a cross-validation study. The best model out-performed a prior state-of-the-art ASL timing model. 
In a study with native ASL signers evaluating animations generated from either our new model or from a simple baseline (uniform speed and no pauses), participants indicated a preference for speed and pausing in ASL animations from our model.\",\"PeriodicalId\":110197,\"journal\":{\"name\":\"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility\",\"volume\":\"102 4 Pt 1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"28\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3234695.3236356\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3234695.3236356","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 28

Abstract

To enable more websites to provide content in the form of sign language, we investigate software to partially automate the synthesis of animations of American Sign Language (ASL), based on a human-authored message specification. We automatically select: where prosodic pauses should be inserted (based on the syntax or other features), the time-duration of these pauses, and the variations of the speed at which individual words are performed (e.g. slower at the end of phrases). Based on an analysis of a corpus of multi-sentence ASL recordings with motion-capture data, we trained machine-learning models, which were evaluated in a cross-validation study. The best model out-performed a prior state-of-the-art ASL timing model. In a study with native ASL signers evaluating animations generated from either our new model or from a simple baseline (uniform speed and no pauses), participants indicated a preference for speed and pausing in ASL animations from our model.
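The abstract gives no implementation details, but the pipeline it describes can be illustrated with a small sketch: given a gloss sequence with syntactic features, decide where prosodic pauses go, how long they last, and how much to stretch individual signs (e.g. phrase-final lengthening). In the paper these decisions come from machine-learning models trained on a motion-capture corpus; the rule-based scorers, feature names, weights, and function names below (pause_probability, sign_duration, schedule, etc.) are hypothetical stand-ins for those learned predictors, not the authors' model.

# Hypothetical sketch of an ASL timing pipeline of the kind described in the
# abstract. All features, weights, and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Gloss:
    label: str              # ASL gloss, e.g. "STORE"
    base_duration: float    # citation-form sign duration in seconds
    ends_clause: bool       # syntactic feature: clause boundary follows this sign
    words_into_phrase: int  # position of the sign within its phrase


def pause_probability(g: Gloss) -> float:
    """Toy linear scorer for inserting a pause after this sign."""
    score = 0.7 if g.ends_clause else 0.0   # boundaries strongly attract pauses
    score += 0.02 * g.words_into_phrase     # longer phrases invite a breath
    return min(score, 1.0)


def pause_duration(g: Gloss) -> float:
    """Toy pause-length model: longer pauses at clause boundaries."""
    return 0.35 if g.ends_clause else 0.15


def sign_duration(g: Gloss, phrase_final: bool) -> float:
    """Stretch signs near the end of a phrase (phrase-final lengthening)."""
    return g.base_duration * (1.25 if phrase_final else 1.0)


def schedule(glosses: List[Gloss], threshold: float = 0.5) -> List[Tuple[str, float, float]]:
    """Return (gloss, sign duration, following pause) triples for an animation player."""
    timeline = []
    for i, g in enumerate(glosses):
        phrase_final = g.ends_clause or i == len(glosses) - 1
        dur = sign_duration(g, phrase_final)
        pause = pause_duration(g) if pause_probability(g) >= threshold else 0.0
        timeline.append((g.label, dur, pause))
    return timeline


if __name__ == "__main__":
    sentence = [
        Gloss("YESTERDAY", 0.50, False, 0),
        Gloss("STORE", 0.45, False, 1),
        Gloss("I-GO", 0.55, True, 2),   # clause boundary: pause + lengthening
        Gloss("MILK", 0.40, False, 0),
        Gloss("BUY", 0.50, True, 1),
    ]
    for label, dur, pause in schedule(sentence):
        print(f"{label:10s} sign {dur:.2f}s  pause {pause:.2f}s")

Running the sketch prints a per-sign timeline in which clause-final signs are lengthened and followed by pauses, which is the kind of output a downstream ASL animation player would consume.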