Md Shahinur Alam, Jason Lamberton, Jianye Wang, Carly Leannah, Sarah Miller, Joseph Palagano, Myles de Bastion, Heather L. Smith, Melissa Malzkuhn, Lorna C. Quandt
{"title":"ASL 冠军!:深度学习驱动手语识别的虚拟现实游戏","authors":"Md Shahinur Alam , Jason Lamberton , Jianye Wang , Carly Leannah , Sarah Miller , Joseph Palagano , Myles de Bastion , Heather L. Smith , Melissa Malzkuhn , Lorna C. Quandt","doi":"10.1016/j.cexr.2024.100059","DOIUrl":null,"url":null,"abstract":"<div><p>We developed an American Sign Language (ASL) learning platform in a Virtual Reality (VR) environment to facilitate immersive interaction and real-time feedback for ASL learners. We describe the first game to use an interactive teaching style in which users learn from a fluent signing avatar and the first implementation of ASL sign recognition using deep learning within the VR environment. Advanced motion-capture technology powers an expressive ASL teaching avatar within an immersive three-dimensional environment. The teacher demonstrates an ASL sign for an object, prompting the user to copy the sign. Upon the user’s signing, a third-party plugin executes the sign recognition process alongside a deep learning model. Depending on the accuracy of a user’s sign production, the avatar repeats the sign or introduces a new one. We gathered a 3D VR ASL dataset from fifteen diverse participants to power the sign recognition model. The proposed deep learning model’s training, validation, and test accuracy are 90.12%, 89.37%, and 86.66%, respectively. 
The functional prototype can teach sign language vocabulary and be successfully adapted as an interactive ASL learning platform in VR.</p></div>","PeriodicalId":100320,"journal":{"name":"Computers & Education: X Reality","volume":"4 ","pages":"Article 100059"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949678024000096/pdfft?md5=93fa5220f68e5778acb9969e40e147f4&pid=1-s2.0-S2949678024000096-main.pdf","citationCount":"0","resultStr":"{\"title\":\"ASL champ!: a virtual reality game with deep-learning driven sign recognition\",\"authors\":\"Md Shahinur Alam , Jason Lamberton , Jianye Wang , Carly Leannah , Sarah Miller , Joseph Palagano , Myles de Bastion , Heather L. Smith , Melissa Malzkuhn , Lorna C. Quandt\",\"doi\":\"10.1016/j.cexr.2024.100059\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>We developed an American Sign Language (ASL) learning platform in a Virtual Reality (VR) environment to facilitate immersive interaction and real-time feedback for ASL learners. We describe the first game to use an interactive teaching style in which users learn from a fluent signing avatar and the first implementation of ASL sign recognition using deep learning within the VR environment. Advanced motion-capture technology powers an expressive ASL teaching avatar within an immersive three-dimensional environment. The teacher demonstrates an ASL sign for an object, prompting the user to copy the sign. Upon the user’s signing, a third-party plugin executes the sign recognition process alongside a deep learning model. Depending on the accuracy of a user’s sign production, the avatar repeats the sign or introduces a new one. We gathered a 3D VR ASL dataset from fifteen diverse participants to power the sign recognition model. 
The proposed deep learning model’s training, validation, and test accuracy are 90.12%, 89.37%, and 86.66%, respectively. The functional prototype can teach sign language vocabulary and be successfully adapted as an interactive ASL learning platform in VR.</p></div>\",\"PeriodicalId\":100320,\"journal\":{\"name\":\"Computers & Education: X Reality\",\"volume\":\"4 \",\"pages\":\"Article 100059\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2949678024000096/pdfft?md5=93fa5220f68e5778acb9969e40e147f4&pid=1-s2.0-S2949678024000096-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers & Education: X Reality\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949678024000096\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Education: X Reality","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949678024000096","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

ASL champ!: a virtual reality game with deep-learning driven sign recognition

Abstract
We developed an American Sign Language (ASL) learning platform in a Virtual Reality (VR) environment to facilitate immersive interaction and real-time feedback for ASL learners. We describe the first game to use an interactive teaching style in which users learn from a fluent signing avatar and the first implementation of ASL sign recognition using deep learning within the VR environment. Advanced motion-capture technology powers an expressive ASL teaching avatar within an immersive three-dimensional environment. The teacher demonstrates an ASL sign for an object, prompting the user to copy the sign. Upon the user’s signing, a third-party plugin executes the sign recognition process alongside a deep learning model. Depending on the accuracy of a user’s sign production, the avatar repeats the sign or introduces a new one. We gathered a 3D VR ASL dataset from fifteen diverse participants to power the sign recognition model. The proposed deep learning model’s training, validation, and test accuracy are 90.12%, 89.37%, and 86.66%, respectively. The functional prototype can teach sign language vocabulary and be successfully adapted as an interactive ASL learning platform in VR.
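The teach-copy-feedback loop the abstract describes (the avatar demonstrates a sign, the user copies it, the recognizer scores the attempt, and the avatar either repeats the sign or introduces a new one) can be sketched in a few lines. This is an illustrative sketch only: the confidence threshold, vocabulary, and the `SignLesson`/`attempt` names are assumptions, and the paper's actual recognition plugin and model internals are not given in the abstract.

```python
# Hypothetical sketch of the avatar's teaching loop. The deep-learning
# recognizer is abstracted away as a confidence score in [0, 1] for the
# user's sign attempt; the 0.8 cutoff is an assumed value, not from the paper.

PASS_THRESHOLD = 0.8  # assumed confidence cutoff for an acceptable sign

class SignLesson:
    def __init__(self, vocabulary):
        self.vocabulary = list(vocabulary)
        self.index = 0  # sign currently being taught

    @property
    def current_sign(self):
        return self.vocabulary[self.index]

    def attempt(self, confidence):
        """Score one user attempt and return the avatar's next action:
        'repeat' the demonstration, introduce the 'next' sign, or 'done'."""
        if confidence < PASS_THRESHOLD:
            return "repeat"  # avatar demonstrates the same sign again
        if self.index + 1 < len(self.vocabulary):
            self.index += 1
            return "next"    # avatar introduces a new sign
        return "done"        # vocabulary exhausted

lesson = SignLesson(["APPLE", "BOOK", "CHAIR"])
print(lesson.attempt(0.55))  # repeat
print(lesson.attempt(0.91))  # next
print(lesson.current_sign)   # BOOK
```

In the actual system this decision would be driven by the VR plugin's sign-recognition output rather than a raw confidence value, but the control flow (repeat on a failed attempt, advance on a successful one) follows the description in the abstract.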