
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

Social Haptic Communication mimicked with vibrotactile patterns - an evaluation by users with deafblindness
M. Plaisier, A. Kappers
Many devices, such as smartphones, implement vibration motors for tactile feedback. When multiple vibration motors are placed on, for instance, the backrest of a chair, it is possible to trace shapes on the back of a person by sequentially switching motors on and off. Social Haptic Communication (SHC) is a tactile mode of communication for persons with deafblindness that makes use of tracing shapes or other types of spatiotemporal patterns with the hand on the back of another person. This could be emulated using vibrotactile patterns. Here we investigated whether SHC users with deafblindness would recognize the vibrotactile patterns as SHC signs (Haptices). In several cases the participants immediately linked a vibrotactile pattern to the Haptice that it was meant to imitate. Together with the participants we improved and expanded the set of vibrotactile patterns.
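
The sequencing idea lends itself to a short sketch. Below is a minimal TypeScript pattern player; the MotorDriver interface, the 3x3 grid, and the timings are illustrative assumptions, not details from the paper.

```typescript
// Minimal sketch of tracing a shape by switching motors on and off in
// sequence. The MotorDriver interface, grid layout, and timings are
// hypothetical; a real system would talk to the controller of the motor
// array in the chair's backrest.
interface MotorDriver {
  setMotor(index: number, on: boolean): void;
}

type Step = { motor: number; durationMs: number };

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Play one spatiotemporal pattern: each motor vibrates for its duration,
// then the next motor takes over, tracing a path across the back.
async function playPattern(driver: MotorDriver, steps: Step[]): Promise<void> {
  for (const { motor, durationMs } of steps) {
    driver.setMotor(motor, true);
    await sleep(durationMs);
    driver.setMotor(motor, false);
  }
}

// Example: a downward stroke on a 3x3 motor grid (top, middle, bottom of
// the center column), roughly imitating a hand stroking down the back.
const downwardStroke: Step[] = [
  { motor: 1, durationMs: 300 },
  { motor: 4, durationMs: 300 },
  { motor: 7, durationMs: 300 },
];
```
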
{"title":"Social Haptic Communication mimicked with vibrotactile patterns - an evaluation by users with deafblindness","authors":"M. Plaisier, A. Kappers","doi":"10.1145/3441852.3476528","DOIUrl":"https://doi.org/10.1145/3441852.3476528","url":null,"abstract":"Many devices, such as smart phones, implement vibration motors for tactile feedback. When multiple vibration motors are placed on, for instance, the backrest of a chair it is possible to trace shapes on the back of a person by sequentially switching motors on and off. Social Haptic Communication (SHC) is a tactile mode of communication for persons with deafblindness that makes use of tracing shapes or other types of spatiotemporal patterns with the hand on the back of another person. This could be emulated using vibrotactile patterns. Here we investigated whether SHC users with deafblindness would recognize the vibrotactile patterns as SHC signs (Haptices). In several cases the participants immediately linked a vibrotactile patterns to the Haptice that is was meant to imitate. Together with the participants we improved and expanded the set of vibrotactile patterns.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"605 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116378311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
CollabAlly: Accessible Collaboration Awareness in Document Editing
Cheuk Yin Phipson Lee, Zhuohao Zhang, Jaylin Herskovitz, Jooyoung Seo, Anhong Guo
Collaborative document editing tools are widely used in both professional and academic workplaces. While these tools provide some accessibility features, it is still challenging for blind users to gain the collaboration awareness that sighted people can easily obtain from visual cues (e.g., who edited or commented where, and on what, in the document). To address this gap, we present CollabAlly, a browser extension that extracts collaborative and contextual information in document editing and makes it accessible to blind users. With CollabAlly, blind users can easily access collaborators' information, track real-time or asynchronous content and comment changes, and navigate through these elements. To convey this complex information through audio, CollabAlly uses voice fonts and spatial audio to enhance users' collaboration awareness in shared documents. Through a series of pilot studies with a coauthor who is blind, CollabAlly's design was refined to include more information and to be more compatible with existing screen readers.
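
As a rough illustration of the two audio cues named above, the sketch below uses standard browser APIs (SpeechSynthesis and Web Audio); it is not CollabAlly's implementation, and the mapping of collaborators to voices and pan positions is assumed. Since SpeechSynthesis output cannot be routed through Web Audio, the spatial cue here is a panned earcon rather than panned speech.

```typescript
// "Voice fonts": give each collaborator a distinct synthesized voice so
// their edits and comments are audibly distinguishable.
function speakAs(text: string, collaboratorIndex: number): void {
  const utterance = new SpeechSynthesisUtterance(text);
  const voices = speechSynthesis.getVoices();
  if (voices.length > 0) {
    utterance.voice = voices[collaboratorIndex % voices.length];
  }
  speechSynthesis.speak(utterance);
}

// Spatial audio: pan a short earcon toward a collaborator's horizontal
// position in the document (-1 = left margin, +1 = right margin).
const audioCtx = new AudioContext();

function playEarcon(pan: number): void {
  const oscillator = new OscillatorNode(audioCtx, { frequency: 660 });
  const panner = new StereoPannerNode(audioCtx, { pan });
  oscillator.connect(panner).connect(audioCtx.destination);
  oscillator.start();
  oscillator.stop(audioCtx.currentTime + 0.15);
}
```
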
{"title":"CollabAlly: Accessible Collaboration Awareness in Document Editing","authors":"Cheuk Yin Phipson Lee, Zhuohao Zhang, Jaylin Herskovitz, Jooyoung Seo, Anhong Guo","doi":"10.1145/3441852.3476562","DOIUrl":"https://doi.org/10.1145/3441852.3476562","url":null,"abstract":"Collaborative document editing tools are widely used in both professional and academic workplaces. While these tools provide some accessibility features, it is still challenging for blind users to gain collaboration awareness that sighted people can easily obtain using visual cues (e.g., who edited or commented where and what in the document). To address this gap, we present CollabAlly, a browser extension that makes extractable collaborative and contextual information in document editing accessible for blind users. With CollabAlly, blind users can easily access collaborators’ information, track real-time or asynchronous content and comment changes, and navigate through these elements. In order to convey this complex information through audio, CollabAlly uses voice fonts and spatial audio to enhance users’ collaboration awareness in shared documents. Through a series of pilot studies with a coauthor who is blind, CollabAlly’s design was refined to include more information and to be more compatible with existing screen readers.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125476147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Understanding Barriers and Design Opportunities to Improve Healthcare and QOL for Older Adults through Voice Assistants
Chen Chen, Janet G. Johnson, Kemeberly Charles, Alice Lee, Ella T. Lifset, M. Hogarth, A. Moore, E. Farcas, Nadir Weibel
Voice-based Intelligent Virtual Assistants (IVAs) promise to improve healthcare management and Quality of Life (QOL) by introducing the paradigm of hands-free and eyes-free interactions. However, there has been little understanding of the challenges of designing such systems for older adults, especially for healthcare-related tasks. To tackle this, we consider the processes of care delivery and QOL enhancement for older adults as a collaborative task between patients and providers. By interviewing 16 older adults living independently or semi-independently and 5 providers, we identified 12 barriers that older adults might encounter in their daily routines and while managing their health. We ultimately highlighted key design challenges and opportunities that might be introduced when integrating voice-based IVAs into the lives of older adults. Our work will benefit practitioners who study and attempt to create full-fledged IVA-powered smart devices to deliver better care and support an increased QOL for aging populations.
{"title":"Understanding Barriers and Design Opportunities to Improve Healthcare and QOL for Older Adults through Voice Assistants","authors":"Chen Chen, Janet G. Johnson, Kemeberly Charles, Alice Lee, Ella T. Lifset, M. Hogarth, A. Moore, E. Farcas, Nadir Weibel","doi":"10.1145/3441852.3471218","DOIUrl":"https://doi.org/10.1145/3441852.3471218","url":null,"abstract":"Voice-based Intelligent Virtual Assistants (IVAs) promise to improve healthcare management and Quality of Life (QOL) by introducing the paradigm of hands-free and eye-free interactions. However, there has been little understanding regarding the challenges for designing such systems for older adults, especially when it comes to healthcare related tasks. To tackle this, we consider the processes of care delivery and QOL enhancements for older adults as a collaborative task between patients and providers. By interviewing 16 older adults living independently or semi–independently and 5 providers, we identified 12 barriers that older adults might encounter during daily routine and while managing health. We ultimately highlighted key design challenges and opportunities that might be introduced when integrating voice-based IVAs into the life of older adults. Our work will benefit practitioners who study and attempt to create full-fledged IVA-powered smart devices to deliver better care and support an increased QOL for aging populations.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124349666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Wearable Interactions for Users with Motor Impairments: Systematic Review, Inventory, and Research Implications
Alexandru-Ionuț Șiean, Radu-Daniel Vatavu
We conduct a systematic literature review on wearable interactions for users with motor impairments and report results from a meta-analysis of 57 scientific articles identified in the ACM DL and IEEE Xplore databases. Our findings show limited research on accessible wearable interactions (e.g., just four papers addressing smartwatch input), a disproportionate interest in hand gestures compared to other input modalities for wearable devices, and low numbers of participants with motor impairments in user studies of wearable interactions (a median of 6.0 and an average of 8.2 participants per study). We compile an inventory of 92 finger, hand, head, shoulder, eye-gaze, and foot gesture commands for smartwatches, smartglasses, headsets, earsets, fitness trackers, data gloves, and armband wearable devices, extracted from the scientific literature that we surveyed. Based on our findings, we propose four directions for future research on accessible wearable interactions for users with motor impairments.
{"title":"Wearable Interactions for Users with Motor Impairments: Systematic Review, Inventory, and Research Implications","authors":"Alexandru-Ionuț Șiean, Radu-Daniel Vatavu","doi":"10.1145/3441852.3471212","DOIUrl":"https://doi.org/10.1145/3441852.3471212","url":null,"abstract":"We conduct a systematic literature review on wearable interactions for users with motor impairments and report results from a meta-analysis of 57 scientific articles identified in the ACM DL and IEEE Xplore databases. Our findings show limited research conducted on accessible wearable interactions (e.g., just four papers addressing smartwatch input), a disproportionate interest for hand gestures compared to other input modalities for wearable devices, and low numbers of participants with motor impairments involved in user studies about wearable interactions (a median of 6.0 and average of 8.2 participants per study). We compile an inventory of 92 finger, hand, head, shoulder, eye gaze, and foot gesture commands for smartwatches, smartglasses, headsets, earsets, fitness trackers, data gloves, and armband wearable devices extracted from the scientific literature that we surveyed. Based on our findings, we propose four directions for future research on accessible wearable interactions for users with motor impairments.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116666267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Fluent: An AI Augmented Writing Tool for People who Stutter
Bhavya Ghai, Klaus Mueller
Stuttering is a speech disorder which impacts the personal and professional lives of millions of people worldwide. To save themselves from stigma and discrimination, people who stutter (PWS) may adopt different strategies to conceal their stuttering. One of the common strategies is word substitution, where an individual avoids saying a word they might stutter on and uses an alternative instead. This process itself can cause stress and add more burden. In this work, we present Fluent, an AI augmented writing tool which assists PWS in writing scripts that they can speak more fluently. Fluent embodies a novel active-learning-based method of identifying words an individual might struggle to pronounce. Such words are highlighted in the interface. On hovering over any such word, Fluent presents a set of alternative words which have similar meanings but are easier to speak. The user is free to accept or ignore these suggestions. Based on such user interaction (feedback), Fluent continuously evolves its classifier to better suit the personalized needs of each user. We evaluated our tool by measuring its ability to identify difficult words for 10 simulated users. We found that our tool can identify difficult words with a mean accuracy of over 80% in under 20 interactions, and it keeps improving with more feedback. Our tool can be beneficial in certain important life situations, such as giving a talk or presentation. The source code for this tool is publicly accessible at github.com/bhavyaghai/Fluent.
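
To make the feedback loop concrete, here is a minimal sketch of an online classifier that evolves with accept/ignore feedback; the character-bigram features and logistic-regression update are assumptions for illustration, not Fluent's actual model.

```typescript
// Sketch of an online word-difficulty classifier updated from user
// feedback. Character-bigram features and a logistic-regression update
// are illustrative assumptions, not the paper's actual model.
type Feedback = { word: string; difficult: boolean };

// Character bigrams of the word, with boundary markers, as sparse counts.
function features(word: string): Map<string, number> {
  const f = new Map<string, number>();
  const w = `^${word.toLowerCase()}$`;
  for (let i = 0; i < w.length - 1; i++) {
    const bigram = w.slice(i, i + 2);
    f.set(bigram, (f.get(bigram) ?? 0) + 1);
  }
  return f;
}

class DifficultyModel {
  private weights = new Map<string, number>();
  private readonly learningRate = 0.1;

  // Probability that the user would struggle to pronounce this word.
  score(word: string): number {
    let s = 0;
    for (const [k, v] of features(word)) {
      s += (this.weights.get(k) ?? 0) * v;
    }
    return 1 / (1 + Math.exp(-s));
  }

  // One gradient step of online logistic regression per piece of feedback.
  update({ word, difficult }: Feedback): void {
    const error = (difficult ? 1 : 0) - this.score(word);
    for (const [k, v] of features(word)) {
      this.weights.set(k, (this.weights.get(k) ?? 0) + this.learningRate * error * v);
    }
  }
}

// Accepting a suggested substitute implies the original word was difficult;
// ignoring the suggestion implies it was not.
const model = new DifficultyModel();
model.update({ word: "statistics", difficult: true });
model.update({ word: "numbers", difficult: false });
```
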
{"title":"Fluent: An AI Augmented Writing Tool for People who Stutter","authors":"Bhavya Ghai, Klaus Mueller","doi":"10.1145/3441852.3471211","DOIUrl":"https://doi.org/10.1145/3441852.3471211","url":null,"abstract":"Stuttering is a speech disorder which impacts the personal and professional lives of millions of people worldwide. To save themselves from stigma and discrimination, people who stutter (PWS) may adopt different strategies to conceal their stuttering. One of the common strategies is word substitution where an individual avoids saying a word they might stutter on and use an alternative instead. This process itself can cause stress and add more burden. In this work, we present Fluent, an AI augmented writing tool which assists PWS in writing scripts which they can speak more fluently. Fluent embodies a novel active learning based method of identifying words an individual might struggle pronouncing. Such words are highlighted in the interface. On hovering over any such word, Fluent presents a set of alternative words which have similar meaning but are easier to speak. The user is free to accept or ignore these suggestions. Based on such user interaction (feedback), Fluent continuously evolves its classifier to better suit the personalized needs of each user. We evaluated our tool by measuring its ability to identify difficult words for 10 simulated users. We found that our tool can identify difficult words with a mean accuracy of over 80% in under 20 interactions and it keeps improving with more feedback. Our tool can be beneficial for certain important life situations like giving a talk, presentation, etc. The source code for this tool has been made publicly accessible at github.com/bhavyaghai/Fluent.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121952052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Non-Visual Cooking: Exploring Practices and Challenges of Meal Preparation by People with Visual Impairments
Franklin Mingzhe Li, Jamie Dorst, Peter Cederberg, Patrick Carrington
The reliance on vision for tasks related to cooking and eating healthily can present barriers to cooking for oneself and achieving proper nutrition. There has been little research exploring the cooking practices of, and challenges faced by, people with visual impairments. We present a content analysis of 122 YouTube videos to highlight the cooking practices of visually impaired people, and we describe detailed practices for 12 different cooking activities (e.g., cutting and chopping, measuring, testing food for doneness). Based on these cooking practices, we also conducted semi-structured interviews with 12 visually impaired people who have cooking experience, and we identify existing challenges, concerns, and risks in cooking (e.g., tracking the status of tasks in progress, verifying whether things are peeled or cleaned thoroughly). We further discuss opportunities to support current practices and improve the independence of people with visual impairments in cooking (e.g., zero-touch interactions for cooking). Overall, our findings provide guidance for future research exploring assistive technologies that help people cook without relying on vision.
{"title":"Non-Visual Cooking: Exploring Practices and Challenges of Meal Preparation by People with Visual Impairments","authors":"Franklin Mingzhe Li, Jamie Dorst, Peter Cederberg, Patrick Carrington","doi":"10.1145/3441852.3471215","DOIUrl":"https://doi.org/10.1145/3441852.3471215","url":null,"abstract":"The reliance on vision for tasks related to cooking and eating healthy can present barriers to cooking for oneself and achieving proper nutrition. There has been little research exploring cooking practices and challenges faced by people with visual impairments. We present a content analysis of 122 YouTube videos to highlight the cooking practices of visually impaired people, and we describe detailed practices for 12 different cooking activities (e.g., cutting and chopping, measuring, testing food for doneness). Based on the cooking practices, we also conducted semi-structured interviews with 12 visually impaired people who have cooking experience and show existing challenges, concerns, and risks in cooking (e.g., tracking the status of tasks in progress, verifying whether things are peeled or cleaned thoroughly). We further discuss opportunities to support the current practices and improve the independence of people with visual impairments in cooking (e.g., zero-touch interactions for cooking). Overall, our findings provide guidance for future research exploring various assistive technologies to help people cook without relying on vision.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126956530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
See-Through Captions: Real-Time Captioning on Transparent Display for Deaf and Hard-of-Hearing People
Kenta Yamamoto, Ippei Suzuki, Akihisa Shitara, Yoichi Ochiai
Real-time captioning is a useful technique for deaf and hard-of-hearing (DHH) people to talk to hearing people. With improvements in device performance and in the accuracy of automatic speech recognition (ASR), real-time captioning is becoming an important tool for helping DHH people in their daily lives. To realize higher-quality communication and overcome the limitations of mobile and augmented-reality devices, real-time captioning must be comfortable to use while maintaining nonverbal communication and guarding against incorrect recognition. We therefore propose a real-time captioning system that uses a transparent display. In this system, captions are presented on both sides of the display to address the problem of incorrect ASR results, and the highly transparent display makes it possible to see both body language and the captions.
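
A minimal sketch of such a pipeline, assuming a browser-based renderer: ASR hypotheses are drawn twice, with the far side mirrored so the text reads correctly through the display. This is an illustration, not the authors' system.

```typescript
// Sketch: live ASR results rendered twice, once mirrored, so captions
// read correctly from both sides of a transparent display. Uses the Web
// Speech API (Chromium's webkitSpeechRecognition); the element ids are
// hypothetical, and this is not the authors' implementation.
function renderCaption(text: string): void {
  const near = document.getElementById("caption-near")!; // faces the DHH user
  const far = document.getElementById("caption-far")!;   // faces the hearing partner
  near.textContent = text;
  far.textContent = text;
  far.style.transform = "scaleX(-1)"; // mirrored, so it reads correctly through the glass
}

const recognition = new (window as any).webkitSpeechRecognition();
recognition.continuous = true;
recognition.interimResults = true; // show partial hypotheses while the partner speaks
recognition.onresult = (event: any) => {
  const text = Array.from(event.results as ArrayLike<any>)
    .map((result: any) => result[0].transcript)
    .join(" ");
  renderCaption(text);
};
recognition.start();
```
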
{"title":"See-Through Captions: Real-Time Captioning on Transparent Display for Deaf and Hard-of-Hearing People","authors":"Kenta Yamamoto, Ippei Suzuki, Akihisa Shitara, Yoichi Ochiai","doi":"10.1145/3441852.3476551","DOIUrl":"https://doi.org/10.1145/3441852.3476551","url":null,"abstract":"Real-time captioning is a useful technique for deaf and hard-of-hearing (DHH) people to talk to hearing people. With the improvement in device performance and the accuracy of automatic speech recognition (ASR), real-time captioning is becoming an important tool for helping DHH people in their daily lives. To realize higher-quality communication and overcome the limitations of mobile and augmented-reality devices, real-time captioning that can be used comfortably while maintaining nonverbal communication and preventing incorrect recognition is required. Therefore, we propose a real-time captioning system that uses a transparent display. In this system, the captions are presented on both sides of the display to address the problem of incorrect ASR results, and the highly transparent display makes it possible to see both the body language and the captions.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131133454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Developing Accessible Mobile Applications with Cross-Platform Development Frameworks
S. Mascetti, Mattia Ducci, Niccolò Cantù, Paolo Pecis, D. Ahmetovic
We illustrate our experience, gained over years of involvement in multiple research and commercial projects, in developing accessible mobile apps with cross-platform development frameworks (CPDFs). These frameworks allow developers to write the app code once and run it on both iOS and Android. However, they have limited support for accessibility features, particularly regarding interaction with the system screen reader. To study the coverage of accessibility features in CPDFs, we first systematically analyze the screen reader APIs available in native iOS and Android, and we examine whether, and at what level, the same functionalities are available in two popular CPDFs: Xamarin and React Native. This analysis reveals that many functionalities are shared between the native iOS and Android APIs, but most of them are available in neither React Native nor Xamarin. In particular, not even all of the basic APIs are exposed by the examined CPDFs. Accessing the unavailable APIs is still possible, but it requires additional effort from developers, who need to write platform-specific code against the native APIs, partially negating the advantages of CPDFs. To address this problem, we consider a representative set of native APIs that cannot be directly accessed from React Native and Xamarin, and we report the challenges encountered in accessing them.
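
The workaround the abstract describes can be sketched as follows: call the accessibility APIs React Native does expose, and bridge to platform-specific native code for the rest. The ScreenReaderBridge module below is hypothetical.

```typescript
// Sketch of the pattern described above: use the accessibility APIs that
// React Native does expose, and fall back to a custom native module for
// functionality it does not surface. ScreenReaderBridge and its method
// are hypothetical; the platform-specific halves must still be written
// in native iOS and Android code.
import { AccessibilityInfo, NativeModules } from "react-native";

// Exposed by React Native on both platforms.
export async function announce(text: string): Promise<void> {
  if (await AccessibilityInfo.isScreenReaderEnabled()) {
    AccessibilityInfo.announceForAccessibility(text);
  }
}

// For native screen reader APIs with no cross-platform equivalent, the
// JavaScript side can only see a hand-written bridge module.
export function setCustomFocusOrder(elementIds: string[]): void {
  NativeModules.ScreenReaderBridge?.setFocusOrder(elementIds);
}
```
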
{"title":"Developing Accessible Mobile Applications with Cross-Platform Development Frameworks","authors":"S. Mascetti, Mattia Ducci, Niccolò Cantù, Paolo Pecis, D. Ahmetovic","doi":"10.1145/3441852.3476469","DOIUrl":"https://doi.org/10.1145/3441852.3476469","url":null,"abstract":"We illustrate our experience, gained over years of involvement in multiple research and commercial projects, in developing accessible mobile apps with cross-platform development frameworks (CPDF). These frameworks allow the developers to write the app code only once and run it on both iOS and Android. However, they have limited support for accessibility features, in particular for what concerns the interaction with the system screen reader. To study the coverage of accessibility features in CPDFs, we first systematically analyze screen reader APIs available in native iOS and Android, and we examine whether and at what level the same functionalities are available in two popular CPDF: Xamarin and React Native. This analysis unveils that there are many functionalities shared between native iOS and Android APIs, but most of them are not available neither in React Native nor in Xamarin. In particular, not even all basic APIs are exposed by the examined CPDF. Accessing the unavailable APIs is still possible, but it requires additional effort by the developers who need to write platform-specific code in native APIs, hence partially negating the advantages of CPDF. To address this problem, we consider a representative set of native APIs that cannot be directly accessed from React Native and Xamarin and we report challenges encountered in accessing them.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131128532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6