
Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

Video Gaming for the Vision Impaired
Manohar Swaminathan, Sujeath Pareddy, T. Sawant, Shubi Agarwal
Mainstream video games are predominantly inaccessible to people with visual impairments (VIPs). We present ongoing research that aims to make such games go beyond accessibility, by making them engaging and enjoyable for visually impaired players. We have built a new interaction toolkit called the Responsive Spatial Audio Cloud (ReSAC), developed around spatial audio technology, to enable visually impaired players to play video games. VIPs successfully finished a simple video game integrated with ReSAC and reported enjoying the experience.
DOI: 10.1145/3234695.3241025 (published 2018-10-08)
Citations: 14
Incorporating Social Factors in Accessible Design
Kristen Shinohara, J. Wobbrock, W. Pratt
Personal technologies are rarely designed to be accessible to disabled people, partly due to the perceived challenge of including disability in design. Through design workshops, we addressed this challenge by infusing user-centered design activities with Design for Social Accessibility (DSA), a perspective emphasizing social aspects of accessibility, to investigate how professional designers can leverage social factors to include accessibility in design. We focused on how professional designers incorporated DSA's three tenets: (1) to work with users with and without visual impairments; (2) to consider social and functional factors; (3) to employ tools (a framework and method cards) to raise awareness and prompt reflection on social aspects toward accessible design. We then interviewed designers about their workshop experiences. We found DSA to be an effective set of tools and strategies incorporating social/functional and non/disabled perspectives that helped designers create accessible designs.
DOI: 10.1145/3234695.3236346 (published 2018-10-08)
Citations: 31
Designing an Animated Character System for American Sign Language
Danielle Bragg, R. Kushalnagar, R. Ladner
Sign languages lack a standard written form, preventing millions of Deaf people from accessing text in their primary language. A major barrier to adoption is difficulty learning a system which represents complex 3D movements with stationary symbols. In this work, we leverage the animation capabilities of modern screens to create the first animated character system prototype for sign language, producing text that combines iconic symbols and movement. Using animation to represent sign movements can increase resemblance to the live language, making the character system easier to learn. We explore this idea through the lens of American Sign Language (ASL), presenting 1) a pilot study underscoring the potential value of an animated ASL character system, 2) a structured approach for designing animations for an existing ASL character system, and 3) a design probe workshop with ASL users eliciting guidelines for the animated character system design.
DOI: 10.1145/3234695.3236338 (published 2018-10-08)
Citations: 11
Self-Identifying Tactile Overlays
Mauro Ávila-Soto, Alexandra Voit, A. Hassan, A. Schmidt, Tonja Machulla
Tactile overlays for touch-screen devices are an opportunity to display content for users with visual impairments. However, when users switch tactile overlays, the content displayed on the touch-screen device still corresponds to the previous overlay. Currently, users have to change the displayed content manually, which hinders fluid user interaction. In this paper, we introduce self-identifying overlays: an automated method for touch-screen devices to identify the tactile overlay placed on the screen and to adapt the displayed content to the applied overlay. We report on a pilot study in which two participants with visual impairments evaluated this approach using a functional content-exploration application based on an adapted textbook.
DOI: 10.1145/3234695.3241021 (published 2018-10-08)
Citations: 0
Who Should Have Access to my Pointing Data?: Privacy Tradeoffs of Adaptive Assistive Technologies
Foad Hamidi, Kellie Poneres, Aaron K. Massey, A. Hurst
Customizing assistive technologies based on user needs, abilities, and preferences is necessary for accessibility, especially for individuals whose abilities vary due to a diagnosis, medication, or other external factors. Adaptive Assistive Technologies (AATs) that can automatically monitor a user's current abilities and adapt functionality and appearance accordingly offer exciting solutions. However, there is an often-overlooked tradeoff between usability and user privacy when designing such systems. We present a general privacy threat model analysis of AATs and contextualize it with findings from an interview study with older adults who experience pointing problems. We found that participants had a positive attitude toward assistive technologies that gather their personal data, but also had strong preferences for how their data should be used and who should have access to it. We identify a need to seriously consider privacy threats when designing assistive technologies to avoid exposing users to them.
DOI: 10.1145/3234695.3239331 (published 2018-10-08)
Citations: 25
Towards More Robust Speech Interactions for Deaf and Hard of Hearing Users
Raymond Fok, Harmanpreet Kaur, Skanda Palani, Martez E. Mott, Walter S. Lasecki
Mobile, wearable, and other ubiquitous computing devices are increasingly creating a context in which conventional keyboard and screen-based inputs are being replaced in favor of more natural speech-based interactions. Digital personal assistants use speech to control a wide range of functionality, from environmental controls to information access. However, many deaf and hard-of-hearing users have speech patterns that vary from those of hearing users due to incomplete acoustic feedback from their own voices. Because automatic speech recognition (ASR) systems are largely trained using speech from hearing individuals, speech-controlled technologies are typically inaccessible to deaf users. Prior work has focused on providing deaf users access to aural output via real-time captioning or signing, but little has been done to improve users' ability to provide input to these systems' speech-based interfaces. Further, the vocalization patterns of deaf speech often make accurate recognition intractable for both automated systems and human listeners, making traditional approaches to mitigate ASR limitations, such as human captionists, less effective. To bridge this accessibility gap, we investigate the limitations of common speech recognition approaches and techniques---both automatic and human-powered---when applied to deaf speech. We then explore the effectiveness of an iterative crowdsourcing workflow, and characterize the potential for groups to collectively exceed the performance of individuals. This paper contributes a better understanding of the challenges of deaf speech recognition and provides insights for future system development in this space.
DOI: 10.1145/3234695.3236343 (published 2018-10-08)
Citations: 20
Design of an Augmented Reality Magnification Aid for Low Vision Users
Lee Stearns, Leah Findlater, Jon E. Froehlich
Augmented reality (AR) systems that enhance visual capabilities could make text and other fine details more accessible for low vision users, improving independence and quality of life. Prior work has begun to investigate the potential of assistive AR, but recent advancements enable new AR visualizations and interactions not yet explored in the context of assistive technology. In this paper, we follow an iterative design process with feedback and suggestions from seven visually impaired participants, designing and testing AR magnification ideas using the Microsoft HoloLens. Participants identified several advantages to the concept of head-worn magnification (e.g., portability, privacy, ready availability), and to our AR designs in particular (e.g., a more natural reading experience and the ability to multitask). We discuss the strengths and weaknesses of this AR magnification approach and summarize lessons learned throughout the process.
DOI: 10.1145/3234695.3236361 (published 2018-10-08)
Citations: 38
Gaze Typing using Multi-key Selection Technique
Tanya Bafna
Gaze typing for people with extreme motor disabilities, such as full-body paralysis, can be extremely slow and discouraging for daily communication. The most popular gaze-typing technique, known as dwell-time typing, requires fixating on every letter of a word for a fixed amount of time in order to type it. In this preliminary study, the goal was to test a new gaze-typing technique that requires fixating only on the first and the last letter of the word. Analysis of the collected data suggests that the newly described technique is 63% faster than dwell-time typing for novices in gaze interaction, without influencing the error rate. Using this technique would have a tremendous impact on the communication speed, comfort, and working efficiency of people with disabilities.
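The core idea of the multi-key technique described above (fixating only the first and last letter) can be illustrated with a minimal sketch: the system matches candidate words from a lexicon by those two letters and offers them for selection. The function name, lexicon, and the length-based ranking heuristic are illustrative assumptions, not the paper's implementation.

```python
def candidates(first: str, last: str, lexicon: list[str]) -> list[str]:
    """Return lexicon words that start with `first` and end with `last`.

    Ranking by length is a stand-in disambiguation heuristic (an
    assumption); a real system might rank by word frequency instead.
    """
    matches = [w for w in lexicon if w.startswith(first) and w.endswith(last)]
    return sorted(matches, key=len)


# Fixating 'h' then 'd' narrows a small example lexicon to three candidates.
lexicon = ["hello", "hard", "hold", "happened", "world"]
print(candidates("h", "d", lexicon))  # → ['hard', 'hold', 'happened']
```

Compared with dwell-time typing, the user performs two fixations per word plus one candidate selection instead of one timed fixation per letter, which is where the reported speedup would come from.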
DOI: 10.1145/3234695.3240992 (published 2018-10-08)
Citations: 7
Towards Accessible Conversations in a Mobile Context for People who are Deaf and Hard of Hearing
D. Jain, Rachel L. Franz, Leah Findlater, Jackson Cannon, R. Kushalnagar, Jon E. Froehlich
Prior work has explored communication challenges faced by people who are deaf and hard of hearing (DHH) and the potential role of new captioning and support technologies to address these challenges; however, the focus has been on stationary contexts such as group meetings and lectures. In this paper, we present two studies examining the needs of DHH people in moving contexts (e.g., walking) and the potential for mobile captions on head-mounted displays (HMDs) to support those needs. Our formative study with 12 DHH participants identifies social and environmental challenges unique to or exacerbated by moving contexts. Informed by these findings, we introduce and evaluate a proof-of-concept HMD prototype with 10 DHH participants. Results show that, while walking, HMD captions can support communication access and improve the attentional balance between attending to the speaker(s) and navigating the environment. We close by describing open questions in the mobile context space and design guidelines for future technology.
DOI: 10.1145/3234695.3236362 (published 2018-10-08)
Citations: 36
Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility
Enrico Pontelli, S. Trewin
It is our great pleasure to welcome you to the 9th ACM SIGACCESS Conference on Computers and Accessibility -- ASSETS'07. As in the past, ASSETS 2007 explores the potential of computer and information technologies to support and include everyone, regardless of age or disability. ASSETS is the premier forum for presenting innovative research on the design and use of both mainstream and specialized assistive technologies by people of all ages and with different capabilities, and those around them.

The call for papers attracted 86 technical paper submissions from 18 countries spread over 5 continents. A further 33 poster and demonstration submissions were received by the poster and demonstration chairs, Anna Dickinson and Joy Goodman-Deane. All were peer-reviewed by an international program committee, in order to ensure that the accepted work truly represents the state of the art in accessibility. 27 papers and 21 posters and demonstrations were accepted.

ASSETS 2007 continues its tradition of encouraging dialog through a single-track forum with opportunities for delegates to share results, mingle and discuss their work. This year, the conference opens with a keynote speech by Jonathan Wolpaw, professor and research physician at the Wadsworth Center, New York State Department of Health and State University of New York. His presentation describes the latest research in brain-computer interfaces for communication and control. The main conference program continues with seven technical paper sessions and two poster and demonstration sessions. These proceedings contain both the technical papers, and two-page extended abstracts for each of the poster and demonstration submissions.

This year's program continues the SIGACCESS student research competition (SRC), sponsored by Microsoft Research. The SRC, chaired by Harriet Fell, is an opportunity for both graduate and undergraduate students to present their work at the conference in poster form.
Abstracts from the accepted SRC submissions are included in these proceedings. At the conference, selected entrants will give a short presentation in the main program, and a panel of judges will select one or more finalists, who will be entered into the Grand Finals of ACM's Student Research Competition.

As in previous years, the main program is preceded by a doctoral consortium, sponsored by the National Science Foundation and chaired by Clayton Lewis and Sri Kurniawan. This provides an opportunity for doctoral students in the early stages of research to present their work and receive feedback from peers and a selected pool of experts. All participants in the doctoral consortium will also present their work during one of the main conference poster sessions, and one participant, selected by the doctoral consortium committee, will give a presentation in a conference session. Following the tradition of the ASSETS conference series, two awards will be made at the conference: the SIGACCESS Best Paper Award and the SIGACCESS Best Student Paper Award.
{"title":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","authors":"Enrico Pontelli, S. Trewin","doi":"10.1145/3234695","DOIUrl":"https://doi.org/10.1145/3234695"}
Citations: 2
Journal
Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility