
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

A Navigation Method for Visually Impaired People: Easy to Imagine the Structure of the Stairs
Asuka Miyake, Misa Hirao, Mitsuhiro Goto, Chihiro Takayama, Masahiro Watanabe, Hiroya Minami
People with visual impairments or blindness (VIB) face many problems when they enter unfamiliar areas by themselves. To address this problem, we aim to enable people with VIB to walk alone, even in unfamiliar areas. We propose a navigation method that enables people with VIB to easily imagine structures such as staircases and thus move safely when walking alone, even in unfamiliar areas. An experiment was conducted in which six participants with VIB walked up or down stairs with four different structures in an indoor environment. The results verify that the proposed method provides an appropriate amount of guidance and conveys messages more safely than the existing method.
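The abstract does not reproduce the guidance messages themselves. As a purely illustrative sketch (all names and the message format here are hypothetical, not from the paper), a structure-first description of a staircase of the kind the method aims to convey might be generated like this:

```python
# Toy generator for structure-first stair guidance messages.
# "flights" is a list of (num_steps, turn_after) tuples, where
# turn_after is 'left', 'right', or None for the final flight.
def stair_message(flights):
    parts = []
    for i, (steps, turn) in enumerate(flights, start=1):
        part = f"Flight {i}: {steps} steps"
        if turn:
            part += f", then landing, turn {turn}"
        parts.append(part)
    return ". ".join(parts) + "."

# A staircase with a left-turn landing after the first 12 steps:
msg = stair_message([(12, "left"), (8, None)])
# → "Flight 1: 12 steps, then landing, turn left. Flight 2: 8 steps."
```

The point of such a message is that the listener hears the overall shape of the staircase before step-by-step cues begin.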
DOI: 10.1145/3373625.3418002 | Published: 2020-10-26
Citations: 2
Exploring Collection of Sign Language Datasets: Privacy, Participation, and Model Performance
Danielle Bragg, Oscar Koller, Naomi K. Caselli, W. Thies
As machine learning algorithms continue to improve, collecting training data becomes increasingly valuable. At the same time, increased focus on data collection may introduce compounding privacy concerns. Accessibility projects in particular may put vulnerable populations at risk, as disability status is sensitive, and collecting data from small populations limits anonymity. To help address privacy concerns while maintaining algorithmic performance on machine learning tasks, we propose privacy-enhancing distortions of training datasets. We explore this idea through the lens of sign language video collection, which is crucial for advancing sign language recognition and translation. We present a web study exploring signers’ concerns in contributing to video corpora and their attitudes about using filters, and a computer vision experiment exploring sign language recognition performance with filtered data. Our results suggest that privacy concerns may exist in contributing to sign language corpora, that filters (especially expressive avatars and blurred faces) may impact willingness to participate, and that training on more filtered data may boost recognition accuracy in some cases.
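The paper's filters include blurred faces; the exact filtering pipeline is not given in the abstract. As a minimal sketch of that kind of privacy-enhancing distortion, assuming nothing about the authors' implementation (a real system would apply a vision library to video frames), a box blur over a small grayscale image can be written in pure Python:

```python
# Minimal box-blur sketch: each pixel becomes the mean of its
# (2*radius+1)^2 neighborhood, clipped at the image border.
# Illustrative only; not the paper's actual filter implementation.
def box_blur(image, radius=1):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A single bright pixel gets spread over its neighborhood:
sharp = [[0, 0, 0],
         [0, 255, 0],
         [0, 0, 0]]
blurred = box_blur(sharp)  # center becomes 255/9, corners 255/4
```

Blurring discards high-frequency identifying detail while keeping coarse structure, which is the trade-off the paper's recognition experiment probes.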
DOI: 10.1145/3373625.3417024 | Published: 2020-10-26
Citations: 26
HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted Display
Ru Guo, Yiru Yang, Johnson Kuang, Xue Bin, D. Jain, Steven M. Goodman, Leah Findlater, Jon E. Froehlich
Head-mounted displays can provide private and glanceable speech and sound feedback to deaf and hard of hearing people, yet prior systems have largely focused on speech transcription. We introduce HoloSound, a HoloLens-based augmented reality (AR) prototype that uses deep learning to classify and visualize sound identity and location in addition to providing speech transcription. This poster paper presents a working proof-of-concept prototype, and discusses future opportunities for advancing AR-based sound awareness.
DOI: 10.1145/3373625.3418031 | Published: 2020-10-26
Citations: 17
Lessons Learned in Designing AI for Autistic Adults
Andrew Begel, John C. Tang, Sean Andrist, Michael Barnett, Tony Carbary, Piali Choudhury, Edward Cutrell, Alberto Fung, Sasa Junuzovic, Daniel J. McDuff, Kael Rowan, Shibashankar Sahoo, Jennifer Frances Waldern, Jessica Wolk, Hui Zheng, Annuska Zolyomi
Through an iterative design process using Wizard of Oz (WOz) prototypes, we designed a video calling application for people with Autism Spectrum Disorder. Our Video Calling for Autism prototype provided an Expressiveness Mirror that gave feedback to autistic people on how their facial expressions might be interpreted by their neurotypical conversation partners. This feedback was in the form of emojis representing six emotions and a bar indicating the amount of overall expressiveness demonstrated by the user. However, when we built a working prototype and conducted a user study with autistic participants, their negative feedback caused us to reconsider how our design process led to a prototype that they did not find useful. We reflect on the design challenges around developing AI technology for an autistic user population, how Wizard of Oz prototypes can be overly optimistic in representing AI-driven prototypes, how autistic research participants can respond differently to user experience prototypes of varying fidelity, and how designing for people with diverse abilities needs to include that population in the development process.
DOI: 10.1145/3373625.3418305 | Published: 2020-10-26
Citations: 9
Access Differential and Inequitable Access: Inaccessibility for Doctoral Students in Computing
Kristen Shinohara, Michael J. McQuaid, Nayeri Jacobo
Increasingly, support for students with disabilities in post-secondary education has boosted enrollment and graduation rates. Yet, such successes have not translated into doctoral degrees. For example, in 2018, the National Science Foundation reported that 3% of math and computer science doctorate recipients identified as having a visual limitation, while 1.2% identified as having a hearing limitation. To better understand why few students with disabilities pursue PhDs in computing and related fields, we conducted an interview study with 19 current and former graduate students who identified as blind or low vision, or deaf or hard of hearing. We asked participants about challenges or barriers they encountered in graduate school. We asked about accommodations they received, or did not receive, and about different forms of support. We found that a wide range of inaccessibility issues in research, courses, and in managing accommodations impacted student progress.
DOI: 10.1145/3373625.3416989 | Published: 2020-10-26
Citations: 20
Action Blocks: Making Mobile Technology Accessible for People with Cognitive Disabilities
Lia Carrari, Rain Michaels, Ajit Narayanan, Lei Shi, Xiang Xiao
Mobile technology has become an indispensable part of our daily lives. From home automation to digital entertainment, we rely on mobile technology to progress through our daily routines. However, mobile technology requires complex interactions and nontrivial cognitive effort to use, and is often inaccessible to people with cognitive disabilities. With this in mind, we designed Action Blocks, an application that provides one-tap access to digital services on Android. A user and/or their caregiver can configure an Action Block with customized commands, such as calling a certain person or turning on the lights. The Action Block is associated with a memorable image (e.g., a photo of the person to call, an icon of a lightbulb) and placed on the device home screen as a one-tap button, as shown in Figure 1. Action Blocks was launched in May 2020 and received much useful feedback. In this demonstration, we report the key design considerations of Action Blocks as well as the lessons we learned from user feedback.
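The core data model described above, one memorable image bound to one customized command, can be sketched as follows. This is an illustrative sketch only, not Google's implementation; all names are hypothetical.

```python
# Hypothetical sketch of the one-tap mapping: a label and a memorable
# image bound to a single command, triggered by one tap.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionBlock:
    label: str        # spoken/visible name of the block
    image_path: str   # memorable picture shown on the home screen
    command: Callable[[], str]  # the single action the tap performs

# Example blocks a caregiver might configure (paths are made up):
blocks = [
    ActionBlock("Call Mom", "photos/mom.png", lambda: "dialing mom"),
    ActionBlock("Lights on", "icons/bulb.png", lambda: "lights on"),
]

def tap(block: ActionBlock) -> str:
    """One tap runs the block's configured command, no further steps."""
    return block.command()
```

The design point is that the entire interaction collapses to recognizing a picture and tapping it once, removing the multi-step navigation that makes standard mobile UIs inaccessible.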
DOI: 10.1145/3373625.3418043 | Published: 2020-10-26
Citations: 2
Disability design and innovation in computing research in low resource settings
Dafne Zuleima Morgado Ramirez, G. Barbareschi, M. Donovan-Hall, Mohammad Sobuh, Nida' Elayyan, Brenda T. Nakandi, R. Ssekitoleko, J. Olenja, G. Magomere, Sibylle Daymond, Jake Honeywill, Ian Harris, N. Mbugua, L. Kenney, C. Holloway
Worldwide, 80% of people with disabilities live in low resourced settings, rural areas, informal settlements, and multidimensional poverty. ICT4D leverages technological innovations to deliver programs for international development, but very few such efforts focus on, or involve, people with disabilities in low resource settings. Moreover, most studies focus largely on publishing research results that highlight positive stories, not on the learnings and recommendations regarding research processes. In short, researchers rarely examine what was challenging in the process of collaboration. We present reflections from the field across four studies. Our contributions are: (1) an overview of past work in computing with a focus on disability in low resource settings and (2) learnings and recommendations from four collaborative projects in Uganda, Jordan and Kenya over the last two years, which are relevant for future HCI studies in low resource settings with communities with disabilities.
DOI: 10.1145/3373625.3417301 | Published: 2020-10-26
Citations: 8
AIGuide: An Augmented Reality Hand Guidance Application for People with Visual Impairments
Nelson Daniel Troncoso Aldas, Sooyeon Lee, Chonghan Lee, M. Rosson, John Millar Carroll, N. Vijaykrishnan
Locating and grasping objects is a critical task in people’s daily lives. For people with visual impairments, this task can be a daily struggle. The support of augmented reality frameworks in smartphones has the potential to overcome the limitations of current object detection applications designed for people with visual impairments. We present AIGuide, a self-contained offline smartphone application that leverages augmented reality technology to help users locate and pick up objects around them. We conducted a user study to validate its effectiveness at providing guidance, compare it to other assistive technology form factors, evaluate the use of multimodal feedback, and provide feedback about the overall experience. Our results show that AIGuide is a promising technology to help people with visual impairments locate and acquire objects in their daily routine.
DOI: 10.1145/3373625.3417028 | Published: 2020-10-26
Citations: 16
Comparison of Methods for Teaching Accessibility in University Computing Courses
Qiwen Zhao, Vaishnavi Mande, Paula Conn, Sedeeq Al-khazraji, Kristen Shinohara, S. Ludi, Matt Huenerfauth
With an increasing demand for computing professionals with skills in accessibility, it is important for university faculty to select effective methods for educating computing students about barriers faced by users with disabilities and approaches to improving accessibility. While some prior work had evaluated accessibility educational interventions, many prior studies have consisted of firsthand reports from faculty or short-term evaluations. This paper reports on the results of a systematic evaluation of methods for teaching accessibility from a longitudinal study across 29 sections of a human-computer interaction course (required for students in a computing degree program), as taught by 10 distinct professors, throughout four years, with over 400 students. A control condition (course without accessibility content) was compared to four intervention conditions: week of lectures on accessibility, team design project requiring some accessibility consideration, interaction with someone with a disability, and collaboration with a team member with a disability. Comparing survey data immediately before and after the course, we found that the Lectures, Projects, and Interaction conditions were effective in increasing students' likelihood to consider people with disabilities on a design scenario, awareness of accessibility barriers, and knowledge of technical approaches for improving accessibility - with students in the Team Member condition having higher scores on the final measure only. However, comparing survey responses from students immediately before the course and from approximately 2 years later, almost no significant gains were observed, suggesting that interventions within a single course are insufficient for producing long-term changes in measures of students’ accessibility learning. 
This study contributes to empirical knowledge to inform university faculty in selecting effective methods for teaching accessibility, and it motivates further research on how to achieve long-term changes in accessibility knowledge, e.g. by reinforcing accessibility throughout a degree program.
DOI: 10.1145/3373625.3417013 | Published: 2020-10-26
Citations: 19
Designing and Evaluating Head-based Pointing on Smartphones for People with Motor Impairments
Muratcan Cicek, Ankit Dave, Wenxin Feng, Michael Xuelin Huang, J. Haines, Jeffrey Nichols
Head-based pointing is an alternative input method for people with motor impairments to access computing devices. This paper proposes a calibration-free head-tracking input mechanism for mobile devices that makes use of the front-facing camera that is standard on most devices. To evaluate our design, we performed two Fitts' Law studies. First, we compared our method with an existing head-based pointing solution, Eva Facial Mouse, with subjects without motor impairments. Second, we conducted what we believe is the first Fitts' Law study using a mobile head tracker with subjects with motor impairments. We extend prior studies with a greater range of indices of difficulty (IDs) [1.62, 5.20] bits and achieved promising throughput (average 0.61 bps with motor impairments and 0.90 bps without). We found that users' throughput was 0.95 bps on average in our most difficult task (ID: 5.20 bits), which involved selecting a target half the size of the Android recommendation for a touch target after moving nearly the full height of the screen. This suggests the system is capable of fine-precision tasks. We summarize our observations and the lessons from our user studies into a set of design guidelines for head-based pointing systems.
DOI: 10.1145/3373625.3416994 · Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020-10-26
Citations: 6
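The Fitts' Law quantities in the abstract above (index of difficulty in bits, throughput in bits per second) follow from a standard formula. The sketch below uses the Shannon formulation of ID; this is an assumption, since the abstract does not state which formulation the authors used, and the distance/width values are illustrative numbers, not data from the paper.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(distance / width + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time: float) -> float:
    """Throughput in bits per second: ID divided by movement time."""
    return index_of_difficulty(distance, width) / movement_time

# Illustrative only: a distance-to-width ratio of ~35.8 gives an ID of
# ~5.2 bits (the hardest task reported); completing it in ~5.5 s would
# yield a throughput near the reported 0.95 bps.
hard_id = index_of_difficulty(35.76, 1.0)  # ~5.20 bits
tp = throughput(35.76, 1.0, 5.47)          # ~0.95 bps
```

Under this formulation, reaching 0.95 bps on a 5.2-bit task implies an average movement time of roughly five and a half seconds, which gives a concrete sense of scale for the reported throughput figures.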