
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

Aided Nonverbal Communication through Physical Expressive Objects
Stephanie Valencia, M. Steidl, Michael L. Rivera, Cynthia L. Bennett, Jeffrey P. Bigham, H. Admoni
Augmentative and alternative communication (AAC) devices enable speech-based communication, but generating speech is not the only resource needed to have a successful conversation. Being able to signal one wishes to take a turn by raising a hand or providing some other cue is critical in securing a turn to speak. Experienced conversation partners know how to recognize the nonverbal communication an augmented communicator (AC) displays, but these same nonverbal gestures can be hard to interpret by people who meet an AC for the first time. Prior work has identified motion-based AAC as a viable and underexplored modality for increasing ACs’ agency in conversation. We build on this prior work to dig deeper into a particular case study on motion-based AAC by co-designing a physical expressive object to support ACs during conversations. We found that our physical expressive object could support communication with unfamiliar partners. As such, we present our process and resulting lessons on the designed object itself and the co-design process.
{"title":"Aided Nonverbal Communication through Physical Expressive Objects","authors":"Stephanie Valencia, M. Steidl, Michael L. Rivera, Cynthia L. Bennett, Jeffrey P. Bigham, H. Admoni","doi":"10.1145/3441852.3471228","DOIUrl":"https://doi.org/10.1145/3441852.3471228","url":null,"abstract":"Augmentative and alternative communication (AAC) devices enable speech-based communication, but generating speech is not the only resource needed to have a successful conversation. Being able to signal one wishes to take a turn by raising a hand or providing some other cue is critical in securing a turn to speak. Experienced conversation partners know how to recognize the nonverbal communication an augmented communicator (AC) displays, but these same nonverbal gestures can be hard to interpret by people who meet an AC for the first time. Prior work has identified motion-based AAC as a viable and underexplored modality for increasing ACs’ agency in conversation. We build on this prior work to dig deeper into a particular case study on motion-based AAC by co-designing a physical expressive object to support ACs during conversations. We found that our physical expressive object could support communication with unfamiliar partners. As such, we present our process and resulting lessons on the designed object itself and the co-design process.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129676221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
iReadMore: A Reading Therapy App Co-Designed by People with Aphasia and Alexia
Thomas Langford, A. Leff, D. Romano
We present the iReadMore app, a reading therapy for people with acquired reading or language impairments (known as alexia and aphasia respectively). The app was co-designed by people with alexia and aphasia, and has been demonstrated to significantly improve reading speed and accuracy in a randomized controlled trial. It is intended to be used at home without the support of a therapist. Therefore, accessibility and maintaining therapy engagement are key elements in achieving the high therapy doses required for rehabilitation of reading impairments. As such, these elements were developed in a co-design process that included 50 participants over 2 phases. This demonstration will present the flow of the application and detail how we translated a clinically validated prototype into a publicly available therapy app used by hundreds of people with acquired reading impairments since its release in March 2021.
{"title":"iReadMore: A Reading Therapy App Co-Designed by People with Aphasia and Alexia","authors":"Thomas Langford, A. Leff, D. Romano","doi":"10.1145/3441852.3476518","DOIUrl":"https://doi.org/10.1145/3441852.3476518","url":null,"abstract":"We present the iReadMore app, a reading therapy for people with acquired reading or language impairments (known as alexia and aphasia respectively). The app was co-designed by people with alexia and aphasia, and has been demonstrated to significantly improve reading speed and accuracy in a randomized controlled trial. It is intended to be used at home without the support of a therapist. Therefore, accessibility and maintaining therapy engagement are key elements in achieving the high therapy doses required for rehabilitation of reading impairments. As such, these elements were developed in a co-design process that included 50 participants over 2 phases. This demonstration will present the flow of the application and detail how we translated a clinically validated prototype into a publicly available therapy app used by hundreds of people with acquired reading impairments since its release in March 2021.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121200071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VStroll: An Audio-based Virtual Exploration to Encourage Walking among People with Vision Impairments
Gesu India, Mohit Jain, Pallav Karya, Nirmalendu Diwakar, Manohar Swaminathan
Current infrastructure design, discouragement by parents, and lack of internal motivation act as barriers for people with visual impairments (PVIs) to perform physical activities at par with sighted individuals. This has triggered accessible exercise technologies to be an emerging area of research. However, most current solutions have either safety concerns and/or are expensive, hence limiting their mass adoption. In our work, we propose VStroll, a smartphone app to promote walking among PVIs, by enabling them to virtually explore real-world locations, while physically walking in the safety and comfort of their homes. Walking is a cheap, accessible, and a common physical activity for people with blindness. VStroll has several added features, such as places-of-interest (POI) announcement using spatial audio and voice input for route selection at every intersection, which helps the user to gain spatial awareness while walking. To understand the usability of VStroll, 16 participants used our app for five days, followed by a semi-structured interview. Overall, our participants took 253 trips, walked for 50.8 hours covering 121.6 kms. We uncovered novel insights, such as discovering new POIs and fitness-related updates acted as key motivators, route selection boosted their confidence in navigation, and spatial audio resulted in an immersive experience. We conclude the paper with key lessons learned to promote accessible exercise technologies.
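The abstract above describes VStroll's interface features rather than an implementation, and the paper does not publish code. As a rough, hypothetical sketch of the kind of logic a spatial-audio POI announcement involves (all function and variable names are ours, not the authors'), the Python snippet below computes a POI's bearing relative to the walker's virtual heading and maps it to an equal-power stereo pan before composing the announcement text.

```python
import math

def relative_bearing(heading_deg, user_xy, poi_xy):
    """Bearing of the POI relative to the walker's heading, in degrees (negative = left)."""
    dx = poi_xy[0] - user_xy[0]
    dy = poi_xy[1] - user_xy[1]
    absolute = math.degrees(math.atan2(dx, dy))      # 0 deg = straight ahead / north
    return (absolute - heading_deg + 180.0) % 360.0 - 180.0

def stereo_pan(bearing_deg):
    """Map a relative bearing to equal-power (left_gain, right_gain) stereo gains."""
    clamped = max(-90.0, min(90.0, bearing_deg))     # POIs behind pan hard left/right
    angle = math.radians((clamped + 90.0) / 2.0)     # -90..90 deg -> 0..90 deg
    return math.cos(angle), math.sin(angle)

def announce_poi(name, distance_m, heading_deg, user_xy, poi_xy):
    bearing = relative_bearing(heading_deg, user_xy, poi_xy)
    left, right = stereo_pan(bearing)
    side = "ahead" if abs(bearing) < 20 else ("to your right" if bearing > 0 else "to your left")
    # A real app would hand the gains to a spatial-audio / TTS engine here.
    print(f"{name}, about {distance_m:.0f} metres {side} (pan L={left:.2f} R={right:.2f})")

# Walker at the origin facing north; a cafe 30 m east and 30 m north of them.
announce_poi("Cafe", 42, heading_deg=0.0, user_xy=(0.0, 0.0), poi_xy=(30.0, 30.0))
```

A real system would use geographic coordinates and a proper audio engine rather than this flat local grid; the sketch only illustrates how bearing can drive both the pan and the wording of an announcement.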
{"title":"VStroll: An Audio-based Virtual Exploration to Encourage Walking among People with Vision Impairments","authors":"Gesu India, Mohit Jain, Pallav Karya, Nirmalendu Diwakar, Manohar Swaminathan","doi":"10.1145/3441852.3471206","DOIUrl":"https://doi.org/10.1145/3441852.3471206","url":null,"abstract":"Current infrastructure design, discouragement by parents, and lack of internal motivation act as barriers for people with visual impairments (PVIs) to perform physical activities at par with sighted individuals. This has triggered accessible exercise technologies to be an emerging area of research. However, most current solutions have either safety concerns and/or are expensive, hence limiting their mass adoption. In our work, we propose VStroll, a smartphone app to promote walking among PVIs, by enabling them to virtually explore real-world locations, while physically walking in the safety and comfort of their homes. Walking is a cheap, accessible, and a common physical activity for people with blindness. VStroll has several added features, such as places-of-interest (POI) announcement using spatial audio and voice input for route selection at every intersection, which helps the user to gain spatial awareness while walking. To understand the usability of VStroll, 16 participants used our app for five days, followed by a semi-structured interview. Overall, our participants took 253 trips, walked for 50.8 hours covering 121.6 kms. We uncovered novel insights, such as discovering new POIs and fitness-related updates acted as key motivators, route selection boosted their confidence in navigation, and spatial audio resulted in an immersive experience. We conclude the paper with key lessons learned to promote accessible exercise technologies.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"285 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121306621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Determining a Taxonomy of Accessible Phrases During Exercise Instruction for People with Visual Impairments for Text Analysis
Jeehan Malik, Masuma Akter Rumi, Morgan DeNeve, Calvin Skalla, Lindsay E Ball, L. Lieberman, Kyle Rector
Physical activity is an important part of quality life, however people with visual impairments (PVIs) are less likely to participate in physical activity than their sighted peers. One barrier is that exercise instructors may not give accessible verbal instructions. There is a potential for text analysis to determine these phrases, and in response provide more accessible instructions. First, a taxonomy of accessible phrases needs to be developed. To address this problem, we conducted user studies with 10 PVIs exercising along with audio and video aerobic workouts. We analyzed video footage of their exercise along with interviews to determine a preliminary set of phrases that are helpful or confusing. We then conducted an iterative qualitative analysis of six other exercise videos and sought expert feedback to derive our taxonomy. We hope these findings inform systems that analyze instructional phrases for accessibility to PVIs.
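The paper argues that text analysis could identify such phrases but does not release its taxonomy or a classifier. As a hedged illustration only, the sketch below matches spoken exercise instructions against a small, made-up set of visually dependent phrase patterns, the kind of categories a derived taxonomy might feed into.

```python
import re

# Illustrative categories and patterns (not the paper's taxonomy): phrases that
# assume the listener can see a demonstration or a pointed-at location.
VISUAL_PATTERNS = {
    "demonstration-dependent": [r"\blike this\b", r"\bwatch me\b", r"\bas you can see\b"],
    "deictic reference": [r"\bover here\b", r"\bover there\b", r"\bthis way\b"],
}

def flag_inaccessible_phrases(instruction: str) -> dict:
    """Return the visually dependent phrases found in one spoken instruction."""
    hits = {}
    for category, patterns in VISUAL_PATTERNS.items():
        found = [m.group(0) for p in patterns for m in re.finditer(p, instruction.lower())]
        if found:
            hits[category] = found
    return hits

print(flag_inaccessible_phrases("Now raise your arms like this and step over here."))
# {'demonstration-dependent': ['like this'], 'deictic reference': ['over here']}
```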
{"title":"Determining a Taxonomy of Accessible Phrases During Exercise Instruction for People with Visual Impairments for Text Analysis","authors":"Jeehan Malik, Masuma Akter Rumi, Morgan DeNeve, Calvin Skalla, Lindsay E Ball, L. Lieberman, Kyle Rector","doi":"10.1145/3441852.3476567","DOIUrl":"https://doi.org/10.1145/3441852.3476567","url":null,"abstract":"Physical activity is an important part of quality life, however people with visual impairments (PVIs) are less likely to participate in physical activity than their sighted peers. One barrier is that exercise instructors may not give accessible verbal instructions. There is a potential for text analysis to determine these phrases, and in response provide more accessible instructions. First, a taxonomy of accessible phrases needs to be developed. To address this problem, we conducted user studies with 10 PVIs exercising along with audio and video aerobic workouts. We analyzed video footage of their exercise along with interviews to determine a preliminary set of phrases that are helpful or confusing. We then conducted an iterative qualitative analysis of six other exercise videos and sought expert feedback to derive our taxonomy. We hope these findings inform systems that analyze instructional phrases for accessibility to PVIs.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123074429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Going Beyond One-Size-Fits-All Image Descriptions to Satisfy the Information Wants of People Who are Blind or Have Low Vision
Abigale Stangl, Nitin Verma, K. Fleischmann, M. Morris, D. Gurari
Image descriptions are how people who are blind or have low vision (BLV) access information depicted within images. To our knowledge, no prior work has examined how a description for an image should be designed for different scenarios in which users encounter images. Scenarios consist of the information goal the person has when seeking information from or about an image, paired with the source where the image is found. To address this gap, we interviewed 28 people who are BLV to learn how the scenario impacts what image content (information) should go into an image description. We offer our findings as a foundation for considering how to design next-generation image description technologies that can both (A) support a departure from one-size-fits-all image descriptions to context-aware descriptions, and (B) reveal what content to include in minimum viable descriptions for a large range of scenarios.
{"title":"Going Beyond One-Size-Fits-All Image Descriptions to Satisfy the Information Wants of People Who are Blind or Have Low Vision","authors":"Abigale Stangl, Nitin Verma, K. Fleischmann, M. Morris, D. Gurari","doi":"10.1145/3441852.3471233","DOIUrl":"https://doi.org/10.1145/3441852.3471233","url":null,"abstract":"Image descriptions are how people who are blind or have low vision (BLV) access information depicted within images. To our knowledge, no prior work has examined how a description for an image should be designed for different scenarios in which users encounter images. Scenarios consist of the information goal the person has when seeking information from or about an image, paired with the source where the image is found. To address this gap, we interviewed 28 people who are BLV to learn how the scenario impacts what image content (information) should go into an image description. We offer our findings as a foundation for considering how to design next-generation image description technologies that can both (A) support a departure from one-size-fits-all image descriptions to context-aware descriptions, and (B) reveal what content to include in minimum viable descriptions for a large range of scenarios.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114674866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Equivalent Telecommunications Access on Mobile Devices
Gary W. Behm, S. Ali, Spencer Montan
Currently, Deaf and hard of hearing (D/HH) callers using mobile phones cannot place a video or captioned based call to a Telecommunication Relay Services (TRS) Communication Assistant (CA) using the carrier assigned mobile phone number. D/HH callers need to use accessible hardware (video phones, captioned telephones, Teletypewriter, TTY) or download mobile applications to place or receive calls. D/HH callers’ generalized and emergency contact information in captioned/video applications is not linked to the built-in directory. Through our research and development work, we propose a concept to allow D/HH callers to have the option to make captioned and video calls through mobile device native dialer systems without the need to download applications. This proposed concept includes the all-in-one solution of Video Relay Services (VRS), 3-Party Video Calls, Voice-to-Text Captioning, and NextGen 911 built into the dialer systems. This demonstration introduces a concept that would make placing and receiving calls through TRS more native-like to that of auditory telephone users.
{"title":"Equivalent Telecommunications Access on Mobile Devices","authors":"Gary W. Behm, S. Ali, Spencer Montan","doi":"10.1145/3441852.3476535","DOIUrl":"https://doi.org/10.1145/3441852.3476535","url":null,"abstract":"Currently, Deaf and hard of hearing (D/HH) callers using mobile phones cannot place a video or captioned based call to a Telecommunication Relay Services (TRS) Communication Assistant (CA) using the carrier assigned mobile phone number. D/HH callers need to use accessible hardware (video phones, captioned telephones, Teletypewriter, TTY) or download mobile applications to place or receive calls. D/HH callers’ generalized and emergency contact information in captioned/video applications is not linked to the built-in directory. Through our research and development work, we propose a concept to allow D/HH callers to have the option to make captioned and video calls through mobile device native dialer systems without the need to download applications. This proposed concept includes the all-in-one solution of Video Relay Services (VRS), 3-Party Video Calls, Voice-to-Text Captioning, and NextGen 911 built into the dialer systems. This demonstration introduces a concept that would make placing and receiving calls through TRS more native-like to that of auditory telephone users.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129576504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lost in Translation: Challenges and Barriers to Sign Language-Accessible User Research
Amelie Unger, D. Wallach, Nicole Jochems
In this experience report, we describe an approach to ability-based focus groups with sign language users in a remote environment. We discuss our main lessons learned in terms of requirements for sign language-accessibility within research, calling out issues such as the need to address users in their natural language, ensuring translation for all parts of research processes, and including users not only within the conducted method but already within preparation phases. Based on requirements such as these, we argue that HCI research currently faces a dilemma when it comes to hearing researchers working with the sign language user population—having to handle the increasingly emphasized demand for conducting user research with this specific target group while lacking accessible tools and procedures to do so. Concluding our experience report, we address this dilemma by discussing the two sides of its fundamental challenge: Inadequate communication with and insufficient representation of sign language users within research.
{"title":"Lost in Translation: Challenges and Barriers to Sign Language-Accessible User Research","authors":"Amelie Unger, D. Wallach, Nicole Jochems","doi":"10.1145/3441852.3476473","DOIUrl":"https://doi.org/10.1145/3441852.3476473","url":null,"abstract":"In this experience report, we describe an approach to ability-based focus groups with sign language users in a remote environment. We discuss our main lessons learned in terms of requirements for sign language-accessibility within research, calling out issues such as the need to address users in their natural language, ensuring translation for all parts of research processes, and including users not only within the conducted method but already within preparation phases. Based on requirements such as these, we argue that HCI research currently faces a dilemma when it comes to hearing researchers working with the sign language user population—having to handle the increasingly emphasized demand for conducting user research with this specific target group while lacking accessible tools and procedures to do so. Concluding our experience report, we address this dilemma by discussing the two sides of its fundamental challenge: Inadequate communication with and insufficient representation of sign language users within research.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129089521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Accessible Citizen Science, by people with intellectual disability
Robert L. Howlett, Laurianne Sitbon, Maria Hoogstrate, Saminda Sundeepa Balasuriya
This research explores the conditions and opportunities for citizen science applications to enhance their accessibility to people with intellectual disability (ID). In this paper, we present how the knowledge gathered by co-designing with a group of 3 participants with ID led to a design judged accessible and engaging by another group of 4 participants with ID. We contribute the key elements of that design: static subject, visual engagement, embodiment and social connectedness.
{"title":"Accessible Citizen Science, by people with intellectual disability","authors":"Robert L. Howlett, Laurianne Sitbon, Maria Hoogstrate, Saminda Sundeepa Balasuriya","doi":"10.1145/3441852.3476558","DOIUrl":"https://doi.org/10.1145/3441852.3476558","url":null,"abstract":"This research explores the conditions and opportunities for citizen science applications to enhance their accessibility to people with intellectual disability (ID). In this paper, we present how the knowledge gathered by co-designing with a group of 3 participants with ID led to a design judged accessible and engaging by another group of 4 participants with ID. We contribute the key elements of that design: static subject, visual engagement, embodiment and social connectedness.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127658255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Participatory Design and Research: Challenges for Augmentative and Alternative Communication Technologies
A. Waller
User-Centred Design (UCD) and Participatory Action Research (PAR) have laid the foundations for Universal Accessibility. The inclusion of disabled end users in the design of digital Assistive Technology (dAT) is now an expectation within the accessibility field. However, some areas of dAT research fall short of this gold standard, especially when end users have speech, language and/or cognitive impairments. This is a particular challenge when developing technology for individuals who use Augmentative and Alternative Communication (AAC). In her ASSETS 2021 keynote talk, Prof. Waller provides a brief history of the development of AAC technologies since the early 1970s with a focus on users with severe speech and physical disabilities, illustrating that, despite significant advances in technology, the underlying design of AAC has not changed. This is in part due to challenges associated with the inclusion of a diverse user group in all stages of research from project ideation to product evaluation. She will demonstrate how a more inclusive approach might be achieved and will challenge the research community to consider the nature of interdisciplinary research teams and their role in setting the research agenda.
{"title":"Participatory Design and Research: Challenges for Augmentative and Alternative Communication Technologies","authors":"A. Waller","doi":"10.1145/3441852.3487958","DOIUrl":"https://doi.org/10.1145/3441852.3487958","url":null,"abstract":"User-Centred Design (UCD) and Participatory Action Research (PAR) have laid the foundations for Universal Accessibility. The inclusion of disabled end users in the design of digital Assistive Technology (dAT) is now an expectation within the accessibility field. However, some areas of dAT research fall short of this gold standard, especially when end users have speech, language and/or cognitive impairments. This is a particular challenge when developing technology for individuals who use Augmentative and Alternative Communication (AAC). In her ASSETS 2021 keynote talk, Prof. Waller provides a brief history of the development of AAC technologies since the early 1970s with a focus on users with severe speech and physical disabilities, illustrating that, despite significant advances in technology, the underlying design of AAC has not changed. This is in part due to challenges associated with the inclusion of a diverse user group in all stages of research from project ideation to product evaluation. She will demonstrate how a more inclusive approach might be achieved and will challenge the research community to consider the nature of interdisciplinary research teams and their role in setting the research agenda.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121395784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
How Teachers of the Visually Impaired Compensate with the Absence of Accessible Block-Based Languages
Aboubakar Mountapmbeme, S. Ludi
The past five years have witnessed an increase in research to improve the accessibility of block-based programming environments to people with visual impairments. This has led to the creation of a few accessible block-based programming environments with some researchers considering tangible alternatives or hybrid environments. However, the literature says little about the learning experiences of K-12 students with visual impairments on these systems in educational settings. We try to fill this gap of knowledge with a report on an interview study with twelve teachers of K-12 students with visual impairments. Through the lens of the teachers, we discovered that factors such as the students background, the teacher's CS background and the design of existing curricula influence the learning process of students with visual impairments learning how to code. In addition to discussing how they go about to mitigate the challenges that stem from these factors, teachers also reported on how they compensate for the lack of accessible block-based languages. Through this work, we offer insights into how the research community can improve the learning experiences of students with visual impairments including training teachers, ensuring students have basic computing skills, improving the curriculum and designing accessible on-screen block-based programming environments.
{"title":"How Teachers of the Visually Impaired Compensate with the Absence of Accessible Block-Based Languages","authors":"Aboubakar Mountapmbeme, S. Ludi","doi":"10.1145/3441852.3471221","DOIUrl":"https://doi.org/10.1145/3441852.3471221","url":null,"abstract":"The past five years have witnessed an increase in research to improve the accessibility of block-based programming environments to people with visual impairments. This has led to the creation of a few accessible block-based programming environments with some researchers considering tangible alternatives or hybrid environments. However, the literature says little about the learning experiences of K-12 students with visual impairments on these systems in educational settings. We try to fill this gap of knowledge with a report on an interview study with twelve teachers of K-12 students with visual impairments. Through the lens of the teachers, we discovered that factors such as the students background, the teacher's CS background and the design of existing curricula influence the learning process of students with visual impairments learning how to code. In addition to discussing how they go about to mitigate the challenges that stem from these factors, teachers also reported on how they compensate for the lack of accessible block-based languages. Through this work, we offer insights into how the research community can improve the learning experiences of students with visual impairments including training teachers, ensuring students have basic computing skills, improving the curriculum and designing accessible on-screen block-based programming environments.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125967628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9