Aided Nonverbal Communication through Physical Expressive Objects
Stephanie Valencia, M. Steidl, Michael L. Rivera, Cynthia L. Bennett, Jeffrey P. Bigham, H. Admoni
DOI: https://doi.org/10.1145/3441852.3471228
Augmentative and alternative communication (AAC) devices enable speech-based communication, but generating speech is not the only resource needed for a successful conversation. Being able to signal that one wishes to take a turn, by raising a hand or providing some other cue, is critical to securing a turn to speak. Experienced conversation partners know how to recognize the nonverbal communication an augmented communicator (AC) displays, but these same nonverbal gestures can be hard to interpret for people who meet an AC for the first time. Prior work has identified motion-based AAC as a viable and underexplored modality for increasing ACs' agency in conversation. We build on this prior work with an in-depth case study on motion-based AAC, co-designing a physical expressive object to support ACs during conversations. We found that our physical expressive object could support communication with unfamiliar partners. We present our process and the resulting lessons on both the designed object itself and the co-design process.
iReadMore: A Reading Therapy App Co-Designed by People with Aphasia and Alexia
Thomas Langford, A. Leff, D. Romano
DOI: https://doi.org/10.1145/3441852.3476518
We present the iReadMore app, a reading therapy for people with acquired reading or language impairments (known as alexia and aphasia respectively). The app was co-designed by people with alexia and aphasia, and has been demonstrated to significantly improve reading speed and accuracy in a randomized controlled trial. It is intended to be used at home without the support of a therapist. Therefore, accessibility and maintaining therapy engagement are key elements in achieving the high therapy doses required for rehabilitation of reading impairments. As such, these elements were developed in a co-design process that included 50 participants over 2 phases. This demonstration will present the flow of the application and detail how we translated a clinically validated prototype into a publicly available therapy app used by hundreds of people with acquired reading impairments since its release in March 2021.
VStroll: An Audio-based Virtual Exploration to Encourage Walking among People with Vision Impairments
Gesu India, Mohit Jain, Pallav Karya, Nirmalendu Diwakar, Manohar Swaminathan
DOI: https://doi.org/10.1145/3441852.3471206
Current infrastructure design, discouragement by parents, and lack of internal motivation act as barriers that keep people with visual impairments (PVIs) from performing physical activities on par with sighted individuals. This has made accessible exercise technologies an emerging area of research. However, most current solutions raise safety concerns, are expensive, or both, limiting their mass adoption. In our work, we propose VStroll, a smartphone app that promotes walking among PVIs by enabling them to virtually explore real-world locations while physically walking in the safety and comfort of their homes. Walking is a cheap, accessible, and common physical activity for people with blindness. VStroll has several added features, such as places-of-interest (POI) announcements using spatial audio and voice input for route selection at every intersection, which help the user gain spatial awareness while walking. To understand the usability of VStroll, 16 participants used our app for five days, followed by a semi-structured interview. Overall, our participants took 253 trips and walked for 50.8 hours, covering 121.6 km. We uncovered novel insights: discovering new POIs and fitness-related updates acted as key motivators, route selection boosted participants' confidence in navigation, and spatial audio resulted in an immersive experience. We conclude the paper with key lessons learned for promoting accessible exercise technologies.
Determining a Taxonomy of Accessible Phrases During Exercise Instruction for People with Visual Impairments for Text Analysis
Jeehan Malik, Masuma Akter Rumi, Morgan DeNeve, Calvin Skalla, Lindsay E Ball, L. Lieberman, Kyle Rector
DOI: https://doi.org/10.1145/3441852.3476567
Physical activity is an important part of quality of life; however, people with visual impairments (PVIs) are less likely to participate in physical activity than their sighted peers. One barrier is that exercise instructors may not give accessible verbal instructions. Text analysis has the potential to detect such phrases and, in response, provide more accessible instructions. First, however, a taxonomy of accessible phrases needs to be developed. To address this problem, we conducted user studies with 10 PVIs exercising along with audio and video aerobic workouts. We analyzed video footage of their exercise, along with interviews, to determine a preliminary set of phrases that are helpful or confusing. We then conducted an iterative qualitative analysis of six other exercise videos and sought expert feedback to derive our taxonomy. We hope these findings inform systems that analyze instructional phrases for accessibility to PVIs.
Going Beyond One-Size-Fits-All Image Descriptions to Satisfy the Information Wants of People Who are Blind or Have Low Vision
Abigale Stangl, Nitin Verma, K. Fleischmann, M. Morris, D. Gurari
DOI: https://doi.org/10.1145/3441852.3471233
Image descriptions are how people who are blind or have low vision (BLV) access information depicted within images. To our knowledge, no prior work has examined how a description for an image should be designed for different scenarios in which users encounter images. Scenarios consist of the information goal the person has when seeking information from or about an image, paired with the source where the image is found. To address this gap, we interviewed 28 people who are BLV to learn how the scenario impacts what image content (information) should go into an image description. We offer our findings as a foundation for considering how to design next-generation image description technologies that can both (A) support a departure from one-size-fits-all image descriptions to context-aware descriptions, and (B) reveal what content to include in minimum viable descriptions for a large range of scenarios.
Equivalent Telecommunications Access on Mobile Devices
Gary W. Behm, S. Ali, Spencer Montan
DOI: https://doi.org/10.1145/3441852.3476535
Currently, Deaf and hard of hearing (D/HH) callers using mobile phones cannot place a video-based or captioned call to a Telecommunication Relay Services (TRS) Communication Assistant (CA) using their carrier-assigned mobile phone number. D/HH callers need to use accessible hardware (video phones, captioned telephones, or teletypewriters, TTYs) or download mobile applications to place or receive calls. Moreover, D/HH callers' generalized and emergency contact information in captioned/video applications is not linked to the built-in directory. Through our research and development work, we propose a concept that gives D/HH callers the option to make captioned and video calls through the mobile device's native dialer, without needing to download applications. The proposed concept builds Video Relay Services (VRS), three-party video calls, voice-to-text captioning, and NextGen 911 into the dialer as an all-in-one solution. This demonstration introduces a concept that would make placing and receiving calls through TRS closer to the native experience of hearing telephone users.
Lost in Translation: Challenges and Barriers to Sign Language-Accessible User Research
Amelie Unger, D. Wallach, Nicole Jochems
DOI: https://doi.org/10.1145/3441852.3476473
In this experience report, we describe an approach to ability-based focus groups with sign language users in a remote environment. We discuss our main lessons learned in terms of requirements for sign language accessibility within research, calling out issues such as the need to address users in their natural language, ensuring translation for all parts of the research process, and including users not only in the conducted method but already in the preparation phases. Based on requirements such as these, we argue that HCI research currently faces a dilemma when it comes to hearing researchers working with the sign language user population: they must handle the increasingly emphasized demand for conducting user research with this specific target group while lacking accessible tools and procedures to do so. Concluding our experience report, we address this dilemma by discussing the two sides of its fundamental challenge: inadequate communication with, and insufficient representation of, sign language users within research.
Accessible Citizen Science, by people with intellectual disability
Robert L. Howlett, Laurianne Sitbon, Maria Hoogstrate, Saminda Sundeepa Balasuriya
DOI: https://doi.org/10.1145/3441852.3476558
This research explores the conditions and opportunities for citizen science applications to enhance their accessibility to people with intellectual disability (ID). In this paper, we present how the knowledge gathered by co-designing with a group of 3 participants with ID led to a design judged accessible and engaging by another group of 4 participants with ID. We contribute the key elements of that design: static subject, visual engagement, embodiment, and social connectedness.
Participatory Design and Research: Challenges for Augmentative and Alternative Communication Technologies
A. Waller
DOI: https://doi.org/10.1145/3441852.3487958
User-Centred Design (UCD) and Participatory Action Research (PAR) have laid the foundations for Universal Accessibility. The inclusion of disabled end users in the design of digital Assistive Technology (dAT) is now an expectation within the accessibility field. However, some areas of dAT research fall short of this gold standard, especially when end users have speech, language and/or cognitive impairments. This is a particular challenge when developing technology for individuals who use Augmentative and Alternative Communication (AAC). In her ASSETS 2021 keynote talk, Prof. Waller provides a brief history of the development of AAC technologies since the early 1970s with a focus on users with severe speech and physical disabilities, illustrating that, despite significant advances in technology, the underlying design of AAC has not changed. This is in part due to challenges associated with the inclusion of a diverse user group in all stages of research from project ideation to product evaluation. She will demonstrate how a more inclusive approach might be achieved and will challenge the research community to consider the nature of interdisciplinary research teams and their role in setting the research agenda.
How Teachers of the Visually Impaired Compensate with the Absence of Accessible Block-Based Languages
Aboubakar Mountapmbeme, S. Ludi
DOI: https://doi.org/10.1145/3441852.3471221
The past five years have witnessed an increase in research to improve the accessibility of block-based programming environments for people with visual impairments. This has led to the creation of a few accessible block-based programming environments, with some researchers considering tangible alternatives or hybrid environments. However, the literature says little about the learning experiences of K-12 students with visual impairments using these systems in educational settings. We try to fill this knowledge gap with a report on an interview study with twelve teachers of K-12 students with visual impairments. Through the lens of the teachers, we discovered that factors such as the students' backgrounds, the teachers' CS backgrounds, and the design of existing curricula influence the learning process of students with visual impairments who are learning how to code. In addition to discussing how they mitigate the challenges that stem from these factors, teachers also reported on how they compensate for the lack of accessible block-based languages. Through this work, we offer insights into how the research community can improve the learning experiences of students with visual impairments, including training teachers, ensuring students have basic computing skills, improving the curriculum, and designing accessible on-screen block-based programming environments.