How Blind and Visually Impaired Composers, Producers, and Songwriters Leverage and Adapt Music Technology
W. Payne, A. Xu, Fabiha Ahmed, Lisa Ye, Amy Hurst
Today, music creation software and hardware are central to the workflow of most professional composers, producers, and songwriters. Music is an aural art form, but it is notated graphically, and highly visual mainstream technologies pose significant accessibility barriers to blind and visually impaired users. Very few studies address the current state of accessibility in music technologies, and fewer propose alternative designs. To address a lack of understanding about the experiences of blind and visually impaired music technology users, we conducted an interview study with 11 music creators who, we demonstrate, find ingenious workarounds to bend inaccessible technologies to their needs, but still face persistent barriers including a lack of options, a limited but persistent need for sighted help, and accessibility features that fail to cover all use cases. We reflect on our findings and present opportunities and guidelines to promote more inclusive design of future music technologies.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3417002
Travelling more independently: A Requirements Analysis for Accessible Journeys to Unknown Buildings for People with Visual Impairments
Christin Engel, Karin Müller, Angela Constantinescu, C. Loitsch, Vanessa Petrausch, G. Weber, R. Stiefelhagen
Planning and carrying out a journey to unknown places is much more difficult for people with visual impairments than for sighted people because, in addition to the usual travel arrangements, they also need to know whether the different parts of the travel chain are accessible at all. Their need for information is therefore presumably very high, ranging from the accessibility of public transport to that of outdoor and indoor environments. However, to the best of our knowledge, no study has examined in depth the requirements of both planning a trip and carrying it out while looking separately at the distinct needs of people with low vision and people with blindness. In this paper, we present a survey of 106 people with visual impairments, in which we examine the strategies they use to prepare for a journey to unknown buildings, how they orient themselves in unfamiliar buildings, and what materials they use. Our analysis shows that the requirements of people with blindness and people with low vision differ. Feedback from the participants reveals a large information gap, especially for orientation in buildings, regarding maps, the accessibility of buildings, and supporting systems. In particular, indoor maps are rarely available.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3417022
Value beyond function: analyzing the perception of wheelchair innovations in Kenya
G. Barbareschi, Sibylle Daymond, Jake Honeywill, Aneesha Singh, Dominic Noble, N. Mbugua, Ian Harris, Victoria Austin, C. Holloway
Innovations in the field of assistive technology are usually evaluated based on practical considerations related to their ability to perform certain functions. However, social and emotional aspects play a huge role in how people with disabilities interact with assistive products and services. Over a five-month period, we tested an innovative wheelchair service provision model that leverages 3D printing and Computer Aided Design to provide bespoke wheelchairs in Kenya. The study involved eight expert wheelchair users and five healthcare professionals who routinely provide wheelchair services in their community. Results show that users and providers alike attributed great value to both the novel service delivery model and the wheelchairs produced as part of the study. The reasons for their appreciation went far beyond practical considerations and were rooted in the fact that the service delivery model and the wheelchairs promoted core values of agency, empowerment, and self-expression.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3417017
Visual Content Considered Private by People Who are Blind
Abigale Stangl, Kristina Shiroma, Bo Xie, K. Fleischmann, D. Gurari
We present an empirical study into the visual content people who are blind consider to be private. We conduct a two-stage interview with 18 participants that identifies what they deem private in general and with respect to their use of services that describe their visual surroundings based on camera feeds from their personal devices. We then describe a taxonomy of private visual content that is reflective of our participants’ privacy-related concerns and values. We discuss how this taxonomy can benefit services that collect and sell visual data containing private information so such services are better aligned with their users.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3417014
Putting Tools in Hands: Designing Curriculum for a Nonvisual Soldering Workshop
Lauren Race, Joshua A. Miele, Chancey Fleet, Tom Igoe, Amy Hurst
Blind and low vision learners are underrepresented in STEM and maker culture, both of which are historically inaccessible. In this paper we describe our experience conducting a three-day nonvisual soldering workshop and discuss the opportunities and challenges for designing accessible electronics curricula. Workshop attendees learned nonvisual soldering skills, adapted from publications for blind and low vision electronics professionals [4, 13, 18], while building a complex circuit. We detail our curriculum design and its complexities for learners with different levels of technical experience and learning modalities. While our instruction pacing proved challenging for some, all attendees succeeded in operating hot soldering irons and gaining command of basic soldering techniques over the course of three days. Based on our findings, we provide recommendations for educators wanting to design similar nonvisual STEM curricula and workshops. These include supplying tactile and textual instruction to support multiple learning styles and pacing, and standardizing workshop materials to support nonvisual hands-on learning for novices.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3418011
Input Accessibility: A Large Dataset and Summary Analysis of Age, Motor Ability and Input Performance
Leah Findlater, Lotus Zhang
Age and motor ability are well-known to impact input performance. Past work examining these factors, however, has tended to focus on samples of 20-40 participants and has binned participants into a small set of age groups (e.g., “younger” vs. “older”). To foster a more nuanced understanding of how age and motor ability impact input performance, this short paper contributes: (1) a dataset from a large-scale study that captures input performance with a mouse and/or touchscreen from over 700 participants, as well as (2) summary analysis of a subset of 318 participants who range in age from 18 to 83 years old and of whom 53% reported a motor impairment. The analysis demonstrates the continuous relationship between age and input performance for users with and without motor impairments, but also illustrates that knowing a user's age and self-reported motor ability should not lead to assumptions about their input performance. The dataset, which contains mouse and touchscreen input traces, should allow for further exploration by other researchers.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3417031
Privacy Considerations of the Visually Impaired with Camera Based Assistive Technologies: Misrepresentation, Impropriety, and Fairness
T. Akter, Tousif Ahmed, Apu Kapadia, Manohar Swaminathan
Camera-based assistive technologies such as smart glasses can provide people with visual impairments (PVIs) with information about people in their vicinity. Although such ‘visually available’ information can enhance one’s social interactions, the privacy implications for bystanders from the perspective of PVIs remain underexplored. Motivated by prior findings of bystanders’ perspectives, we conducted two online surveys with visually impaired (N=128) and sighted (N=136) participants, with two ‘field-of-view’ (FoV) experimental conditions related to whether information about bystanders was gathered from the front of the glasses only or from all directions. We found that PVIs considered it ‘fair’ and equally useful to receive information from all directions. However, they reported being uncomfortable receiving some visually apparent information (such as weight and gender) about bystanders, as they felt it was ‘impolite’ or ‘improper’. Both PVIs and bystanders shared concerns about the fallibility of AI, where bystanders can be misrepresented by the devices. Our findings suggest that, beyond issues of social stigma, both PVIs and bystanders have shared concerns that need to be considered to improve the social acceptability of camera-based assistive technologies.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3417003
An Exploratory Study on Supporting Persons with Aphasia in Pakistan: Challenges and Opportunities
Waleed Riaz, Gulraiz Ali, M. Abid, Izma Naim Butt, A. Shahzad, S. Shahid
Since aphasia is one of the major problems faced by older individuals today, a growing amount of research is aimed at developing technology-based interventions for persons with aphasia (PwA). In developing countries such as Pakistan, aphasia patients face multiple problems and challenges due to a weak healthcare system and a severe lack of facilities and available resources. This work is an effort to understand the needs, problems, and challenges of PwA in order to provide more effective support. In recent years, Virtual Reality (VR) has been the focus of considerable research due to its deeply immersive environments and has shown great promise in improving language production and speech comprehension among aphasia patients. This study aims to evaluate the impact of VR enrichment activities on the language production and speech comprehension of PwA. It also explores the opportunities and challenges of using VR experiences with PwA through a thematic analysis of the observations recorded during the system evaluation.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3418024
Making GIFs Accessible
Cole Gleason, Amy Pavel, H. Gururaj, Kris M. Kitani, Jeffrey P. Bigham
Social media platforms feature short animations known as GIFs, but they are inaccessible to people with vision impairments. Unlike static images, GIFs contain action and visual indications of sound, which can be challenging to describe in alternative text descriptions. We examine a large sample of inaccessible GIFs on Twitter to document how they are used and what visual elements they contain. In interviews with 10 blind Twitter users, we discuss what elements of GIF content should be described and their experiences with GIFs online. The participants compared alternative text descriptions with two other alternative audio formats: (i) the original audio from the GIF source video and (ii) a spoken audio description. We recommend that social media platforms automatically include alt text descriptions for popular GIFs (as Twitter has begun to do), and content producers create audio descriptions to ensure everyone has a rich and emotive experience with GIFs online.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3417027
Student and Teacher Perspectives of Learning ASL in an Online Setting
Garreth W. Tigwell, R. Peiris, Stacey Watson, Gerald M. Garavuso, H. Miller
American Sign Language (ASL) classes are typically held face-to-face to increase interactivity and enhance the learning experience. However, the recent COVID-19 pandemic brought about many changes to course delivery methods, primarily resulting in a move to an online format, which had to occur in a short timeframe. The online format has presented students and teachers with many opportunities and challenges. In this experience report, we reflect on the student and teacher perspectives of learning ASL in an online setting. We use our experience to introduce new online ASL class guidelines and videoconferencing improvements, and to suggest where future research is needed.
Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, October 2020. https://doi.org/10.1145/3373625.3417298