All papers below appeared in the Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, published 2022-10-22.

CodeWalk: Facilitating Shared Awareness in Mixed-Ability Collaborative Software Development
Venkatesh Potluri, Maulishree Pandey, Andrew Begel, Michael Lawrence Barnett, Scott Reitherman
doi:10.1145/3517428.3544812
COVID-19 accelerated the trend toward remote software development, increasing the need for tightly-coupled synchronous collaboration. Existing tools and practices impose high coordination overhead on blind or visually impaired (BVI) developers, impeding their abilities to collaborate effectively, compromising their agency, and limiting their contribution. To make remote collaboration more accessible, we created CodeWalk, a set of features added to Microsoft’s Live Share VS Code extension, for synchronous code review and refactoring. We chose design criteria to ease the coordination burden felt by BVI developers by conveying sighted colleagues’ navigation and edit actions via sound effects and speech. We evaluated our design in a within-subjects experiment with 10 BVI developers. Our results show that CodeWalk streamlines the dialogue required to refer to shared workspace locations, enabling participants to spend more time contributing to coding tasks. This design offers a path towards enabling BVI and sighted developers to collaborate on more equal terms.
Exploring Motor-impaired Programmers’ Use of Speech Recognition
Sadia Nowrin, Patricia Ordóñez, K. Vertanen
doi:10.1145/3517428.3550392
Typing programs can be difficult or impossible for programmers with motor impairments. Programming by voice can be a promising alternative. In this research, we explored the perceptions of motor-impaired programmers with regard to programming by voice. We learned that leveraging existing voice-based programming platforms to speak code can be more complicated than it needs to be. The interviewees expressed their frustration with long hours of memorizing unnatural commands in order to enter code by voice. In addition, we found a preference for being able to speak code in a flexible manner without requiring strict adherence to a grammar.
How people who are deaf, Deaf, and hard of hearing use technology in creative sound activities
Keita Ohshiro, M. Cartwright
doi:10.1145/3517428.3550396
Creative sound activities, such as music playing and audio engineering, are said to have been democratized with the development of technology. Yet, the use of technology in creative sound activities by people who are deaf, Deaf, and hard of hearing (DHH) has been underexplored by the research community. To address this gap, we conducted an online survey with 50 DHH participants to understand their use of technology and barriers they face in their creative sound activities. We find DHH people use four types of technology — hearing devices, sound manipulation, sound visualization, and speech-to-text — for three purposes — to improve sound perception via auditory and visual means, to avoid hearing fatigue, and to better communicate with hearing people. We also find their barriers to technology: unknown availability, limited options, and limitations that technology can solve. We discuss opportunities for more inclusive design specific to DHH people’s creative sound activities, as well as facilitating access to information about technology.
Flexible Activity Tracking for Older Adults Using Mobility Aids — An Exploratory Study on Automatically Identifying Movement Modality
Dimitri Vargemidis, K. Gerling, L. Geurts, V. Abeele
doi:10.1145/3517428.3550371
Wearable activity trackers are inaccessible to older adults who use mobility aids (e.g., walker, wheelchair), because the accuracy of trackers drops considerably for such movement modalities (MMs). As an initial step to address this problem, we implemented and tested a minimum distance classifier to automatically identify the used MM out of seven modalities, including movement with or without a mobility aid, and no movement. Depending on the test setup, our classifier achieves accuracies between 82% and 100%. These findings can be leveraged in future work to combine the classifier with algorithms tailored to each mobility aid to make activity trackers accessible to users with limited mobility.
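The minimum distance classifier mentioned in this abstract is, in essence, nearest-centroid classification: each movement modality is represented by the mean of its training feature vectors, and a new sample is assigned to the modality whose centroid is closest. A minimal sketch of that idea (function names and synthetic features are illustrative, not taken from the paper):

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one centroid (mean feature vector) per movement modality.

    X: (n_samples, n_features) training features, e.g. from accelerometer windows.
    y: (n_samples,) modality labels.
    Returns the sorted unique labels and a (n_labels, n_features) centroid matrix.
    """
    labels = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in labels])
    return labels, centroids

def predict_modality(X, labels, centroids):
    """Assign each sample to the modality with the nearest (Euclidean) centroid."""
    # Pairwise distances: (n_samples, n_labels)
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return labels[np.argmin(dists, axis=1)]

# Tiny synthetic example: two well-separated "modalities".
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
y_train = np.array([0, 0, 1, 1])
labels, centroids = fit_centroids(X_train, y_train)
pred = predict_modality(np.array([[0.0, 0.5], [10.0, 10.5]]), labels, centroids)
```

In practice the features would be statistics computed over windows of wearable sensor data; the paper's actual feature set and distance metric are not specified here.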
Accessible Communication and Materials in Higher Education
Kelly Avery Mack
doi:10.1145/3517428.3550408
Students with disabilities face numerous access barriers in higher education institutions. For example, many students struggle to receive the accommodations that they legally have a right to, and many course materials and tools are inaccessible (e.g., textbooks, required software, slide decks). Consequently, students with disabilities drop out of college at a higher rate than nondisabled students. In this dissertation, I aim to improve two core areas of inaccessibility for students with disabilities. First, I will learn about the common issues that arise when three stakeholders (disabled students, professors, and people working in disability service offices) work to fulfill technology-focused accommodations (e.g., slides, IDEs, lecture videos) for a student. Through this two-part survey and interview/co-design study, I will develop design recommendations around how technology can better support this process. Second, I will apply techniques like optimization and natural language processing to build tools to identify and automatically repair common accessibility issues in a ubiquitous tool for teaching across departments: slide show presentations. By conducting this work, I will contribute software tools and design recommendations that will support disabled students in obtaining an accessible education.
Challenges and Opportunities in Creating an Accessible Web Application for Learning Organic Chemistry
Allyson Yu
doi:10.1145/3517428.3563284
While students with disabilities demonstrate high interest in STEM during the transition from high school to college, their representation in STEM decreases throughout postsecondary education and into the workforce [13]. Organic chemistry, in particular, is a uniquely useful case study for exploring technological accessibility in STEM education due to its heavy reliance on highly visual components such as two- and three-dimensional representations of molecular structures. In addition, many university STEM programs recognize organic chemistry’s reputation as a rigorous “weed-out” course [4]. After a thorough search, we found no organic chemistry educational website that has addressed Level AA accessibility, and therefore none that has met the Web Content Accessibility Guidelines (WCAG 2.0). In this study, we investigated students’ preferences for accessibility features while learning organic chemistry. We then explored “webORA,” a web-based application that allows the user to interact with a 3D molecular animation with subtitles that describe the reaction progression. We also evaluated the beta version of webORA by conducting user testing with users of multiple skill levels. Lastly, we performed a manual accessibility audit of webORA.
Social Access and Representation for Autistic Adult Livestreamers
T. Mok, Anthony Tang, Adam Mccrimmon, L. Oehlberg
doi:10.1145/3517428.3550400
We interviewed 10 autistic livestreamers to understand their motivations for livestreaming on Twitch. Our participants explained that streaming helped them fulfill social desires by: supporting them in making meaningful social connections with others; giving them a safe space to practice social skills like “small talk”; and empowering them to be autistic role models and to share their true selves. This work offers an early report on how autistic individuals leverage livestreaming as a beneficial social platform while struggling with audience expectations.
Investigating Sign Language Interpreter Rendering and Guiding Methods in Virtual Reality 360-Degree Content
C. Anderton
doi:10.1145/3517428.3563373
As sign languages are often the native languages for members of the Deaf community, text captioning can be inaccessible, highlighting the importance of sign language interpretation. This research explores sign language in virtual reality 360-degree three-degrees-of-freedom videos, examining two rendering modes, fixed-position and always-visible, and two visual guiding methods, arrows and radar. Findings from testing with eight participants indicate that fixed-position rendering provides participants with a greater sense of presence than always-visible rendering, whilst always-visible rendering produces less of a blocking effect. Arrows appear more usable than radar for visually guiding participants to active speakers, including providing a higher level of sign language understanding. Future research is needed to validate these findings with six-degrees-of-freedom content.
Inter-rater Reliability of Command-Line Web Accessibility Evaluation Tools
Eryn Rachael Kelsey-Adkins, R. Thompson
doi:10.1145/3517428.3550395
This study compares four command-line interface (CLI) web accessibility tools, examining whether one CLI tool is sufficient for automated accessibility evaluation. The four tools were: Axe-core/cli, IBM Equal Access NPM Accessibility Checker (Accessibility Checker), Pa11y-ci, and the A11y Machine. Inter-rater reliability was calculated using Gwet’s agreement coefficient 2 (AC2), and the results indicate very poor reliability between tools.
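The study above uses Gwet’s AC2, the weighted (ordinal) form of Gwet’s chance-corrected agreement statistic. As a hedged illustration of the underlying idea, here is the simpler unweighted AC1 for two raters over nominal categories; the function name is mine, and AC2 generalizes this by introducing category weights, which are omitted here:

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b, categories):
    """Gwet's AC1 chance-corrected agreement for two raters, nominal data.

    AC1 = (pa - pe) / (1 - pe), where pa is observed agreement and
    pe = (1 / (q - 1)) * sum_k pi_k * (1 - pi_k), with pi_k the mean
    proportion of all ratings falling in category k and q the number
    of categories.
    """
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters labeled identically.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Pool both raters' labels to estimate category prevalences pi_k.
    counts = Counter(ratings_a) + Counter(ratings_b)
    pi = [counts.get(k, 0) / (2 * n) for k in categories]
    q = len(categories)
    pe = sum(p * (1 - p) for p in pi) / (q - 1)  # chance agreement
    return (pa - pe) / (1 - pe)

# Perfect agreement yields 1.0; partial agreement falls below it.
perfect = gwet_ac1(["a", "b", "a"], ["a", "b", "a"], ["a", "b"])
partial = gwet_ac1(["a", "a", "b", "b"], ["a", "b", "b", "b"], ["a", "b"])
```

For the ordinal AC2 used in the paper, established implementations (e.g. the R `irrCAC` package) handle the weighting; the sketch here is only meant to show why agreement must be corrected for chance.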
State of the Art in AAC: A Systematic Review and Taxonomy
Humphrey Curtis, Timothy Neate, Carlota Vazquez Gonzalez
doi:10.1145/3517428.3544810
People with complex communication needs (CCNs) can use high-tech augmentative and alternative communication (AAC) devices and systems to compensate for communication difficulties. While many use AAC effectively, much research has highlighted challenges – for instance, high rates of abandonment and solutions which are not appropriate for their end-users. Presently, we lack a detailed survey of this field to comprehend these shortcomings and understand how the accessibility community might direct its efforts to design more effective AAC. In response to this, we conduct a systematic review and taxonomy of high-tech AAC devices and interventions, reporting results from 562 articles identified in the ACM DL and SCOPUS databases. We provide a taxonomical overview of the current state of AAC devices – e.g. their interaction modalities and characteristics. We describe the communities of focus explored, and the methodological approaches used. We contrast our findings with the broader accessibility and HCI literature to delineate future avenues for exploration in light of the current taxonomy, offer a reassessment of the norms and incumbent research methodologies, and present a discourse on the communities of focus for AAC and interventions.