Sound Cells: Rendering Visual and Braille Music in the Browser
Fabiha Ahmed, Dennis Kuzminer, Michael Zachor, Lisa Ye, Rachel Josepho, W. Payne, Amy Hurst
DOI: 10.1145/3441852.3476555
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2021), published 2021-10-17.

Many blind musicians and composers read and write music using braille. Yet braille music is not as widely available as print (visual) music, sighted collaborators and educators do not read braille music, and the workflows and toolchains for converting between print and braille music are complex. In this research, we present Sound Cells, a music notation system that simultaneously outputs visual and braille notation and provides audio feedback as a user writes music with text. We share findings from a design probe in which two experienced blind musicians notated music using Sound Cells and reflected on it in the context of their current notation practices. Finally, we highlight music navigation and output score customization as opportunities for further study.
Designing Tools for High-Quality Alt Text Authoring
Kelly Avery Mack, Edward Cutrell, Bongshin Lee, M. Morris
DOI: 10.1145/3441852.3471207

Alternative (alt) text provides access to descriptions of digital images for people who use screen readers. While prior work studied screen reader users' (SRUs') preferences about alt text and automatic alt text (i.e., alt text generated by artificial intelligence), little work examined the alt text author's experience composing or editing these descriptions. We built two types of prototype interfaces for two tasks: authoring alt text and providing feedback on automatic alt text. Through combined interview-usability testing sessions with alt text authors and interviews with SRUs, we tested the effectiveness of our prototypes in the context of Microsoft PowerPoint. Our results suggest that authoring interfaces that support authors in choosing what to include in their descriptions result in higher quality alt text. The feedback interfaces highlighted considerable differences in the perceptions of authors and SRUs regarding "high-quality" alt text. Finally, authors crafted significantly lower quality alt text when starting from the automatic alt text compared to starting from a blank box. We discuss the implications of these results on applications that support alt text.
Designing a Pictorial Communication Web Application With People With Intellectual Disability
Nicholas L Robertson, Filip Bircanin, Laurianne Sitbon
DOI: 10.1145/3441852.3476527

This paper presents the first iteration of the design of a web application that supports its users in accessing and arranging pictures as a non-linguistic way of supporting communication. We motivate our initial design by examining related work on Augmentative and Alternative Communication (AAC). We present our reflections on the use of a working prototype by two minimally verbal users with intellectual disability and how this can inform future work.
Enabling meaningful use of AI-infused educational technologies for children with blindness: Learnings from the development and piloting of the PeopleLens curriculum
C. Morrison, Edward Cutrell, Martin Grayson, Elisabeth R. B. Becker, Vasiliki Kladouchou, L. Pring, Katherine Jones, R. Marques, Camilla Longden, A. Sellen
DOI: 10.1145/3441852.3471210

Novel AI-infused educational technologies can give children with blindness the opportunity to explore concepts learned incidentally through vision by using alternative perceptual modalities. However, more effort is needed to support the meaningful use of such technological innovations for evaluations at scale and later widespread adoption. This paper presents the development and pilot evaluation of a curriculum to enable educators to support blind learners' self-exploration of social attention using the PeopleLens technology. We reflect on these learnings to present four design guidelines for creating curricula aimed at enabling meaningful use. We then consider how formulations of "success" by our participants can help us think about ways of assessing efficacy in low-incidence disability groups. We conclude by arguing for our community to widen the scope of discourse around assistive technologies from design and engineering to include supporting their meaningful use.
Designing a Podcast Platform for Deaf and Hard of Hearing Users
Becca Dingman, Garreth W. Tigwell, Kristen Shinohara
DOI: 10.1145/3441852.3476523

Listening to podcasts is a popular way for people to spend their time. However, little focus has been given to how accessible podcast platforms are for Deaf and Hard-of-Hearing (DHH) people. We present a DHH-centered accessible podcast platform prototype developed with user-centered design. Our proposed design was constructed through semi-structured interviews (n=7) and prototype design feedback sessions (n=8) with DHH users. We encourage podcast platform designers to adopt our design recommendations to make podcasts more inclusive for DHH people and recommend how podcast hosts can make their shows more accessible.
Inverse Color Contrast Checker: Automatically Suggesting Color Adjustments that meet Contrast Requirements on the Web
F. Sandnes
DOI: 10.1145/3441852.3476529

The effect of low contrast between text and background on readers with low vision is relatively well understood, and many tools exist to help web designers check contrast limits. Most of these tools identify contrast problems but give limited advice on how to rectify them. Moreover, website accessibility audits reveal that insufficient color contrast is still a recurring issue in practice. A framework was therefore developed that automatically proposes color adjustments for problematic text-background color pairs on web pages. These suggestions adhere to contrast requirements and are aligned with the visual design profile. The framework lets developers visually inspect the suggestions and amend the color definitions in their projects.
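Contrast checkers of the kind described above typically test against the WCAG 2.x contrast-ratio formula, which compares the relative luminance of the foreground and background colors. The paper does not specify its internal computation; the sketch below assumes the standard WCAG definition:

```python
def srgb_to_linear(c: float) -> float:
    """Convert one sRGB channel (0-1) to linear light, per WCAG 2.x."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """Relative luminance of an sRGB color given as 0-255 integers."""
    r, g, b = (srgb_to_linear(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1;
# WCAG AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A tool that suggests adjustments, rather than merely flagging failures, would search the color space near the original pair for the closest colors whose ratio meets the 4.5:1 (or 3:1 for large text) threshold.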
Accessibility Support in Web Frameworks
Michael Longley, Yasmine N. Elglaly
DOI: 10.1145/3441852.3476531

Despite the existence of accessibility testing tools, software is still largely inaccessible, mainly due to a lack of awareness among developers and issues with the existing tools [14, 18]. This motivated us to evaluate the accessibility support of development tools that do not require specific accessibility knowledge, such as web frameworks. We tested the accessibility support of three JavaScript web frameworks: Angular, React, and Vue. For each framework, we built a web application with 32 pages, each of which violated a single accessibility guideline. We found that only React generated a warning for one of the accessibility violations, namely a missing label for non-text content. The remaining accessibility violations went unnoticed by all three frameworks.
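The one violation that did trigger a warning, non-text content with no label, is also the kind of check that is simple to automate statically. As an illustration only (this is not the authors' test harness or any framework's built-in check), a minimal scanner for <img> elements missing an alt attribute can be written with Python's standard html.parser:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags lacking an alt attribute (WCAG SC 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.violations = []  # (line, column) of each offending tag

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png"><img src="x.png" alt="Logo"></p>')
print(len(checker.violations))  # 1 -- only the first <img> has no alt
```

That such an inexpensive check is the only one any of the three frameworks surfaced underscores the paper's point about the gap in built-in accessibility support.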
Designing an App to Help Individuals with Intellectual and Developmental Disabilities to Recognize Abuse
Thomas Howard, K. Venkatasubramanian, Jeanine L. M. Skorinko, Pauline Bosma, J. Mullaly, Brian Kelly, Deborah Lloyd, Maria Wishart, Emiton Alves, N. Jutras, Mariah Freark, Nancy A. Alterio
DOI: 10.1145/3441852.3471217

In the US, the abuse of individuals with intellectual and developmental disabilities (I/DD) is at epidemic proportions; however, the reporting of such abuse has been severely lacking. It has been found that individuals with I/DD are more aware of when and how to report abuse if they have received abuse prevention training. Consequently, in this paper we present the design of a mobile-computing app called Recognize to teach individuals with I/DD about abuse. Our research team is diverse, with both individuals with I/DD and neurotypical individuals. We leveraged this diversity by utilizing a co-design process with our team members who live with I/DD. Our team developed three initial prototypes of the app and performed a qualitative, within-group user study with six separate individuals with I/DD who are themselves experienced teachers to other individuals with I/DD. We found that, overall, the app would be viable for use by individuals with I/DD. We end the paper with a brief discussion of the implications of our findings toward building a full prototype of the app.
Landscape Analysis of Commercial Visual Assistance Technologies
Emma Sadjo, Leah Findlater, Abigale Stangl
DOI: 10.1145/3441852.3476521

We present a landscape analysis of commercially available visual assistance technologies (VATs) that provide auditory descriptions of image and video content found online, as well as of photos taken by people who are blind and have visual questions. Through structured web-based searches, we identified 20 VATs released by 17 companies, and analyzed how these companies communicate to users about their technical innovation and service offerings. Our results can orient new researchers, UX professionals, and developers to trends within commercial VAT development.
SciA11y: Converting Scientific Papers to Accessible HTML
Lucy Lu Wang, Isabel Cachola, Jonathan Bragg, Evie (Yu-Yen) Cheng, Chelsea Hess Haupt, Matt Latzke, Bailey Kuehl, Madeleine van Zuylen, Linda M. Wagner, Daniel S. Weld
DOI: 10.1145/3441852.3476545

We present SciA11y, a system that renders inaccessible scientific paper PDFs into HTML. SciA11y uses machine learning models to extract and understand the content of scientific PDFs, and reorganizes the resulting paper components into a form that better supports skimming and scanning for blind and low vision (BLV) readers. SciA11y adds navigation features such as tagged headings, a table of contents, and bidirectional links between inline citations and references, which allow readers to resolve citations without losing their context. A set of 1.5 million open access papers has been processed and is available at https://scia11y.org/. This system is a first step in addressing scientific PDF accessibility, and may significantly improve the experience of paper reading for BLV users.
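The bidirectional citation links described above can be realized with paired HTML anchors: the inline citation links forward to the reference entry, and the entry links back to the citation. A minimal sketch of that pattern follows; the id scheme and markup here are illustrative assumptions, not SciA11y's actual output format:

```python
def link_citation(cite_id: str, ref_text: str) -> tuple:
    """Build a paired inline citation and reference-list entry so a
    screen reader user can jump to the reference and back without
    losing their place in the running text."""
    inline = f'<a id="cite-{cite_id}" href="#ref-{cite_id}">[{cite_id}]</a>'
    entry = (f'<li id="ref-{cite_id}">{ref_text} '
             f'<a href="#cite-{cite_id}">(back to citation)</a></li>')
    return inline, entry

inline, entry = link_citation("3", "Smith et al. 2020. Example Paper.")
print(inline)  # <a id="cite-3" href="#ref-3">[3]</a>
```

Because each anchor pair shares an id, the back-link restores the reader's context after they consult a reference, which is the navigation property the abstract highlights.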