Meeting Participants with Intellectual Disabilities during COVID-19 Pandemic: Challenges and Improvisation
L. Guedes, M. Landoni
With the COVID-19 pandemic, we all suffered from several restrictions and measures regulating interaction with one another. We had to wear masks, use hand sanitizer, hold open-air meetings, feel a combination of excitement and frustration, and eventually depend on online video calls. The combination of these additional requirements and limitations, while necessary, affected how we could involve users in the different stages of design, and it profoundly hindered our chances of meeting in person with people with temporary or permanent disabilities. In our project, involving people with intellectual disabilities in the museum context, we also had to deal with museums being closed and physical exhibitions being canceled. At the same time, guardians and caregivers often turned to a stricter interpretation of anti-COVID measures to protect people with intellectual disabilities. This paper discusses these challenges and shares the lessons we learned about coping with challenging and unpredictable situations through improvisation.
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), October 2021. https://doi.org/10.1145/3441852.3476566
Fostering collaboration with asymmetric roles in accessible programming environments for children with mixed-visual-abilities
Filipa Rocha, Guilherme Guimarães, David Gonçalves, A. Pires, L. Abreu, T. Guerreiro
Introducing computational thinking training in early childhood potentiates cognitive development and better prepares children to live and prosper in a heavily computational future society. Programming environments are now widely adopted in classrooms to teach programming concepts. However, these tools often rely on visual interaction, making them inaccessible to children with visual impairments. Programming environments in general are also usually designed to promote individual experiences, wasting the potential benefits of collaborative group activities. We propose the design of a programming environment that leverages asymmetric roles to foster collaborative computational thinking activities for children with visual impairments, in particular in mixed-visual-ability classes. The multimodal system comprises tangible blocks and auditory feedback, and children have to collaborate to program a robot. We conducted a remote online study, collecting valuable feedback on the limitations and opportunities for future work aimed at potentiating education and social inclusion.
{"title":"Fostering collaboration with asymmetric roles in accessible programming environments for children with mixed-visual-abilities","authors":"Filipa Rocha, Guilherme Guimarães, David Gonçalves, A. Pires, L. Abreu, T. Guerreiro","doi":"10.1145/3441852.3476553","DOIUrl":"https://doi.org/10.1145/3441852.3476553","url":null,"abstract":"Introduction of computational thinking training in early childhood potentiates cognitive development and better prepares children to live and prosper in a future heavily computational society. Programming environments are now widely adopted in classrooms to teach programming concepts. However, these tools are often reliant on visual interaction, making them inaccessible to children with visual impairments. Also, programming environments in general are usually designed to promote individual experiences, wasting the potential benefits of group collaborative activities. We propose the design of a programming environment that leverages asymmetric roles to foster collaborative computational thinking activities for children with visual impairments, in particular mixed-visual-ability classes. The multimodal system comprises the use of tangible blocks and auditory feedback, while children have to collaborate to program a robot. We conducted a remote online study, collecting valuable feedback on the limitations and opportunities for future work, aiming to potentiate education and social inclusion.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129779770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling meaningful use of AI-infused educational technologies for children with blindness: Learnings from the development and piloting of the PeopleLens curriculum
C. Morrison, Edward Cutrell, Martin Grayson, Elisabeth R. B. Becker, Vasiliki Kladouchou, L. Pring, Katherine Jones, R. Marques, Camilla Longden, A. Sellen
Novel AI-infused educational technologies can give children with blindness the opportunity to explore concepts learned incidentally through vision by using alternative perceptual modalities. However, more effort is needed to support the meaningful use of such technological innovations for evaluations at scale and later widespread adoption. This paper presents the development and pilot evaluation of a curriculum to enable educators to support blind learners’ self-exploration of social attention using the PeopleLens technology. We reflect on these learnings to present four design guidelines for creating curricula aimed to enable meaningful use. We then consider how formulations of “success” by our participants can help us think about ways of assessing efficacy in low-incidence disability groups. We conclude by arguing for our community to widen the scope of discourse around assistive technologies from design and engineering to include supporting their meaningful use.
{"title":"Enabling meaningful use of AI-infused educational technologies for children with blindness: Learnings from the development and piloting of the PeopleLens curriculum","authors":"C. Morrison, Edward Cutrell, Martin Grayson, Elisabeth R. B. Becker, Vasiliki Kladouchou, L. Pring, Katherine Jones, R. Marques, Camilla Longden, A. Sellen","doi":"10.1145/3441852.3471210","DOIUrl":"https://doi.org/10.1145/3441852.3471210","url":null,"abstract":"Novel AI-infused educational technologies can give children with blindness the opportunity to explore concepts learned incidentally through vision by using alternative perceptual modalities. However, more effort is needed to support the meaningful use of such technological innovations for evaluations at scale and later wide-spread adoption. This paper presents the development and pilot evaluation of a curriculum to enable educators to support blind learners’ self-exploration of social attention using the PeopleLens technology. We reflect on these learnings to present four design guidelines for creating curricula aimed to enable meaningful use. We then consider how formulations of “success” by our participants can help us think about ways of assessing efficacy in low-incidence disability groups. We conclude by arguing for our community to widen the scope of discourse around assistive technologies from design and engineering to include supporting their meaningful use.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122418938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing a Podcast Platform for Deaf and Hard of Hearing Users
Becca Dingman, Garreth W. Tigwell, Kristen Shinohara
Listening to podcasts is a popular way for people to spend their time. However, little focus has been given to how accessible podcast platforms are for Deaf and Hard-of-Hearing (DHH) people. We present a DHH-centered accessible podcast platform prototype developed with user-centered design. Our proposed design was constructed through semi-structured interviews (n=7) and prototype design feedback sessions (n=8) with DHH users. We encourage podcast platform designers to adopt our design recommendations to make podcasts more inclusive for DHH people and recommend how podcast hosts can make their shows more accessible.
{"title":"Designing a Podcast Platform for Deaf and Hard of Hearing Users","authors":"Becca Dingman, Garreth W. Tigwell, Kristen Shinohara","doi":"10.1145/3441852.3476523","DOIUrl":"https://doi.org/10.1145/3441852.3476523","url":null,"abstract":"Listening to podcasts is a popular way for people to spend their time. However, little focus has been given to how accessible podcast platforms are for Deaf and Hard-of-Hearing (DHH) people. We present a DHH-centered accessible podcast platform prototype developed with user-centered design. Our proposed design was constructed through semi-structured interviews (n=7) and prototype design feedback sessions (n=8) with DHH users. We encourage podcast platform designers to adopt our design recommendations to make podcasts more inclusive for DHH people and recommend how podcast hosts can make their shows more accessible.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127191669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Landscape Analysis of Commercial Visual Assistance Technologies
Emma Sadjo, Leah Findlater, Abigale Stangl
We present a landscape analysis of commercially available visual assistance technologies (VATs) that provide auditory descriptions of image and video content found online, as well as those taken by people who are blind and have visual questions. Through structured web-based searches, we identified 20 VATs released by 17 companies, and analyzed how these companies communicate to users about their technical innovation and service offerings. Our results can orient new researchers, UX professionals, and developers to trends within commercial VAT development.
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), October 2021. https://doi.org/10.1145/3441852.3476521
Designing a Pictorial Communication Web Application With People With Intellectual Disability
Nicholas L Robertson, Filip Bircanin, Laurianne Sitbon
This paper presents the first iteration of the design of a web application that supports its users in accessing and arranging pictures as a non-linguistic way of supporting communication. We motivate our initial design by examining related work on Augmentative Alternative Communication (AAC). We present our reflections on the use of a working prototype by two minimally-verbal users with intellectual disability and how this can inform future work.
{"title":"Designing a Pictorial Communication Web Application With People With Intellectual Disability","authors":"Nicholas L Robertson, Filip Bircanin, Laurianne Sitbon","doi":"10.1145/3441852.3476527","DOIUrl":"https://doi.org/10.1145/3441852.3476527","url":null,"abstract":"This paper presents the first iteration of the design of a web application which supports its users to access and arrange pictures as a non-linguistic way of supporting communication. We motivate our initial design by examining related work on Augmentative Alternative Communication (AAC). We present our reflections on the use of a working prototype by two minimally-verbal users with intellectual disability and how this can inform future work.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123508215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accessibility Support in Web Frameworks
Michael Longley, Yasmine N. Elglaly
Despite the existence of accessibility testing tools, software is still largely inaccessible, mainly due to a lack of awareness among developers and issues with existing tools [14, 18]. This motivated us to evaluate the accessibility support of development tools that do not require specific accessibility knowledge, such as web frameworks. We tested the accessibility support of three JavaScript web frameworks: Angular, React, and Vue. For each of the three frameworks, we built a web application with 32 pages, each of which violated a single accessibility guideline. We found that only React generated a warning for one of the accessibility violations, namely the lack of a label for non-text content. The remaining accessibility violations went unnoticed by all three frameworks.
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), October 2021. https://doi.org/10.1145/3441852.3476531
Inverse Color Contrast Checker: Automatically Suggesting Color Adjustments that meet Contrast Requirements on the Web
F. Sandnes
The effect of low contrast between text and background on people with low vision is relatively well understood, and many tools exist to help web designers check contrast limits. Most of these tools identify contrast problems but give limited advice on how to rectify them. Moreover, website accessibility audits reveal that insufficient color contrast is still a recurring issue in practice. A framework was therefore developed that automatically proposes color adjustments to problematic text-background color pairs on web pages. The suggestions adhere to contrast requirements and are aligned with the visual design profile. The framework allows developers to visually inspect the suggestions and amend the color definitions in their projects.
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), October 2021. https://doi.org/10.1145/3441852.3476529
Regulating Personal Cameras for Disabled People and People with Deafblindness: Implications for HCI and Accessible Computing
Sarah L. Woodin, A. Theil
In this experience paper we consider the relevance of social policy to the design of personal cameras for accessibility. As researchers with backgrounds in disability studies, accessible computing, AI-based systems, HCI, and disability policy, we reflect broadly on our experiences of developing assistive technology for and with people with deafblindness. Designers of assistive technology usually face few restrictions on what may be investigated, provided certain ethical and legal standards are met. However, deafblind and disabled people experience many more barriers in how the products of design may be accessed and how they may be used. Social policy is one of the mediators that governs the allocation of resources and benefits, especially for disabled people. We discuss these issues for researchers in the field using the example of personal cameras, an area of high policy intervention. Awareness of policy is limited in HCI research, and we argue that it has the potential to add focus to work on design and assistive devices for disabled people. Designers have an important role to play in this process.
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '21), October 2021. https://doi.org/10.1145/3441852.3476471
American Sign Language Video Anonymization to Support Online Participation of Deaf and Hard of Hearing Users
Sooyeon Lee, Abraham Glasser, Becca Dingman, Zhaoyang Xia, Dimitris N. Metaxas, C. Neidle, Matt Huenerfauth
Without a commonly accepted writing system for American Sign Language (ASL), Deaf or Hard of Hearing (DHH) ASL signers who wish to express opinions or ask questions online must post a video of their signing, if they prefer not to use written English, a language in which they may feel less proficient. Since the face conveys essential linguistic meaning, the face cannot simply be removed from the video in order to preserve anonymity. Thus, DHH ASL signers cannot easily discuss sensitive, personal, or controversial topics in their primary language, limiting engagement in online debate or inquiries about health or legal issues. We explored several recent attempts to address this problem through development of “face swap” technologies to automatically disguise the face in videos while preserving essential facial expressions and natural human appearance. We presented several prototypes to DHH ASL signers (N=16) and examined their interests in and requirements for such technology. After viewing transformed videos of other signers and of themselves, participants evaluated the understandability, naturalness of appearance, and degree of anonymity protection of these technologies. Our study revealed users’ perception of key trade-offs among these three dimensions, factors that contribute to each, and their views on transformation options enabled by this technology, for use in various contexts. Our findings guide future designers of this technology and inform selection of applications and design features.
{"title":"American Sign Language Video Anonymization to Support Online Participation of Deaf and Hard of Hearing Users","authors":"Sooyeon Lee, Abraham Glasser, Becca Dingman, Zhaoyang Xia, Dimitris N. Metaxas, C. Neidle, Matt Huenerfauth","doi":"10.1145/3441852.3471200","DOIUrl":"https://doi.org/10.1145/3441852.3471200","url":null,"abstract":"Without a commonly accepted writing system for American Sign Language (ASL), Deaf or Hard of Hearing (DHH) ASL signers who wish to express opinions or ask questions online must post a video of their signing, if they prefer not to use written English, a language in which they may feel less proficient. Since the face conveys essential linguistic meaning, the face cannot simply be removed from the video in order to preserve anonymity. Thus, DHH ASL signers cannot easily discuss sensitive, personal, or controversial topics in their primary language, limiting engagement in online debate or inquiries about health or legal issues. We explored several recent attempts to address this problem through development of “face swap” technologies to automatically disguise the face in videos while preserving essential facial expressions and natural human appearance. We presented several prototypes to DHH ASL signers (N=16) and examined their interests in and requirements for such technology. After viewing transformed videos of other signers and of themselves, participants evaluated the understandability, naturalness of appearance, and degree of anonymity protection of these technologies. Our study revealed users’ perception of key trade-offs among these three dimensions, factors that contribute to each, and their views on transformation options enabled by this technology, for use in various contexts. Our findings guide future designers of this technology and inform selection of applications and design features.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"65 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131293576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}