J. Lazar. "Due Process and Primary Jurisdiction Doctrine: A Threat to Accessibility Research and Practice?" Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3241022
Abstract: The Web Content Accessibility Guidelines (WCAG) are the most well-documented and widely accepted set of interface guidelines on the planet, based on empirical research and a participatory process of stakeholder input. A recent case in a U.S. Federal District Court, Robles v. Domino's Pizza LLC, involved a blind individual requesting that Domino's Pizza make its web site and mobile app accessible to people with disabilities, using the WCAG. The court ruled that, under the legal concepts of due process and the primary jurisdiction doctrine, the plaintiff lost the case simply for asking for the WCAG. This ruling minimizes the importance of evidence-based accessibility research and guidelines. The poster provides background on the case, describes a preliminary analysis of related cases, and discusses implications for accessibility researchers.
{"title":"Session details: Session 7: Enhancing Navigation","authors":"Anke M. Brock","doi":"10.1145/3284381","DOIUrl":"https://doi.org/10.1145/3284381","url":null,"abstract":"","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134311209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Constantin, J. Hourcade. "Toward a Technology-based Tool to Support Idea Generation during Participatory Design with Children with Autism Spectrum Disorders." Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3240995
Abstract: Our research explores the development of a novel technology-based prototype to support children and designers during brainstorming, one of the most challenging activities within Participatory Design (PD). This paper describes a proof-of-concept prototype for a tool that aims to empower children with Autism Spectrum Disorders (ASD) during PD, maximising both their contributions to the design and their own benefits. Preliminary results reveal that the prototype has the potential to reduce anxiety in children with ASD and to help unlock their creativity.
Lee Stearns, Leah Findlater, Jon E. Froehlich. "Applying Transfer Learning to Recognize Clothing Patterns Using a Finger-Mounted Camera." Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3241015
Abstract: Color identification tools do not identify visual patterns or allow users to quickly inspect multiple locations, which are both important for identifying clothing. We are exploring the use of a finger-based camera that allows users to query clothing colors and patterns by touch. Previously, we demonstrated the feasibility of this approach using a small, highly controlled dataset and combining two image classification techniques commonly used for object recognition. Here, to improve scalability and robustness, we collect a dataset of fabric images from online sources and apply transfer learning to train an end-to-end deep neural network to recognize visual patterns. This new approach achieves 92% accuracy in a general case and 97% when tuned for images from a finger-mounted camera.
I. Shaffer. "Exploring the Performance of Facial Expression Recognition Technologies on Deaf Adults and Their Children." Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3240986
Abstract: Facial and head movements have important linguistic roles in American Sign Language (ASL) and other sign languages. Without being properly trained, both human observers and existing emotion recognition tools will misinterpret ASL linguistic facial expressions. In this study, we capture over 2,000 photographs of 15 participants: five hearing, five Deaf, and five Children of Deaf Adults (CODAs). We then analyze the performance of six commercial facial expression recognition services on these photographs. Key observations include poor face detection rates for Deaf participants, more accurate emotion recognition for Deaf and CODA participants, and frequent misinterpretation of ASL linguistic markers as negative emotions. This suggests a need to include data from ASL users in the training sets for these technologies.
Lee Stearns, Anja Thieme. "Automated Person Detection in Dynamic Scenes to Assist People with Vision Impairments: An Initial Investigation." Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3241017
Abstract: We propose a computer vision system that can automatically detect people in dynamic real-world scenes, enabling people with vision impairments to have more awareness of, and interactions with, other people in their surroundings. As an initial step, we investigate the feasibility of four camera systems that vary in their placement, field-of-view, and image distortion for: (i) capturing people generally; and (ii) detecting people via a specific person-pose estimator. Based on our findings, we discuss future opportunities and challenges for detecting people in dynamic scenes, and for communicating that information to visually impaired users.
Daniella Briotto Faustino, A. Girouard. "Understanding Authentication Method Use on Mobile Devices by People with Vision Impairment." Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3236342
Abstract: Passwords help people prevent unauthorized access to their personal devices but come with challenges, such as memorability and shoulder-surfing attacks. Little is known about how people with vision impairment assure their digital security in mobile contexts. We conducted an online survey to understand their strategies for remembering passwords, their perceptions of authentication methods, and their self-assessed ability to keep their digital information safe. We collected answers from 325 people who are blind or have low vision, from 12 countries, and found that most use familiar names and numbers to create memorable passwords, and that the majority consider fingerprint recognition the most secure and accessible user authentication method and PINs the least secure. This paper presents our survey results and provides insights for designing better authentication methods for people with vision impairment.
A. Abdolrahmani, Ravi Kuber, Stacy M. Branham. ""Siri Talks at You": An Empirical Investigation of Voice-Activated Personal Assistant (VAPA) Usage by Individuals Who Are Blind." Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3236344
Abstract: Voice-activated personal assistants (VAPAs)--like Amazon Echo or Apple Siri--offer considerable promise to individuals who are blind due to widespread adoption of these non-visual interaction platforms. However, studies have yet to focus on the ways in which these technologies are used by individuals who are blind, along with whether barriers are encountered during the process of interaction. To address this gap, we interviewed fourteen legally-blind adults with experience of home and/or mobile-based VAPAs. While participants appreciated the access VAPAs provided to inaccessible applications and services, they faced challenges relating to the input, responses from VAPAs, and control of information presented. User behavior varied depending on the situation or context of the interaction. Implications for design are suggested to support inclusivity when interacting with VAPAs. These include accounting for privacy and situational factors in design, examining ways to support concerns over trust, and synchronizing presentation of visual and non-visual cues.
David Bar-El, Thomas Large, Lydia Davison, M. Worsley. "Tangicraft." Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3241031
Abstract: With millions of players worldwide, Minecraft has become a rich context for playing, socializing and learning for children. However, as is the case with many video games, players must rely heavily on vision to navigate and participate in the game. We present our Work-In-Progress on Tangicraft, a multimodal interface designed to empower visually impaired children to play and collaborate around Minecraft. Our work includes two strands of prototypes. The first is a haptic sensing wearable. The second is a set of tangible blocks that communicate with the game environment using webcam-enabled codes.
Anna M. H. Abrams, Carl Fridolin Weber, P. Beckerle. "Design and Testing of Sensors for Text Entry and Mouse Control for Individuals with Neuromuscular Diseases." Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, 2018. https://doi.org/10.1145/3234695.3241012
Abstract: For individuals with a motor disorder of neuromuscular origin, computer usage can be challenging. Due to different medical conditions, alternative input methods such as speech or eye tracking are not an option. Here, piezo sensors, inertial measurement units, and force-sensing resistors are used to develop input devices that can substitute for mouse and keyboard. The devices are tested in a case study with one potential user with ataxia. Future user studies will deliver additional insights into users' specific needs and further improve the devices.