People with low vision face many daily encumbrances. Traditional visual enhancements do not suffice for navigating indoor environments or recognizing objects efficiently. In this paper, we explore how Augmented Reality (AR) can be leveraged to design mobile applications that improve the visual experience and unburden people with low vision. Specifically, we propose a novel automated AR-based annotation tool for detecting and labeling salient objects for assisted indoor navigation applications like NearbyExplorer. NearbyExplorer, which issues audio descriptions of nearby objects to users, relies on a database populated by large teams of volunteers and map-a-thons who manually annotate salient objects in the environment, such as desks, chairs, and low overhead ceilings. This has limited widespread and rapid deployment. Our tool builds on advances in automated object detection, AR labeling, and accurate indoor positioning to provide an automated way to upload object labels and user position to a database, requiring just one volunteer. Moreover, it enables people with low vision to detect and notice surrounding objects quickly using smartphones in various indoor environments.
{"title":"An automated AR-based annotation tool for indoor navigation for visually impaired people","authors":"Pei Du, N. Bulusu","doi":"10.1145/3441852.3476561","DOIUrl":"https://doi.org/10.1145/3441852.3476561","url":null,"abstract":"Low vision people face many daily encumbrances. Traditional visual enhancements do not suffice to navigate indoor environments, or recognize objects efficiently. In this paper, we explore how Augmented Reality (AR) can be leveraged to design mobile applications to improve visual experience and unburden low vision persons. Specifically, we propose a novel automated AR-based annotation tool for detecting and labeling salient objects for assisted indoor navigation applications like NearbyExplorer. NearbyExplorer, which issues audio descriptions of nearby objects to the users, relies on a database populated by large teams of volunteers and map-a-thons to manually annotate salient objects in the environment like desks, chairs, low overhead ceilings. This has limited widespread and rapid deployment. Our tool builds on advances in automated object detection, AR labeling and accurate indoor positioning to provide an automated way to upload object labels and user position to a database, requiring just one volunteer. 
Moreover, it enables low vision people to detect and notice surrounding objects quickly using smartphones in various indoor environments.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132413835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
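The core pipeline the abstract describes — a detector proposes an object label, and the label plus the annotator's indoor position is uploaded to a shared database — can be sketched as below. All names (`ObjectAnnotation`, `upload_annotation`, the confidence threshold) are illustrative assumptions, not the authors' actual implementation:

```python
from dataclasses import dataclass, asdict

@dataclass
class ObjectAnnotation:
    label: str         # object class from the detector, e.g. "desk"
    confidence: float  # detector confidence in [0, 1]
    x: float           # indoor position (metres) where the object was seen
    y: float
    floor: int

# In-memory stand-in for the annotation database the abstract mentions.
annotation_db = []

def upload_annotation(detection, position, min_confidence=0.5):
    """Store a detected object together with the annotator's indoor position.

    Only detections above a confidence threshold are kept, so a single
    volunteer walking through a building produces usable labels.
    """
    label, confidence = detection
    if confidence < min_confidence:
        return None
    record = ObjectAnnotation(label, confidence, *position)
    annotation_db.append(asdict(record))
    return record

# Example: a detector reports a chair at position (3.2, 7.5) on floor 1.
upload_annotation(("chair", 0.91), (3.2, 7.5, 1))
upload_annotation(("glare", 0.20), (3.3, 7.5, 1))  # discarded: low confidence
print(len(annotation_db))
```

In this sketch the database is a list; the real tool would presumably post the same record to a backend shared with applications like NearbyExplorer.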
Michael Duan, Aroosh Kumar, Michael Saugstad, Aileen Zeng, Ilia Savin, Jon E. Froehlich
What do sidewalk accessibility problems look like? How might these problems differ across cities? In this poster paper, we introduce Sidewalk Gallery, an interactive, filterable gallery of over 500,000 crowdsourced sidewalk accessibility images across seven cities in two countries (US and Mexico). Sidewalk Gallery allows users to explore and interactively filter sidewalk images based on five primary accessibility problem types, 35 tag categories, and a 5-point severity scale. While browsing images, users can also provide feedback about data correctness. We envision Sidewalk Gallery as a teaching tool in urban design and accessibility and as a visualization aid for disability advocacy.
{"title":"Sidewalk Gallery: An Interactive, Filterable Image Gallery of Over 500,000 Sidewalk Accessibility Problems","authors":"Michael Duan, Aroosh Kumar, Michael Saugstad, Aileen Zeng, Ilia Savin, Jon E. Froehlich","doi":"10.1145/3441852.3476542","DOIUrl":"https://doi.org/10.1145/3441852.3476542","url":null,"abstract":"What do sidewalk accessibility problems look like? How might these problems differ across cities? In this poster paper, we introduce Sidewalk Gallery, an interactive, filterable gallery of over 500,000 crowdsourced sidewalk accessibility images across seven cities in two countries (US and Mexico). Gallery allows users to explore and interactively filter sidewalk images based on five primary accessibility problem types, 35 tag categories, and a 5-point severity scale. When browsing images, users can also provide feedback about data correctness. We envision Gallery as a tool for teaching in urban design and accessibility and as a visualization aid for disability advocacy.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130527989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Ahmetovic, Antonio Pugliese, S. Mascetti, Valentina Begnozzi, E. Boccalandro, R. Gualtierotti, F. Peyvandi
Play Access is an Android assistive technology that replaces touchscreen interaction with alternative interfaces, enabling people with upper extremity impairments to access mobile games, and providing alternative means of playing mobile games for all. We demonstrate the use of Play Access to support physical therapy for children with haemophilia, with the goal of preventing long-term mobility impairments. To achieve this, we modified Play Access to enable the use of body movements, recognized using wearable sensors, as an alternative interface for playing games. This way, Play Access makes it possible to use existing Android games as exergames, hence better targeting patients’ interest.
{"title":"Rehabilitation through Accessible Mobile Gaming and Wearable Sensors","authors":"D. Ahmetovic, Antonio Pugliese, S. Mascetti, Valentina Begnozzi, E. Boccalandro, R. Gualtierotti, F. Peyvandi","doi":"10.1145/3441852.3476544","DOIUrl":"https://doi.org/10.1145/3441852.3476544","url":null,"abstract":"Play Access is an Android assistive technology that replaces touchscreen interaction with alternative interfaces, enabling people with upper extremity impairments to access mobile games, and providing alternative means of playing mobile games for all. We demonstrate the use of Play Access to support physical therapy for children with haemophilia, with the goal of preventing long-term mobility impairments. To achieve this, we modified Play Access to enable the use of body movements, recognized using wearable sensors, as an alternative interface for playing games. This way, Play Access makes it possible to use existing Android games as exergames, hence better targeting patients’ interest.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114895551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe ongoing work on a robotic walking frame designed to aid patients in an orthopaedic rehabilitation clinic. Walker is able to autonomously drive to patients and then converts into a more traditional walking frame — one that the patient pushes — while still assisting by giving navigation instructions. Walker was designed with a multi-modal user interface so that it can also be used by people with visual, hearing, or speech impairments.
{"title":"Walker - An Autonomous, Interactive Walking Aid","authors":"Johannes Hackbarth, Caspar Jacob","doi":"10.1145/3441852.3476552","DOIUrl":"https://doi.org/10.1145/3441852.3476552","url":null,"abstract":"In this paper, we describe ongoing work about a robotic walker-frame that was designed to aid patients in an orthopaedic rehabilitation clinic. The so-called Walker is able to autonomously drive to patients and then changes into a more traditional walking-frame, i.e. one that has to be pushed by the patient, but it can still help by giving navigation instructions. Walker was designed with a multi-modal user interface in such a way that it can also be used by visually, hearing or speaking impaired people.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122607074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring text comprehension is crucial for evaluating the accessibility of texts in Easy Language. However, accurate and objective comprehension tests tend to be expensive, time-consuming and sometimes difficult to implement for target groups of Easy Language. In this paper, we propose using computer-based testing with touchscreen devices as a means to simplify and accelerate data collection using comprehension tests, and to facilitate experiments with less proficient readers. We demonstrate this by designing and implementing a mobile touchscreen application and validating its effectiveness in an experiment with people with intellectual disabilities. The results suggest that there is no difference in terms of task difficulty between measuring comprehension using the mobile application and a traditional paper-and-pencil test. Moreover, reading times appear to be faster in the application than on paper.
{"title":"Measuring Text Comprehension for People with Reading Difficulties Using a Mobile Application","authors":"Andreas Säuberli","doi":"10.1145/3441852.3476474","DOIUrl":"https://doi.org/10.1145/3441852.3476474","url":null,"abstract":"Measuring text comprehension is crucial for evaluating the accessibility of texts in Easy Language. However, accurate and objective comprehension tests tend to be expensive, time-consuming and sometimes difficult to implement for target groups of Easy Language. In this paper, we propose using computer-based testing with touchscreen devices as a means to simplify and accelerate data collection using comprehension tests, and to facilitate experiments with less proficient readers. We demonstrate this by designing and implementing a mobile touchscreen application and validating its effectiveness in an experiment with people with intellectual disabilities. The results suggest that there is no difference in terms of task difficulty between measuring comprehension using the mobile application and a traditional paper-and-pencil test. Moreover, reading times appear to be faster in the application than on paper.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125773767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People with color vision deficiency (CVD) face several difficulties in performing daily tasks because their perception often falls outside the culturally, linguistically, and educationally modulated majority opinion. This study aims to develop a device that can seamlessly input and output information based on the user's handling actions, and to verify the validity of its support for the daily decision-making of people with CVD. The use case is selecting clothes in a shop: online behavior observation is first conducted to design an assistive method, and a watch-type device is then developed that shows useful information, such as adjusted colors and/or text for people with CVD, on a display at the wrist. An online user interview using first-person and bird's-eye-perspective video is conducted with three CVD participants to verify the validity of the developed device for daily support; the accuracy and effectiveness of the watch-type device are thereby assessed. This study presents a proof-of-concept prototype evaluated in a remote environment, in consideration of the coronavirus pandemic, and discusses daily support for people with CVD.
{"title":"Colorable Band: A Wearable Device to Encourage Daily Decision Making Based on Behavior of Users with Color Vision Deficiency","authors":"A. Uehara","doi":"10.1145/3441852.3476570","DOIUrl":"https://doi.org/10.1145/3441852.3476570","url":null,"abstract":"People with color vision deficiency (CVD) face several difficulties in performing daily tasks because they often fall outside of the culturally, linguistically, and educationally modulated majority opinion. This study aims to develop a device that can seamlessly input/output information based on the user's handling actions and to verify the validity of the support for daily decision-making of people with CVD. In this study, the use case is set as selecting clothes in a shop; online behavior observation is then conducted to design an assistive method and a watch-type device that shows useful information, such as the adjusted color and/or text for people with CVD on a display at the wrist is developed. An online user interview is conducted using a first-person perspective and bird's-eye perspective video with three CVD participants to verify the validity of the developed device for daily support. Consequently, the accuracy and effectiveness of the watch-type devices were determined. 
This study presents a prototyped proof-of-concept device in a remote environment, considering the coronavirus pandemic, and discusses the daily support for people with CVD.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126586002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
According to a World Bank report, about 15 percent of the world's population (roughly 1 billion people) experience some form of disability [3]. However, designers can easily forget to take disabilities such as colorblindness into account, as most designers are not colorblind and accessibility tools are not integrated into design tools. In this work, we introduce and evaluate Adee, an accessibility testing tool that has been integrated into the widely used design tools Adobe XD, Figma, and Sketch. Adee aims to make accessibility part of the design process, to create inclusive and ethical products.
{"title":"Adee: Bringing Accessibility Right Inside Design Tools","authors":"Samine Hadadi","doi":"10.1145/3441852.3476478","DOIUrl":"https://doi.org/10.1145/3441852.3476478","url":null,"abstract":"According to the world bank organization report, about 15 percent of the world’s population (equal to 1 billion people) experience some form of disability [3]. However, designers can easily forget to take account of disabilities such as colorblindness, as most designers are not colorblind and tools for accessibility are not integrated into design tools. In this work, we introduce and evaluate Adee, an accessibility testing tool that has been integrated into widely used design tools Adobe XD, Figma and Sketch. Adee aims to make accessibility part of the design process, to create inclusive and ethical products.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126462992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kelly Avery Mack, Edward Cutrell, Bongshin Lee, M. Morris
Alternative (alt) text provides access to descriptions of digital images for people who use screen readers. While prior work has studied screen reader users’ (SRUs’) preferences about alt text and automatic alt text (i.e., alt text generated by artificial intelligence), little work has examined the alt text author’s experience composing or editing these descriptions. We built two types of prototype interfaces for two tasks: authoring alt text and providing feedback on automatic alt text. Through combined interview-usability testing sessions with alt text authors and interviews with SRUs, we tested the effectiveness of our prototypes in the context of Microsoft PowerPoint. Our results suggest that authoring interfaces that support authors in choosing what to include in their descriptions result in higher quality alt text. The feedback interfaces highlighted considerable differences in the perceptions of authors and SRUs regarding “high-quality” alt text. Finally, authors crafted significantly lower quality alt text when starting from the automatic alt text compared to starting from a blank box. We discuss the implications of these results for applications that support alt text.
{"title":"Designing Tools for High-Quality Alt Text Authoring","authors":"Kelly Avery Mack, Edward Cutrell, Bongshin Lee, M. Morris","doi":"10.1145/3441852.3471207","DOIUrl":"https://doi.org/10.1145/3441852.3471207","url":null,"abstract":"Alternative (alt) text provides access to descriptions of digital images for people who use screen readers. While prior work studied screen reader users’ (SRUs’) preferences about alt text and automatic alt text (i.e., alt text generated by artificial intelligence), little work examined the alt text author’s experience composing or editing these descriptions. We built two types of prototype interfaces for two tasks: authoring alt text and providing feedback on automatic alt text. Through combined interview-usability testing sessions with alt text authors and interviews with SRUs, we tested the effectiveness of our prototypes in the context of Microsoft PowerPoint. Our results suggest that authoring interfaces that support authors in choosing what to include in their descriptions result in higher quality alt text. The feedback interfaces highlighted considerable differences in the perceptions of authors and SRUs regarding “high-quality” alt text. Finally, authors crafted significantly lower quality alt text when starting from the automatic alt text compared to starting from a blank box. 
We discuss the implications of these results on applications that support alt text.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130000772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People with visual impairments (PVIs) are less likely to participate in physical activity than their sighted peers. One barrier is the lack of accessible group-based aerobic exercise classes, often because instructors do not give accessible verbal instructions. While there is research in exercise tracking, these tools often require vision or familiarity with the exercise. Existing accessible solutions give personalized verbal feedback for slower-paced exercises but do not generalize to aerobics. In response, we have developed an algorithm that detects shoeprints on a sensor mat using computer vision and a CNN. From these detections we can infer whether a person is following along with a step aerobics workout, and we are designing reactive verbal feedback to guide the person to rejoin the class. Future work will include finishing development and conducting a user study to assess the effectiveness of the reactive verbal feedback.
{"title":"Increasing Access to Trainer-led Aerobic Exercise for People with Visual Impairments through a Sensor Mat System","authors":"Jeehan Malik, Mitchell Majure, Hana Gabrielle Rubio Bidon, Regan Lamoureux, Kyle Rector","doi":"10.1145/3441852.3476557","DOIUrl":"https://doi.org/10.1145/3441852.3476557","url":null,"abstract":"People with visual impairments (PVIs) are less likely to participate in physical activity than their sighted peers. One barrier is the lack of accessible group-based aerobic exercise classes, often due to instructors not giving accessible verbal instructions. While there is research in exercise tracking, these tools often require vision or familiarity with the exercise. There are accessible solutions that give personalized verbal feedback in slower-paced exercises, not generalizing to aerobics. In response, we have developed an algorithm that detects shoeprints on a sensor mat using computer vision and a CNN. We can infer whether a person is following along with a step aerobics workout and are designing reactive verbal feedback to guide the person to rejoin the class. Future work will include finishing development and conducting a user study to assess the effectiveness of the reactive verbal feedback.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129865633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fabiha Ahmed, Dennis Kuzminer, Michael Zachor, Lisa Ye, Rachel Josepho, W. Payne, Amy Hurst
Many blind musicians and composers read and write music using braille. Yet, braille music is not as widely available as print (visual) music, sighted collaborators and educators do not read braille music, and workflows and toolchains for converting between print and braille music are complex. In this research, we present Sound Cells, a music notation system that simultaneously outputs visual and braille notation, and provides audio feedback as a user writes music with text. We share findings from a design probe in which two experienced blind musicians notated music using Sound Cells and reflected on it in the context of their current notation practices. Finally, we highlight music navigation and customization of the output score as opportunities for further study.
{"title":"Sound Cells: Rendering Visual and Braille Music in the Browser","authors":"Fabiha Ahmed, Dennis Kuzminer, Michael Zachor, Lisa Ye, Rachel Josepho, W. Payne, Amy Hurst","doi":"10.1145/3441852.3476555","DOIUrl":"https://doi.org/10.1145/3441852.3476555","url":null,"abstract":"Many blind musicians and composers read and write music using braille. Yet, braille music is not as widely available as print (visual) music, sighted collaborators and educators do not read braille music, and workflows and toolchains for converting between print and braille music are complex. In this research, we present Sound Cells, a music notation system that simultaneously outputs visual and braille notation, and provides audio feedback as a user writes music with text. We share findings from a Design Probe in which two experienced blind musicians notated music using Sound Cells and reflected on it in the context of their current notation practices. Finally, we highlight music navigation and outputted score customization as opportunities for further study.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129880945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}