"An automated AR-based annotation tool for indoor navigation for visually impaired people" by Pei Du, N. Bulusu. DOI: 10.1145/3441852.3476561

People with low vision face many daily encumbrances. Traditional visual enhancements do not suffice to navigate indoor environments or recognize objects efficiently. In this paper, we explore how Augmented Reality (AR) can be leveraged to design mobile applications that improve the visual experience and unburden people with low vision. Specifically, we propose a novel automated AR-based annotation tool for detecting and labeling salient objects for assisted indoor navigation applications like NearbyExplorer. NearbyExplorer, which issues audio descriptions of nearby objects to users, relies on a database populated by large teams of volunteers and map-a-thons that manually annotate salient objects in the environment, such as desks, chairs, and low overhead ceilings. This has limited widespread and rapid deployment. Our tool builds on advances in automated object detection, AR labeling, and accurate indoor positioning to provide an automated way to upload object labels and user position to a database, requiring just one volunteer. Moreover, it enables people with low vision to detect and notice surrounding objects quickly using smartphones in various indoor environments.
"Sidewalk Gallery: An Interactive, Filterable Image Gallery of Over 500,000 Sidewalk Accessibility Problems" by Michael Duan, Aroosh Kumar, Michael Saugstad, Aileen Zeng, Ilia Savin, Jon E. Froehlich. DOI: 10.1145/3441852.3476542

What do sidewalk accessibility problems look like? How might these problems differ across cities? In this poster paper, we introduce Sidewalk Gallery, an interactive, filterable gallery of over 500,000 crowdsourced sidewalk accessibility images across seven cities in two countries (the US and Mexico). Sidewalk Gallery allows users to explore and interactively filter sidewalk images based on five primary accessibility problem types, 35 tag categories, and a 5-point severity scale. When browsing images, users can also provide feedback about data correctness. We envision Sidewalk Gallery as a teaching tool for urban design and accessibility, and as a visualization aid for disability advocacy.
"Walker - An Autonomous, Interactive Walking Aid" by Johannes Hackbarth, Caspar Jacob. DOI: 10.1145/3441852.3476552

In this paper, we describe ongoing work on a robotic walking frame designed to aid patients in an orthopaedic rehabilitation clinic. The so-called Walker can autonomously drive to patients and then change into a more traditional walking frame, i.e., one that the patient pushes, while still helping by giving navigation instructions. Walker was designed with a multi-modal user interface so that it can also be used by people with visual, hearing, or speech impairments.
"Adee: Bringing Accessibility Right Inside Design Tools" by Samine Hadadi. DOI: 10.1145/3441852.3476478

According to a World Bank report, about 15 percent of the world's population (roughly one billion people) experience some form of disability [3]. However, designers can easily forget to account for disabilities such as colorblindness, as most designers are not colorblind and accessibility tools are not integrated into design tools. In this work, we introduce and evaluate Adee, an accessibility testing tool integrated into the widely used design tools Adobe XD, Figma, and Sketch. Adee aims to make accessibility part of the design process, to create inclusive and ethical products.
"Measuring Text Comprehension for People with Reading Difficulties Using a Mobile Application" by Andreas Säuberli. DOI: 10.1145/3441852.3476474

Measuring text comprehension is crucial for evaluating the accessibility of texts in Easy Language. However, accurate and objective comprehension tests tend to be expensive, time-consuming, and sometimes difficult to implement for the target groups of Easy Language. In this paper, we propose computer-based testing with touchscreen devices as a means to simplify and accelerate data collection with comprehension tests, and to facilitate experiments with less proficient readers. We demonstrate this by designing and implementing a mobile touchscreen application and validating its effectiveness in an experiment with people with intellectual disabilities. The results suggest that there is no difference in task difficulty between measuring comprehension with the mobile application and with a traditional paper-and-pencil test. Moreover, reading times appear to be faster in the application than on paper.
"Rehabilitation through Accessible Mobile Gaming and Wearable Sensors" by D. Ahmetovic, Antonio Pugliese, S. Mascetti, Valentina Begnozzi, E. Boccalandro, R. Gualtierotti, F. Peyvandi. DOI: 10.1145/3441852.3476544

Play Access is an Android assistive technology that replaces touchscreen interaction with alternative interfaces, enabling people with upper extremity impairments to access mobile games and providing alternative means of playing mobile games for all. We demonstrate the use of Play Access to support physical therapy for children with haemophilia, with the goal of preventing long-term mobility impairments. To achieve this, we modified Play Access to enable the use of body movements, recognized using wearable sensors, as an alternative interface for playing games. This way, Play Access makes it possible to use existing Android games as exergames, better engaging patients' interest.
"Fostering collaboration with asymmetric roles in accessible programming environments for children with mixed-visual-abilities" by Filipa Rocha, Guilherme Guimarães, David Gonçalves, A. Pires, L. Abreu, T. Guerreiro. DOI: 10.1145/3441852.3476553

Introducing computational thinking training in early childhood supports cognitive development and better prepares children to live and prosper in a heavily computational future society. Programming environments are now widely adopted in classrooms to teach programming concepts. However, these tools often rely on visual interaction, making them inaccessible to children with visual impairments. Moreover, programming environments are usually designed to promote individual experiences, wasting the potential benefits of collaborative group activities. We propose the design of a programming environment that leverages asymmetric roles to foster collaborative computational thinking activities for children with visual impairments, particularly in mixed-visual-ability classes. The multimodal system combines tangible blocks and auditory feedback, and children have to collaborate to program a robot. We conducted a remote online study, collecting valuable feedback on the limitations and opportunities for future work, aiming to promote education and social inclusion.
"Colorable Band: A Wearable Device to Encourage Daily Decision Making Based on Behavior of Users with Color Vision Deficiency" by A. Uehara. DOI: 10.1145/3441852.3476570

People with color vision deficiency (CVD) face several difficulties in performing daily tasks because they often fall outside the culturally, linguistically, and educationally modulated majority opinion. This study aims to develop a device that can seamlessly input and output information based on the user's handling actions, and to verify the validity of its support for the daily decision-making of people with CVD. The use case is selecting clothes in a shop: online behavior observation is conducted to design an assistive method, and a watch-type device is developed that shows useful information, such as adjusted colors and/or text for people with CVD, on a display at the wrist. An online user interview is conducted with three CVD participants, using first-person and bird's-eye perspective video, to verify the validity of the developed device for daily support. Consequently, the accuracy and effectiveness of the watch-type device were determined. This study presents a proof-of-concept device prototyped in a remote environment, in light of the coronavirus pandemic, and discusses daily support for people with CVD.
"Meeting Participants with Intellectual Disabilities during COVID-19 Pandemic: Challenges and Improvisation" by L. Guedes, M. Landoni. DOI: 10.1145/3441852.3476566

With the COVID-19 pandemic, we all suffered from several restrictions and measures regulating interaction with one another. We had to wear masks, use hand sanitizer, hold open-air meetings, feel a combination of excitement and frustration, and eventually depend on online video calls. The combination of these additional requirements and limitations, while necessary, affected how we could involve users in the different stages of design. It has profoundly hindered our chances of meeting in person with people with temporary or permanent disabilities. In our project, which involves people with intellectual disabilities in the museum context, we also had to deal with museums being closed and physical exhibitions being canceled. At the same time, guardians and caregivers often turned to a stricter interpretation of anti-COVID measures to protect people with intellectual disabilities. This paper discusses these challenges and shares our lessons about coping with challenging and unpredictable situations through improvisation.
"Increasing Access to Trainer-led Aerobic Exercise for People with Visual Impairments through a Sensor Mat System" by Jeehan Malik, Mitchell Majure, Hana Gabrielle Rubio Bidon, Regan Lamoureux, Kyle Rector. DOI: 10.1145/3441852.3476557

People with visual impairments (PVIs) are less likely to participate in physical activity than their sighted peers. One barrier is the lack of accessible group-based aerobic exercise classes, often because instructors do not give accessible verbal instructions. While there is research on exercise tracking, these tools often require vision or familiarity with the exercise. Existing accessible solutions give personalized verbal feedback for slower-paced exercises but do not generalize to aerobics. In response, we have developed an algorithm that detects shoeprints on a sensor mat using computer vision and a convolutional neural network (CNN). From the shoeprints, we can infer whether a person is following along with a step aerobics workout, and we are designing reactive verbal feedback to guide the person to rejoin the class. Future work includes finishing development and conducting a user study to assess the effectiveness of the reactive verbal feedback.