K. Venkatasubramanian, Tina-Marie Ranalli, Jack Lanoie, A. Sinapi, Andrew Laraw Lama, Jeanine Skorinko, Mariah Freark, Nancy A. Alterio
In the United States, the abuse of individuals with intellectual and developmental disabilities (I/DD) has reached epidemic proportions. However, the reporting of such abuse has been severely lacking. Individuals with I/DD are more aware of when and how to report abuse when they have received abuse-prevention training. Consequently, in this article we present the design and prototyping of a mobile-computing app called Recognize that empowers adults with I/DD to independently learn about abuse. To this end, we first conducted an auto-ethnographic co-design of Recognize with individuals and self-advocates from the I/DD community. Next, based on the outcomes of the co-design process, we developed three initial prototype variants of Recognize and performed a preliminary user study with six individuals with I/DD who had experience teaching others with I/DD about abuse. Based on the findings of this preliminary study, we created a consolidated prototype of Recognize and performed a more detailed qualitative user study with 11 individuals with I/DD who represented the eventual users of Recognize. The participants in this study found the app viable for use by individuals with I/DD. We end the article with a discussion of the implications of our findings for the development of a deployable version of Recognize and similar apps.
{"title":"The Design and Prototyping of an App to Teach Adults with Intellectual and Developmental Disabilities to Empower Them Against Abuse","authors":"K. Venkatasubramanian, Tina-Marie Ranalli, Jack Lanoie, A. Sinapi, Andrew Laraw Lama, Jeanine Skorinko, Mariah Freark, Nancy A. Alterio","doi":"10.1145/3569585","url":"https://doi.org/10.1145/3569585","journal":"ACM Transactions on Accessible Computing","periodicalIF":2.4,"publicationDate":"2022-10-26","publicationType":"Journal Article"}
Saiph Savage, Claudia Flores-Saviaga, Rachel Rodney, Liliana Savage, J. Schull, Jennifer Mankoff
The popularity of 3D printed assistive technology has led to the emergence of new ecosystems of care, where multiple stakeholders (makers, clinicians, and recipients with disabilities) work toward creating new upper limb prosthetic devices. However, despite this growth, we currently know little about the differences between these care ecosystems. Medical regulations and the prevailing culture have greatly impacted how ecosystems are structured and stakeholders work together, including whether clinicians and makers collaborate. To better understand these care ecosystems, we interviewed a range of stakeholders from multiple countries, including Brazil, Chile, Costa Rica, France, India, Mexico, and the U.S. Our broad analysis allowed us to uncover different working examples of how multiple stakeholders collaborate within these care ecosystems and the main challenges they face. Through our study, we found that ecosystems with multi-stakeholder collaborations exist (something prior work had not observed), and that these ecosystems showed increased success and impact. We also identified some of the key follow-up practices that reduce device abandonment. In particular, it is important that ecosystems put in place follow-up practices that integrate formal agreements and compensation for participation (which need not be purely monetary). We found that these features helped to ensure multi-stakeholder involvement and ecosystem sustainability. We finish the article with socio-technical recommendations for creating vibrant care ecosystems that include multiple stakeholders in the production of 3D printed assistive devices.
{"title":"The Global Care Ecosystems of 3D Printed Assistive Devices","authors":"Saiph Savage, Claudia Flores-Saviaga, Rachel Rodney, Liliana Savage, J. Schull, Jennifer Mankoff","doi":"10.1145/3537676","url":"https://doi.org/10.1145/3537676","journal":"ACM Transactions on Accessible Computing","publicationDate":"2022-10-22","publicationType":"Journal Article"}
Auditory overviews of routes can provide routing and map information to blind users, enabling them to preview route maps before embarking on a journey. This article investigates the usefulness of a system designed to do this through a Preliminary Survey, followed by a Design Study to gather the design requirements, the development of a prototype, and an evaluation through a Usability Study. The design was developed in two stages with eight audio designers and eight potential blind users. The auditory route overview is sequential and automatically generated as integrated audio. It comprises auditory icons to represent points of interest, earcons for auditory brackets encapsulating repeating points of interest, and speech for directions. A prototype based on this design was developed and evaluated with 22 sighted and eight blind participants. The software architecture of the prototype, including route-information retrieval and its mapping onto audio, is also described. The findings show that both groups perform well in route reconstruction and recognition tasks. Moreover, the functional route information and auditory icons are effectively designed and useful in forming a mental model of the route, which improves over time. However, the design of auditory brackets needs further improvement and testing. At all stages of the system development, input was acquired from the end-user population and the design adapted accordingly.
{"title":"Planning Your Journey in Audio: Design and Evaluation of Auditory Route Overviews","authors":"Nida Aziz, T. Stockman, Rebecca Stewart","doi":"10.1145/3531529","url":"https://doi.org/10.1145/3531529","journal":"ACM Transactions on Accessible Computing","publicationDate":"2022-10-22","publicationType":"Journal Article"}
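The overview structure the abstract describes (auditory icons for points of interest, earcon brackets encapsulating repeats, speech for directions) can be sketched as a simple mapping from route steps to audio events. The event format and function name below are illustrative assumptions, not the authors' implementation:

```python
from itertools import groupby

def route_to_audio_events(route):
    """Map a route to a sequence of audio events.

    route: list of (kind, label) tuples, e.g. ("poi", "cafe") or ("turn", "left").
    Returns a list of ("icon" | "earcon" | "speech", label) events.
    """
    events = []
    # groupby with no key collapses consecutive identical steps into one run.
    for step, group in groupby(route):
        run = list(group)
        kind, label = step
        if kind == "poi" and len(run) > 1:
            # Repeating points of interest are encapsulated between
            # opening and closing earcon "brackets".
            events += [("earcon", "open"), ("icon", label), ("earcon", "close")]
        elif kind == "poi":
            events.append(("icon", label))
        else:
            # Directions (turns etc.) are rendered as speech.
            events.append(("speech", label))
    return events
```

In a real system each event would be rendered as a pre-recorded icon, a synthesized earcon, or text-to-speech output; here the list itself stands in for the integrated audio.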
D. Ahmetovic, C. Bernareggi, B. Leporini, S. Mascetti
WordMelodies is a mobile app that aims to support inclusive teaching of literacy skills for primary school students. It was therefore designed to be accessible both visually and through a screen reader, and it includes over 80 different types of exercises for practicing literacy skills, each with adjustable difficulty levels, in Italian and in English. WordMelodies is freely available for iOS and Android devices. However, it had not previously been evaluated with children with visual impairments. Thus, in this article, we evaluate the app's usability, its perceived ease of use, its appreciation, and children's autonomy while using it, as well as the characteristics of its end users. To achieve this, we conducted a user study with 11 primary school students with visual impairments, and we analyzed app usage logs collected from 408 users over more than a year following the app's publication. We show that the app's usability is high and that most exercises can be completed autonomously. The exercises were also perceived as easy to perform and were appreciated by the participants. Finally, we provide insights on how to address the identified app limitations and propose future research directions.
{"title":"WordMelodies: Supporting the Acquisition of Literacy Skills by Children with Visual Impairment through a Mobile App","authors":"D. Ahmetovic, C. Bernareggi, B. Leporini, S. Mascetti","doi":"10.1145/3565029","url":"https://doi.org/10.1145/3565029","journal":"ACM Transactions on Accessible Computing","publicationDate":"2022-10-20","publicationType":"Journal Article"}
Francisco Iniesto, Tim Coughlan, Kate Lister, Peter Devine, Nick Freear, Richard Greenwood, Wayne Holmes, Ian Kenny, Kevin McLeod, Ruth Tudor
Administrative processes are ubiquitous in modern life and have been identified as a particular burden to those with accessibility needs. Students who have accessibility needs often have to understand guidance, fill in complex forms, and communicate with multiple parties to disclose disabilities and access appropriate support. Conversational user interfaces (CUIs) could allow us to reimagine such processes, yet there is currently limited understanding of how to design these to be accessible, or whether such an approach would be preferred. In the ADMINS (Assistants for the Disclosure and Management of Information about Needs and Support) project, we implemented a virtual assistant (VA) designed to enable students to disclose disabilities and to provide guidance and suggestions about appropriate support. ADMINS explores the potential of CUIs to reduce administrative burden and improve the experience of arranging support by replacing a static form with written or spoken dialogue. This article reports the results of two trials conducted during the project. A beta trial using an early version of the VA provided an understanding of accessibility challenges and issues in user experience. The beta trial sample included 22 students who had already disclosed disabilities and 3 disability support advisors. After improvements to the design, a larger main trial was conducted with 134 students who disclosed their disabilities to the university using both the VA and the existing form-based process. The results show that most participants preferred the VA to completing the form (64.9% vs. 23.9%). Qualitative and quantitative feedback from the trials also identified accessibility and user experience barriers relevant to improving CUI design, and an understanding of benefits and preferences that can inform further development of accessible CUIs for this design space.
{"title":"Creating ‘a Simple Conversation’: Designing a Conversational User Interface to Improve the Experience of Accessing Support for Study","authors":"Francisco Iniesto, Tim Coughlan, Kate Lister, Peter Devine, Nick Freear, Richard Greenwood, Wayne Holmes, Ian Kenny, Kevin McLeod, Ruth Tudor","doi":"10.1145/3568166","url":"https://doi.org/10.1145/3568166","journal":"ACM Transactions on Accessible Computing","publicationDate":"2022-10-14","publicationType":"Journal Article"}
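The core design idea here, recasting a static disclosure form as a dialogue that asks one question at a time, can be illustrated with a minimal slot-filling sketch. The field names and prompts are invented for illustration and are not the ADMINS VA's actual implementation:

```python
# Hypothetical form fields, each paired with a conversational prompt.
FORM_FIELDS = [
    ("disability", "Could you tell me about your disability or condition?"),
    ("study_impact", "How does it affect your study?"),
    ("support", "What support has helped you before?"),
]

def next_prompt(answers):
    """Return the question for the first still-unanswered field, or None when done.

    answers: dict mapping field name -> the student's free-text reply so far.
    """
    for field, prompt in FORM_FIELDS:
        if field not in answers:
            return prompt
    return None
```

Driving the conversation then amounts to looping: ask `next_prompt(answers)`, record the reply under the corresponding field, and stop when it returns `None`; the collected `answers` dict holds the same information the static form would have.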
Deaf persons, whether or not they are sign language users, make up one of several marginalized populations that historically have been socially and politically underrepresented. Unfortunately, this also happens in technology design. Conducting user studies in which marginalized populations are represented is a step towards guaranteeing their right to participate in choices and decisions that are made for, with, and by them. This article presents and discusses results from a Systematic Literature Review (SLR) of user studies in the design of systems for Automatic Sign Language Processing (ASLP). Following our SLR protocol, from 2,486 papers initially found, we applied inclusion and exclusion criteria to finally select 37 papers for our review. We excluded publications that were not full papers, were not related to our main topic of interest, or reported results that had been updated by more recent papers. All the selected papers focus on user studies as a basis for the design of three major aspects of ASLP: generation (ASLG), recognition (ASLR), and translation (ASLT). With regard to our specific area of interest, we analyzed four areas related to our research questions: goals and research methods, types of user involvement in the interaction design life cycle, cultural and collaborative aspects, and other lessons learned from the primary studies under review. Salient findings from our analysis show that numerical scale questionnaires are the most frequently used research instruments, that co-designing ASLP systems with sign language users is not a common practice (potential users are included mostly in the evaluation phase), and that Deaf persons who are sign language users are only seldom included as members of research teams. These findings point to the need to conduct more inclusive and qualitative research for, with, and by Deaf persons who are sign language users.
{"title":"A Systematic Review of User Studies as a Basis for the Design of Systems for Automatic Sign Language Processing","authors":"S. Prietch, J. A. Sánchez, J. Guerrero","doi":"10.1145/3563395","url":"https://doi.org/10.1145/3563395","journal":"ACM Transactions on Accessible Computing","publicationDate":"2022-10-07","publicationType":"Journal Article"}
Karst M P Hoogsteen, Sarit Szpiro, Gabriel Kreiman, Eli Peli
Blind people face difficulties with independent mobility, impacting employment prospects, social inclusion, and quality of life. Given the advancements in computer vision, with more efficient and effective automated information extraction from visual scenes, it is important to determine what information is worth conveying to blind travelers, especially since people have a limited capacity to receive and process sensory information. We aimed to investigate which objects in a street scene are useful to describe and how those objects should be described. Thirteen cane-using participants, five of whom were early blind, took part in two urban walking experiments. In the first experiment, participants were asked to voice their information needs in the form of questions to the experimenter. In the second experiment, participants were asked to score scene descriptions and navigation instructions, provided by the experimenter, in terms of their usefulness. The descriptions included a variety of objects with various annotations per object. Additionally, we asked participants to rank order the objects and the different descriptions per object in terms of priority and explain why the provided information is or is not useful to them. The results reveal differences between early and late blind participants. Late blind participants requested information more frequently and prioritized information about objects' locations. Our results illustrate how different factors, such as the level of detail, relative position, and what type of information is provided when describing an object, affected the usefulness of scene descriptions. Participants explained how they (indirectly) used information, but they were frequently unable to explain their ratings. The results distinguish between various types of travel information, underscore the importance of featuring these types at multiple levels of abstraction, and highlight gaps in current understanding of travel information needs. 
Elucidating the information needs of blind travelers is critical for the development of more useful assistive technologies.
{"title":"Beyond the Cane: Describing Urban Scenes to Blind People for Mobility Tasks.","authors":"Karst M P Hoogsteen, Sarit Szpiro, Gabriel Kreiman, Eli Peli","doi":"10.1145/3522757","url":"https://doi.org/10.1145/3522757","journal":"ACM Transactions on Accessible Computing","publicationDate":"2022-09-01","publicationType":"Journal Article","openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9491388/pdf/nihms-1834372.pdf"}
The results presented here were obtained from an experimental study of blind people's experiences on two routes with very different characteristics. They are intended to answer three research questions on how blind people identify environmental features while travelling and use environmental information to form spatial representations, and the implications for the design of electronic travel aids to better support mental mapping of space. The results include detailed discussions of the mainly tactile and auditory information used by blind people to identify objects, as well as the different combinations of sensory information used in forming mental maps, the approaches participants used to do this, and the sensory modalities involved. They also provide a categorisation of the main features in participants’ descriptions of the two routes. The answers to the three questions include a discussion of the relationship between the sensory information used in route descriptions and mental maps, and the implications of the results for the design of electronic travel aids to support mental mapping, including suggestions for new types of aids and guidelines for aid design.
{"title":"Route Descriptions, Spatial Knowledge and Spatial Representations of Blind and Partially Sighted People: Improved Design of Electronic Travel Aids","authors":"Marion A. Hersh, A. R. G. Ramirez","doi":"10.1145/3549077","url":"https://doi.org/10.1145/3549077","journal":"ACM Transactions on Accessible Computing","publicationDate":"2022-08-18","publicationType":"Journal Article"}
Danyang Fan, Alexa Fay Siu, Hrishikesh V. Rao, Gene S.-H. Kim, Xavier Vazquez, Lucy Greco, Sile O'Modhrain, Sean Follmer
Data visualization has become an increasingly important means of effective data communication and has played a vital role in broadcasting the progression of COVID-19. Accessible data representations, however, have lagged behind, leaving areas of information out of reach for many blind and visually impaired (BVI) users. In this work, we sought to understand (1) the accessibility of current implementations of visualizations on the web; (2) BVI users’ preferences and current experiences when accessing data-driven media; (3) how accessible data representations on the web address these users’ access needs and help them navigate, interpret, and gain insights from the data; and (4) the practical challenges that limit BVI users’ access and use of data representations. To answer these questions, we conducted a mixed-methods study consisting of an accessibility audit of 87 data visualizations on the web to identify accessibility issues, an online survey of 127 screen reader users to understand lived experiences and preferences, and a remote contextual inquiry with 12 of the survey respondents to observe how they navigate, interpret, and gain insights from accessible data representations. Our observations during this critical period of time provide an understanding of the widespread accessibility issues encountered across online data visualizations, the impact that data accessibility inequities have on the BVI community, the ways screen reader users sought access to data-driven information and made use of online visualizations to form insights, and the pressing need to make larger strides towards improving data literacy, building confidence, and enriching methods of access. Based on our findings, we provide recommendations for researchers and practitioners to broaden data accessibility on the web.
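One ingredient of an accessibility audit like the one described above can be automated: checking whether common chart containers expose any accessible name to a screen reader. The sketch below, using only Python's standard-library HTML parser, is a minimal illustration under assumed criteria; the tags and attributes checked are not the study's actual audit rubric.

```python
from html.parser import HTMLParser

# Illustrative choices, not the study's audit criteria: elements that
# commonly hold charts, and attributes that can supply an accessible name.
CHART_TAGS = {"svg", "img", "canvas"}
NAME_ATTRS = {"alt", "aria-label", "aria-labelledby"}

class ChartAccessibilityAuditor(HTMLParser):
    """Flag chart-like elements that carry no accessible name."""

    def __init__(self):
        super().__init__()
        self.findings = []  # (tag, line) pairs for unnamed elements

    def handle_starttag(self, tag, attrs):
        if tag in CHART_TAGS:
            present = {k for k, v in attrs if v not in (None, "")}
            if not (present & NAME_ATTRS):
                line, _ = self.getpos()
                self.findings.append((tag, line))

page = """
<div>
  <svg width="300" height="200"><rect/></svg>
  <img src="cases.png" alt="Daily COVID-19 cases, line chart">
</div>
"""
auditor = ChartAccessibilityAuditor()
auditor.feed(page)
print(auditor.findings)  # the unnamed <svg> is flagged; the labelled <img> is not
```

A real audit would go much further (checking the quality of the name, table alternatives, keyboard access), but this shows the mechanical core of flagging inaccessible visualizations at scale.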
{"title":"The Accessibility of Data Visualizations on the Web for Screen Reader Users: Practices and Experiences During COVID-19","authors":"Danyang Fan, Alexa Fay Siu, Hrishikesh V. Rao, Gene S.-H. Kim, Xavier Vazquez, Lucy Greco, Sile O'Modhrain, Sean Follmer","doi":"10.1145/3557899","DOIUrl":"https://doi.org/10.1145/3557899","journal":{"name":"ACM Transactions on Accessible Computing"},"publicationDate":"2022-08-18","publicationTypes":"Journal Article"}
For people with visual impairments, many studies have been conducted to improve the accessibility of various types of images on the web. However, the majority of this work has focused on photos or graphs. In this study, we propose AccessComics, an accessible digital comic book reader for people with visual impairments. To understand the accessibility of existing platforms, we first conducted a formative online survey with 68 participants who are blind or have low vision, asking about their prior experiences with audiobooks and eBooks. Then, to learn the implications of designing an accessible comic book reader for people with visual impairments, we conducted an interview study with eight participants and collected feedback about our system. Given our finding that a brief description of the scene and sound effects are desired when listening to comic books, we conducted a follow-up study with 16 participants (8 blind, 8 sighted) to explore how to effectively provide scene descriptions and sound effects generated from the onomatopoeic and mimetic words that appear in comics. We then assessed the impact on the overall reading experience and whether it differed depending on the user group. The results show that scene descriptions were perceived as useful for concentration and for understanding the situation, while the sound effects were perceived to make the book-reading experience more immersive and realistic.
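The mechanism of deriving sound effects from onomatopoeic words can be sketched as a simple lexicon lookup over panel text. This is a hypothetical illustration: the word list and audio filenames below are invented, and the actual lexicon and assets used by the system are not described in the abstract.

```python
import re

# Hypothetical lexicon mapping onomatopoeic/mimetic words to sound files;
# the system's real lexicon and audio assets are assumptions here.
SOUND_EFFECTS = {
    "bang": "bang.wav",
    "whoosh": "whoosh.wav",
    "knock": "knock.wav",
}

def effects_for_panel(panel_text: str) -> list[str]:
    """Return sound files to play alongside a panel's spoken narration."""
    words = re.findall(r"[a-z]+", panel_text.lower())
    return [SOUND_EFFECTS[w] for w in words if w in SOUND_EFFECTS]

print(effects_for_panel("BANG! The door flew open with a WHOOSH."))
# → ['bang.wav', 'whoosh.wav']
```

In a reader, the returned files would be mixed with (or interleaved between) the text-to-speech output for the panel, which is the effect the participants found immersive.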
{"title":"AccessComics2: Understanding the User Experience of an Accessible Comic Book Reader for Blind People with Textual Sound Effects","authors":"Yun Jung Lee, Hwayeon Joh, Suhyeon Yoo, U. Oh","doi":"10.1145/3555720","DOIUrl":"https://doi.org/10.1145/3555720","journal":{"name":"ACM Transactions on Accessible Computing"},"publicationDate":"2022-08-17","publicationTypes":"Journal Article"}