In this article, we introduce Semantic Interior Mapology (SIM), a web app that allows anyone to quickly trace the floor plan of a building, generating a vectorized representation that can be automatically converted into a tactile map at the desired scale. The design of SIM is informed by a focus group with seven blind participants. Maps generated by SIM at two different scales were tested in a user study with 10 participants, who were asked to perform a number of tasks designed to ascertain the spatial knowledge acquired through map exploration. These tasks included cross-map pointing, path finding, and determination of turn direction and walker orientation during imagined path traversal. By and large, participants were able to complete the tasks successfully, suggesting that these types of maps could be useful for pre-journey spatial learning.
{"title":"Experimental Evaluation of Multi-scale Tactile Maps Created with SIM, a Web App for Indoor Map Authoring.","authors":"Viet Trinh, Roberto Manduchi, Nicholas A Giudice","doi":"10.1145/3590775","DOIUrl":"https://doi.org/10.1145/3590775","url":null,"abstract":"<p><p>In this article, we introduce Semantic Interior Mapology (SIM), a web app that allows anyone to quickly trace the floor plan of a building, generating a vectorized representation that can be automatically converted into a tactile map at the desired scale. The design of SIM is informed by a focus group with seven blind participants. Maps generated by SIM at two different scales have been tested by a user study with 10 participants, who were asked to perform a number of tasks designed to ascertain the spatial knowledge acquired through map exploration. These tasks included cross-map pointing and path finding, and determination of turn direction/walker orientation during imagined path traversal. By and large, participants were able to successfully complete the tasks, suggesting that these types of maps could be useful for pre-journey spatial learning.</p>","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10327626/pdf/nihms-1909104.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9812542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
V. P. Campos, L. Gonçalves, Wesnydy L. Ribeiro, T. Araújo, T. G. do Rêgo, Pedro H. V. Figueiredo, Suanny Vieira, Thiago F. S. Costa, Caio Moraes, Alexandre C. S. Cruz, F. A. Araújo, Guido L. Souza Filho
Automating the generation of audio descriptions (AD) for blind and visually impaired (BVI) people is a difficult task that involves several challenges: identifying gaps in dialogues; describing the essential elements; summarizing the descriptions and fitting them into the dialogue gaps; generating an AD narration track; and synchronizing it with the main soundtrack. In our previous work (Campos et al. [6]), we proposed a solution for automatic AD script generation, named CineAD, which uses the movie’s script as the basis for AD generation. This article extends that solution to complement the information extracted from the script, and to reduce the dependency on it, by classifying visual information from the video. To assess the viability of the proposed solution, we implemented a proof of concept and evaluated it with 11 blind users. The results showed that the extended solution generates more succinct and objective AD while preserving a level of user understanding similar to that of our previous work. Thus, the solution can provide relevant information to blind users while using less of the video’s running time for descriptions.
{"title":"Machine Generation of Audio Description for Blind and Visually Impaired People","authors":"V. P. Campos, L. Gonçalves, Wesnydy L. Ribeiro, T. Araújo, T. G. do Rêgo, Pedro H. V. Figueiredo, Suanny Vieira, Thiago F. S. Costa, Caio Moraes, Alexandre C. S. Cruz, F. A. Araújo, Guido L. Souza Filho","doi":"10.1145/3590955","DOIUrl":"https://doi.org/10.1145/3590955","url":null,"abstract":"Automating the generation of audio descriptions (AD) for blind and visually impaired (BVI) people is a difficult task, since it has several challenges involved, such as: identifying gaps in dialogues; describing the essential elements; summarizing and fitting the descriptions into the dialogue gaps; generating an AD narration track, and synchronizing it with the main soundtrack. In our previous work (Campos et al. [6]), we propose a solution for automatic AD script generation, named CineAD, which uses the movie’s script as a basis for the AD generation. This article proposes extending this solution to complement the information extracted from the script and reduce its dependency based on the classification of visual information from the video. To assess the viability of the proposed solution, we implemented a proof of concept of the solution and evaluated it with 11 blind users. The results showed that the solution could generate a more succinct and objective AD but with a similar users’ level of understanding compared to our previous work. Thus, the solution can provide relevant information to blind users using less video time for descriptions.","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2023-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86316917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We are pleased to present the first Special Issue on the International Web for All Conference (W4A) series featured in the ACM Transactions on Accessible Computing (TACCESS) journal. This volume presents seven articles that are extended versions of papers presented at the 18th International Web for All Conference, which was held online on April 19–20, 2021. Authors of several top papers from the conference submitted manuscripts for consideration, which then underwent a full journal review process. The guest editors for this issue are Victoria Yaneva (National Board of Medical Examiners, USA; University of Wolverhampton, UK) and Dragan Ahmetovic (University of Milan, Italy). The guest editors thank the authors for their excellent submissions, and they also thank all of the journal reviewers who contributed their time and expertise to this process.

The first article, titled “AccessComics2: Understanding the User Experience of an Accessible Comic Book Reader for Blind People with Textual Sound Effects,” proposes an accessible digital comic-book reader for people with visual impairments. Surveys and interviews with participants who are blind or have low vision revealed a preference for the inclusion of brief scene descriptions and sound effects. These components were integrated into the system and further evaluated, showing that the scene descriptions supported concentration and understanding, while the sound effects made the reading experience more immersive and realistic.

The second article, “The Transparency of Automatic Web Accessibility Evaluation Tools: Design Criteria, State of the Art, and User Perception,” presents a comprehensive survey of the instruments available for automated website accessibility evaluation, the metrics they adopt, and how these are presented to the user. Through a survey with 138 users of evaluation tools and a study with 18 accessibility and web design experts, the authors identify a number of design criteria aimed at supporting the transparency of the reported results and their interpretability by end users.

The third article, “The Accessibility of Data Visualizations on the Web for Screen Reader Users: Practices and Experiences during COVID-19,” explores the level of accessibility of web-based data visualizations for screen reader users. To this end, the authors conduct an accessibility audit of 87 data visualizations by 3 expert auditors, a follow-up survey with 127 screen reader users, and an observational study with 12 participants interacting with accessible web visualizations. A final discussion proposes recommendations for designing more accessible data visualizations.

The fourth article, “WordMelodies: Supporting the Acquisition of Literacy Skills by Children with Visual Impairment through a Mobile App,” presents a mobile app designed to support inclusive teaching of literacy skills for primary school students. The app includes over 80 different exercise types in Italian and English.
{"title":"Introduction to the Special Issue on W4A’21","authors":"Victoria Yaneva, D. Ahmetovic","doi":"10.1145/3587165","DOIUrl":"https://doi.org/10.1145/3587165","url":null,"abstract":"We are pleased to present the first Special Issue on the International Web for All Conference (W4A) series featured in the ACM Transactions on Accessible Computing (TACCESS) journal. This volume presents seven articles that are extended versions of the conference papers presented at the 18th International Web for All Conference, which was held online on April 19–20, 2021. Authors of several top papers from the conference submitted manuscripts for consideration, which then underwent a full journal review process. The guest editors for this issue are Victoria Yaneva (National Board of Medical Examiners, USA; University of Wolverhampton, UK) and Dragan Ahmetovic (University of Milan, Italy). The guest editors thank the authors for their excellent submissions, and they also thank all of the journal reviewers who contributed their time and expertise to this process. The first article, titled “AccessComics2: Understanding the User Experience of an Accessible Comic Book Reader for Blind People with Textual Sound Effects,” proposes an accessible digital comic-book reader for people with visual impairments. Surveys and interviews with participants who are blind or have low vision revealed preference for the inclusion of brief scene descriptions and sound effects. These components were integrated into the system and further evaluated, showing that the presence of scene descriptions was useful for concentration and understanding, while the sound effects made the book reading experience more immersive and realistic. The second article, “The Transparency of Automatic Web Accessibility Evaluation Tools: Design Criteria, State of the Art, and User Perception,” presents a comprehensive survey of the instruments available for automated website accessibility evaluation, the metrics they adopt, and how these are presented to the user. Through a survey with 138 users of evaluation tools and a study with 18 accessibility and web design experts, the authors identify a number of design criteria aimed to support the transparency of the reported results and their interpretability by end-users. The third article, “The Accessibility of Data Visualizations on the Web for Screen Reader Users: Practices and Experiences during COVID-19,” explores the level of accessibility of web-based data visualizations by screen reader users. To this end, the authors conduct an accessibility audit of 87 data visualizations by 3 expert auditors, a follow-up survey with 127 screen reader users, and an observational study with 12 participants interacting with accessible web visualizations. A final discussion proposes recommendations for designing more accessible data visualizations. The fourth article, “WordMelodies: Supporting the Acquisition of Literacy Skills by Children with Visual Impairment through a Mobile App,” presents a mobile app designed to support inclusive teaching of literacy skills for primary school students. 
The app includes over 80 different exercise types in Italian and Eng","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72445214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. J. Edwards, Michael Gilbert, Emily Blank, Stacy M. Branham
Many studies within Accessible Computing have investigated image accessibility, from what should be included in alternative text (alt text), to possible automated, human-in-the-loop, or crowdsourced approaches to alt text generation. However, the processes through which practitioners make alt text in situ have rarely been discussed. Through interviews with three artists and three accessibility practitioners working with Google, as well as 25 end users, we identify four processes of alt text creation used by this company—The User-Evaluation Process, The Lone Writer Process, The Team Write-A-Thon Process, and The Artist-Writer Process—and unpack their potential strengths and weaknesses as they relate to access and inclusive imagery. We conclude with a discussion of what alt text researchers and industry professionals can learn from considering alt text in situ, including opportunities to support user feedback, cross-contributor consistency, and organizational or technical changes to production processes.
{"title":"How the Alt Text Gets Made: What Roles and Processes of Alt Text Creation Can Teach Us About Inclusive Imagery","authors":"E. J. Edwards, Michael Gilbert, Emily Blank, Stacy M. Branham","doi":"10.1145/3587469","DOIUrl":"https://doi.org/10.1145/3587469","url":null,"abstract":"Many studies within Accessible Computing have investigated image accessibility, from what should be included in alternative text (alt text), to possible automated, human-in-the-loop, or crowdsourced approaches to alt text generation. However, the processes through which practitioners make alt text in situ have rarely been discussed. Through interviews with three artists and three accessibility practitioners working with Google, as well as 25 end users, we identify four processes of alt text creation used by this company—The User-Evaluation Process, The Lone Writer Process, The Team Write-A-Thon Process, and The Artist-Writer Process—and unpack their potential strengths and weaknesses as they relate to access and inclusive imagery. We conclude with a discussion of what alt text researchers and industry professionals can learn from considering alt text in situ, including opportunities to support user feedback, cross-contributor consistency, and organizational or technical changes to production processes.","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2023-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90616347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Kearney-Volpe, Chancey Fleet, Keita Ohshiro, Veronica Alfaro Arias, Eric Hao Xu, Amy Hurst
Despite a growing demand for web development and adjacent tech skills, there is a lack of accessible skills training for screen reader users. To address this gap, we developed tools and techniques to support screen reader users in learning web development. In this article, we describe our design, implementation, and evaluation of a nine-week web development workshop designed to introduce screen reader users to HTML, CSS, and JavaScript. We taught the remote workshop using synchronous lectures followed by one-on-one time with Teaching Assistants (TAs), and included a resource-rich website, tactile diagrams, and a discussion forum. We evaluated the effectiveness of our tools and the impact of the workshop during, immediately following, and one year after the workshop. At its conclusion, students demonstrated their knowledge of web development basics by creating and publishing their own websites, showed an increase in self-efficacy, and maintained a high level of interest in the subject. Participation also benefited TAs, who reported increased confidence in understanding accessibility concepts, increased interest in pursuing work related to accessibility, and plans to apply what they learned. One year after the workshop, both students and TAs reported a lasting impact. Most notably, students had applied their understanding of design concepts, reported that the workshop helped them prepare for career changes or helped them in their current job functions, and said that it gave them both the language and confidence to problem-solve web and accessibility issues. TAs felt that the workshop broadened their understanding of blind students’ abilities, especially when accessible materials and tools were provided; it gave them a better understanding of digital accessibility and assistive technologies, and they shared examples of how they continue to apply what they learned and advocate for accessibility. Based on these findings, we recommend techniques and tools to support screen reader users in learning web development, the inclusion of job-focused sub-topics, and ways of engaging with post-secondary institutions to pair service learning with tech skills training. We close with recommendations for implementing and adapting the workshop using our open-educational materials to expand the availability and breadth of accessible tech skills training and co-learning experiences for post-secondary students.
{"title":"Tangible Progress: Tools, Techniques, and Impacts of Teaching Web Development to Screen Reader Users","authors":"C. Kearney-Volpe, Chancey Fleet, Keita Ohshiro, Veronica Alfaro Arias, Eric Hao Xu, Amy Hurst","doi":"10.1145/3585315","DOIUrl":"https://doi.org/10.1145/3585315","url":null,"abstract":"Despite a growing demand for Web Development and adjacent tech skills, there is a lack of accessible skills training for screen reader users. To address this gap, we developed tools and techniques to support screen reader users in learning web development. In this article, we describe our design, implementation, and evaluation of a nine-week web development workshop, designed to introduce screen reader users to HTML, CSS, and JavaScript. We taught the remote workshop using synchronous lectures followed by one-on-one time with Teaching Assistants (TAs) and included a resource-rich website, tactile diagrams, and discussion forum. We evaluated the effectiveness of our tools and the impact of the workshop during, immediately following, and one year after the workshop. At its conclusion, students demonstrated their knowledge of web development basics by creating and publishing their own websites; showed an increase in self-efficacy; and maintained a high level of interest in the subject. Participation also benefited TAs who reported increased confidence in understanding accessibility concepts, increased interest in pursuing work related to accessibility, and plans to apply what they learned. One year after the workshop, both students and TAs reported a lasting impact. Most notably, students had applied their understanding of design concepts, reported that the workshop helped them prepare for career changes or helped them in their current job functions, and that it gave them both the language and confidence to problem-solve web and accessibility issues. TAs felt that the workshop broadened their understanding of blind students’ abilities; especially when provided with accessible materials and tools, it gave them a better understanding of digital accessibility and assistive technologies, and they shared examples of how they continue to apply learnings and advocate for accessibility. Based on these findings, we recommend techniques and tools to support screen reader users’ learning web development, the inclusion of job-focused sub-topics, and suggestions for engaging with post-secondary institutions to pair service learning with tech skills training. We close with recommendations for implementing and adapting the workshop using our open-educational materials to expand the availability and breadth of accessible tech skills training and co-learning experiences for post-secondary students.","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2023-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77581565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Screen reader plugins are small pieces of code that blind users can download and install to enhance the capabilities of their screen readers. This article aims to understand why blind users use these plugins, as well as how these plugins are developed, deployed, and maintained. To this end, we conducted an interview study with 14 blind users to gain individual perspectives and analyzed 2,000 online posts scraped from three plugin-related forums to gain the community perspective. Our study revealed that screen reader users rely on plugins for various reasons, such as to improve the usability of screen readers and application software, to make partially accessible applications accessible, and to receive custom auditory feedback. Furthermore, installing plugins is easy; uninstalling them is unlikely; and finding them online is ad hoc, challenging, and sometimes poses security threats. In addition, developing screen reader plugins is technically demanding, and only a handful of people develop plugins. Unfortunately, most plugins do not receive updates once distributed and become obsolete. The lack of financial incentives also plays a role in the slow growth of the plugin ecosystem. Further, we outline the complex, tripartite collaboration among individual blind users, their online communities, and developer communities in creating a plugin. Additionally, we report several phenomena within and between these communities that are likely to influence a plugin’s development. Based on our findings, we recommend creating a community-driven repository for all plugins hosted on a peer-to-peer infrastructure, engaging third-party developers, and raising general awareness about the benefits and dangers of plugins. We believe our findings will inspire HCI researchers to embrace the plugin-based distribution model as an effective way to combat accessibility and usability problems in non-visual interaction and to investigate potential ways to improve the collaboration between blind users and developer communities.
{"title":"Understanding the Usages, Lifecycle, and Opportunities of Screen Readers’ Plugins","authors":"Farhani Momotaz, Md Ehtesham-Ul-Haque, Syed Masum Billah","doi":"10.1145/3582697","DOIUrl":"https://doi.org/10.1145/3582697","url":null,"abstract":"Screen reader plugins are small pieces of code that blind users can download and install to enhance the capabilities of their screen readers. This article aims to understand why blind users use these plugins, as well as how these plugins are developed, deployed, and maintained. To this end, we conducted an interview study with 14 blind users to gain individual perspectives and analyzed 2,000 online posts scraped from three plugin-related forums to gain the community perspective. Our study revealed that screen reader users rely on plugins for various reasons, such as to improve the usability of screen readers and application software, to make partially accessible applications accessible, and to receive custom auditory feedback. Furthermore, installing plugins is easy; uninstalling them is unlikely; and finding them online is ad hoc, challenging, and sometimes poses security threats. In addition, developing screen reader plugins is technically demanding; only a handful of people develop plugins. Unfortunately, most plugins do not receive updates once distributed and become obsolete. The lack of financial incentives plays in the slow growth of the plugin ecosystem. Further, we outlined the complex, tripartite collaboration among individual blind users, their online communities, and developer communities in creating a plugin. Additionally, we reported several phenomena within and between these communities that are likely to influence a plugin’s development. Based on our findings, we recommend creating a community-driven repository for all plugins hosted on a peer-to-peer infrastructure, engaging third-party developers, and raising general awareness about the benefits and dangers of plugins. We believe our findings will inspire HCI researchers to embrace the plugin-based distribution model as an effective way to combat accessibility and usability problems in non-visual interaction and to investigate potential ways to improve the collaboration between blind users and developer communities.","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77953889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The dementia community faces major challenges in social engagement, which have been further complicated by the prolonged physical distancing measures due to the COVID-19 pandemic. Designing digital tools for in-person social sharing in family and care facility settings has been well explored, but comparatively little HCI work has focused on the design of community-based social technologies for virtual settings. We present our virtual fieldwork on remote social activities explored by one dementia community in response to the impacts of the pandemic. Building upon our previously published on-site fieldwork in this community, we expand on our initial publication by conducting follow-up interviews with caregivers and facilitators and by reflecting on a virtual social program. Through thematic analysis and by contrasting the in-person and online formats of the program, we deepen the understanding of virtual social engagement in the dementia community, examining their efforts to leverage physical objects and environments, enhance open and flexible experiences, and expand collaborative space. We propose to open new design opportunities through holistic approaches, including reimagining community social spaces, rethinking agency in people with dementia and caregivers, and diversifying HCI support across communities and stakeholders.
{"title":"Enriching Social Sharing for the Dementia Community: Insights from In-Person and Online Social Programs","authors":"Jiamin Dai, Karyn Moffatt","doi":"10.1145/3582558","DOIUrl":"https://doi.org/10.1145/3582558","url":null,"abstract":"The dementia community faces major challenges in social engagements, which have been further complicated by the prolonged physical distancing measures due to the COVID-19 pandemic. Designing digital tools for in-person social sharing in family and care facility settings has been well explored, but comparatively little HCI work has focused on the design of community-based social technologies for virtual settings. We present our virtual fieldwork on remote social activities explored by one dementia community in response to the impacts of the pandemic. Building upon our previously published on-site fieldwork in this community, we expand on our initial publication by follow-up interviewing caregivers and facilitators and reflecting on a virtual social program. Through thematic analysis and contrasting in-person and online formats of the program, we deepened the understanding of virtual social engagements of the dementia community, examining their efforts to leverage physical objects and environments, enhance open and flexible experiences, and expand collaborative space. We propose to open new design opportunities through holistic approaches, including reimagining community social spaces, rethinking agency in people with dementia and caregivers, and diversifying HCI support across communities and stakeholders.","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76343932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vineet Pandey, Ne Khan, Anoopum S. Gupta, Krzysztof Z Gajos
Methods for obtaining accurate quantitative assessments of motor impairments are essential in accessibility research, design of adaptive ability-based assistive technologies, as well as in clinical care and medical research. Currently, such assessments are typically performed in controlled laboratory or clinical settings under professional supervision. Emerging approaches for collecting data in unsupervised settings have been shown to produce valid data when aggregated over large populations, but it is not yet established whether in unsupervised settings measures of research or clinical significance can be collected accurately and reliably for individuals. We conducted a study with 13 children with ataxia-telangiectasia and 9 healthy children to analyze the validity, test-retest reliability, and acceptability of at-home use of a recent active digital phenotyping system, called Hevelius. Hevelius produces 32 measures derived from the movement trajectories of the mouse cursor and then generates a quantitative estimate of motor impairment in the dominant arm using the dominant arm component of the Brief Ataxia Rating Scale (BARS). The severity score estimates generated by Hevelius from single at-home sessions deviated from clinician-assigned BARS scores more than the severity score estimates generated from single sessions conducted under researcher supervision. However, taking a median of as few as 2 consecutive sessions produced severity score estimates that were as accurate or better than the estimates produced from single supervised sessions. Further, aggregating as few as 2 consecutive sessions resulted in good test-retest reliability (ICC = 0.81 for A-T participants). This work demonstrated the feasibility of performing accurate and reliable quantitative assessments of individual motor impairments in the dominant arm through tasks performed at home without supervision by the researchers. Further work is needed, however, to assess how broadly these results generalize.
{"title":"Accuracy and Reliability of At-Home Quantification of Motor Impairments Using a Computer-Based Pointing Task with Children with Ataxia-Telangiectasia","authors":"Vineet Pandey, Ne Khan, Anoopum S. Gupta, Krzysztof Z Gajos","doi":"10.1145/3581790","DOIUrl":"https://doi.org/10.1145/3581790","url":null,"abstract":"Methods for obtaining accurate quantitative assessments of motor impairments are essential in accessibility research, design of adaptive ability-based assistive technologies, as well as in clinical care and medical research. Currently, such assessments are typically performed in controlled laboratory or clinical settings under professional supervision. Emerging approaches for collecting data in unsupervised settings have been shown to produce valid data when aggregated over large populations, but it is not yet established whether in unsupervised settings measures of research or clinical significance can be collected accurately and reliably for individuals. We conducted a study with 13 children with ataxia-telangiectasia and 9 healthy children to analyze the validity, test-retest reliability, and acceptability of at-home use of a recent active digital phenotyping system, called Hevelius. Hevelius produces 32 measures derived from the movement trajectories of the mouse cursor and then generates a quantitative estimate of motor impairment in the dominant arm using the dominant arm component of the Brief Ataxia Rating Scale (BARS). The severity score estimates generated by Hevelius from single at-home sessions deviated from clinician-assigned BARS scores more than the severity score estimates generated from single sessions conducted under researcher supervision. However, taking a median of as few as 2 consecutive sessions produced severity score estimates that were as accurate or better than the estimates produced from single supervised sessions. Further, aggregating as few as 2 consecutive sessions resulted in good test-retest reliability (ICC = 0.81 for A-T participants). This work demonstrated the feasibility of performing accurate and reliable quantitative assessments of individual motor impairments in the dominant arm through tasks performed at home without supervision by the researchers. Further work is needed, however, to assess how broadly these results generalize.","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2023-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84527659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While accessibility is acknowledged as a crucial component in design, many technologies remain inaccessible for people with disabilities. As part of a study to better understand UX practice to inform pedagogy, we analyzed 58 interview sessions that included 65 senior user experience (UX) professionals and asked them “How do you consider accessibility in your work?” Using transitivity analysis from critical discourse analysis, our findings provide insight into the disparate practices of individuals and organizations. Key findings include the growing role of design systems to structurally address accessibility and the range of organizational strategies, including dedicated teams. We also found that the categories of accessibility consideration were somewhat superficial and largely focused on vision-related challenges. Additionally, our findings support previous work that many practitioners did not feel their formal education adequately prepared them to address accessibility. We conclude with implications for education and industry, namely, the importance of implementing and teaching design systems in human-computer interaction and computer-science programs.
{"title":"“It could be better. It could be much worse”: Understanding Accessibility in User Experience Practice with Implications for Industry and Education","authors":"C. Putnam, E. Rose, Craig M. Macdonald","doi":"10.1145/3575662","DOIUrl":"https://doi.org/10.1145/3575662","url":null,"abstract":"While accessibility is acknowledged as a crucial component in design, many technologies remain inaccessible for people with disabilities. As part of a study to better understand UX practice to inform pedagogy, we analyzed 58 interview sessions that included 65 senior user experience (UX) professionals and asked them “How do you consider accessibility in your work?” Using transitivity analysis from critical discourse analysis, our findings provide insight into the disparate practices of individuals and organizations. Key findings include the growing role of design systems to structurally address accessibility and the range of organizational strategies, including dedicated teams. We also found that the categories of accessibility consideration were somewhat superficial and largely focused on vision-related challenges. Additionally, our findings support previous work that many practitioners did not feel their formal education adequately prepared them to address accessibility. We conclude with implications for education and industry, namely, the importance of implementing and teaching design systems in human-computer interaction and computer-science programs.","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88124027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Since the first lockdown in 2020, video conferencing tools have become increasingly important for employment, education, and social interaction, making them essential tools in everyday life. This study investigates the accessibility and usability of the desktop and mobile versions of three popular video conferencing tools, Zoom, Google Meet, and MS Teams, for visually impaired people interacting via screen readers and keyboard or gestures. This involved two inspection evaluations to test the most important features of the desktop and mobile device versions and two surveys of visually impaired users to obtain information about the accessibility of the selected video conferencing tools. Sixty-five people answered the survey for desktop and 94 for mobile platforms. The results showed that Zoom was preferred to Google Meet and MS Teams but that none of the tools was fully accessible via screen reader and keyboard or gestures. Finally, the results of this empirical study were used to develop a set of guidelines for designers of video conferencing tools and assistive technology.
{"title":"Video Conferencing Tools: Comparative Study of the Experiences of Screen Reader Users and the Development of More Inclusive Design Guidelines","authors":"B. Leporini, M. Buzzi, Marion A. Hersh","doi":"10.1145/3573012","DOIUrl":"https://doi.org/10.1145/3573012","url":null,"abstract":"Since the first lockdown in 2020, video conferencing tools have become increasingly important for employment, education, and social interaction, making them essential tools in everyday life. This study investigates the accessibility and usability of the desktop and mobile versions of three popular video conferencing tools, Zoom, Google Meet, and MS Teams, for visually impaired people interacting via screen readers and keyboard or gestures. This involved two inspection evaluations to test the most important features of the desktop and mobile device versions and two surveys of visually impaired users to obtain information about the accessibility of the selected video conferencing tools. Sixty-five people answered the survey for desktop and 94 for mobile platforms. The results showed that Zoom was preferred to Google Meet and MS Teams but that none of the tools was fully accessible via screen reader and keyboard or gestures. Finally, the results of this empirical study were used to develop a set of guidelines for designers of video conferencing tools and assistive technology.","PeriodicalId":54128,"journal":{"name":"ACM Transactions on Accessible Computing","volume":null,"pages":null},"PeriodicalIF":2.4,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88663223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}