Recent studies have shown that adding sonification to stroke rehabilitation training is effective: it helps patients achieve better training results by positively affecting motor control, the somatosensory system, and patient engagement. This paper explores the concept of audio-based games in stroke rehabilitation, hypothesizing that removing the visual dimension might increase patients’ focus on the body part being trained. In an expert study with nine therapists, we evaluated Serenity, an audio-based rehabilitation game, as a design probe to explore the potential of audio-based games in rehabilitation training. The results show promise for further exploring the concept of audio-based gaming in stroke rehabilitation.
Yijing Jiang and D. Tetteroo. "Serenity: exploring audio-based gaming for arm-hand rehabilitation after stroke." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3550388
This work-in-progress explores factors that shape attitudes toward existing modes of teletherapy and offers a preliminary exploration of attitudes toward adopting virtual reality teletherapy (VRT) on college campuses. Five semi-structured interviews were conducted with 3 students and 2 campus counselors. Thus far, 3 primary themes (physical, social, and clinical factors), 8 secondary sub-themes, and 9 tertiary sub-themes have been identified as influencing participants’ attitudes toward counseling modalities, including their initial attitudes toward adopting VR for remote counseling services. These insights suggest that VRT may be more openly adopted where it improves access to campus counseling services and for specific clinical use cases in which students can translate learned skills to the real world. Concerns about VRT include students using VR to support antithetical behavior and the difficulty of establishing trust between users in a VR environment.
Vanny Chao and Roshan Peiris. "College Students’ and Campus Counselors’ Attitudes Toward Teletherapy and Adopting Virtual Reality (Preliminary Exploration) for Campus Counseling Services." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3550378
K. Fung, Z. Wen, Haotian Li, Xingbo Wang, S. Song, Huamin Qu
Students with specific learning disabilities (SLDs) often experience difficulties with reading, writing, attention, and physical movement coordination. However, in Hong Kong, it takes years for special education needs coordinators (SENCOs) and special-education teachers to pre-screen and diagnose students with SLDs. As a result, many students with SLDs miss the critical window for special interventions (i.e., before the age of six). In addition, although screening tools exist for students with SLDs in Chinese and in Indo-European languages (e.g., English and Spanish), none provides a student data visualization dashboard that could help teachers speed up the pre-screening process. We therefore designed a new visualization dashboard for Hong Kong SENCOs and special-education teachers to assist them in pre-screening students with SLDs. Our formative study showed that our current design met teachers’ need to quickly identify a student’s specific under-performing tasks and to effectively collect evidence about how the student was affected by SLDs. Future work will further test the efficacy of our design in real-world use.
K. Fung, Z. Wen, Haotian Li, Xingbo Wang, S. Song, and Huamin Qu. "Designing a Data Visualization Dashboard for Pre-Screening Hong Kong Students with Specific Learning Disabilities." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3550361
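The dashboard described above surfaces a student's under-performing tasks relative to peers. As a rough illustration of the kind of computation such a dashboard might perform (the paper does not specify its scoring logic; the task names, scores, and one-standard-deviation threshold below are illustrative assumptions, not the authors' method), here is a minimal sketch:

```python
# Hypothetical sketch: flag tasks where a student falls well below
# cohort norms. All names and numbers are illustrative assumptions.
from statistics import mean, stdev

def underperforming_tasks(student_scores, cohort_scores, k=1.0):
    """Return tasks where the student scores more than k sample
    standard deviations below the cohort mean for that task."""
    flagged = []
    for task, score in student_scores.items():
        cohort = cohort_scores[task]
        threshold = mean(cohort) - k * stdev(cohort)
        if score < threshold:
            flagged.append(task)
    return flagged

cohort = {
    "word reading": [78, 82, 85, 80, 79],
    "dictation":    [70, 75, 72, 74, 71],
    "rapid naming": [88, 90, 86, 91, 89],
}
student = {"word reading": 81, "dictation": 55, "rapid naming": 87}
print(underperforming_tasks(student, cohort))  # → ['dictation']
```

A real pre-screening tool would of course rest on validated norms and clinical cut-offs rather than an ad-hoc threshold; the sketch only shows the shape of the comparison a dashboard could visualize per task.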
Automated driving system (ADS) technology has been incorporated into critical driving functions (e.g., adaptive cruise control and autonomous braking) for over two decades. Now companies like GM and Google are developing and testing fully autonomous vehicles (AVs). However, current AVs are designed for individuals who can drive rather than for those who cannot, the very people who would benefit most from these vehicles. As the technology is still primarily experimental, conducting research with AVs and older adults or people with disabilities is infeasible. Therefore, to explore the accessibility of AVs, we conduct user enactment studies, as this method works well with technologies that participants have little to no experience using. This pictorial describes the iterative process of developing a high-fidelity enactment environment: the space and objects with which participants interact. The aim is to encourage the HCI community to utilize high-fidelity enactment environments for conducting research on accessible future technologies.
Aaron Gluck, Hannah M. Solini, and Julian Brinkley. "It's Enactment Time!: High-fidelity Enactment Stage for Accessible Automated Driving System Technology Research." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3551351
Tactile graphics (TG) are raised-line diagrams that represent pictorial content for people with visual impairments (PVI). While shapes may be simple to depict, objects often appear disjointed when represented in graphics. Researchers have evaluated the efficacy of TG and identified factors, such as size, complexity, and prior visual experience, that affect identification. However, the specific features of a graphic, and the aspects that enable recognition of (3D) objects through them, need investigation. We evaluated the interactions of 12 PVI (8 born blind and 4 late blind), all with minimal experience of TG or graphical representations, with 20 tactile stimuli including daily objects. We present graphic- and process-based factors that aid recognition. Our results indicate that unique elements, alone or in combination, lead to recognition. Born-blind participants were able to identify stimuli based solely on their prior tactile experience of the actual objects, the majority being depictions of objects with insignificant thickness. We observed mirrored hand movements for symmetrical shapes. A broad classification followed by deduction emerged as a common exploration strategy. These insights support improved understanding and evaluation of tactile graphics.
Anchal Sharma, P. Rao, and Srinivasan Venkataraman. "Object recognition from two-dimensional tactile graphics: What factors lead to successful identification through touch?" Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3550376
Saad Hassan, Akhter Al Amin, Caluã de Lacerda Pataca, Diego Navarro, Alexis Gordon, Sooyeon Lee, Matt Huenerfauth
As they develop comprehension skills, American Sign Language (ASL) learners often view challenging ASL videos, which may contain unfamiliar signs. Current dictionary tools require students to isolate a single sign they do not understand and input a search query, either by selecting linguistic properties or by performing the sign into a webcam. Students may struggle to extract and re-create an unfamiliar sign, and they must leave the video-watching task to use an external dictionary tool. We investigate a technology that enables users, in the moment (i.e., while viewing a video), to select a span of one or more signs they do not understand and view dictionary results. We interviewed 14 ASL learners about their challenges in understanding ASL video and their workarounds for unfamiliar vocabulary. We then conducted a comparative study and an in-depth analysis with 15 ASL learners to investigate the benefits of using video sub-spans for searching, and their interactions with a Wizard-of-Oz prototype during a video-comprehension task. Our findings revealed benefits of our tool in terms of the quality of video translations produced and the perceived workload of producing them. Our in-depth analysis also revealed the benefits of an integrated search tool and of using span selection to constrain video playback. These findings inform future designers of such systems, computer vision researchers working on the underlying sign-matching technologies, and sign language educators.
Saad Hassan, Akhter Al Amin, Caluã de Lacerda Pataca, Diego Navarro, Alexis Gordon, Sooyeon Lee, and Matt Huenerfauth. "Support in the Moment: Benefits and use of video-span selection and search for sign-language video comprehension among ASL learners." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3544883
Abraham Glasser, Fyodor O. Minakov, Danielle Bragg
The Deaf and Hard-of-Hearing (DHH) community faces a lack of information in American Sign Language (ASL) and other signed languages. Most informational resources are text-based (e.g., books, encyclopedias, newspapers, and magazines). Because DHH signers typically prefer ASL and are often less fluent in written English, text is often insufficient. At the same time, there is also a lack of large continuous sign language datasets from representative signers, which are essential to advancing sign language research and technology. In this work, we explore the possibility of crowdsourcing English-to-ASL translations to help address these barriers. To do this, we present a novel bilingual interface that enables the community to both contribute and consume translations. To shed light on the user experience with such an interface, we present a user study with 19 participants using the interface to both generate and consume content. To better understand the potential impact of the interface on translation quality, we also present a preliminary translation quality analysis. Our results suggest that DHH community members find real-world value in the interface and that the quality of translations is comparable to those created with state-of-the-art setups, and they shed light on future research avenues.
Abraham Glasser, Fyodor O. Minakov, and Danielle Bragg. "ASL Wiki: An Exploratory Interface for Crowdsourcing ASL Translations." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3544827
Athira Pillai, Kristen Shinohara, Garreth W. Tigwell
Website builders enable individuals without design or technical skills to create websites. However, it is unclear if modern websites created by website builders meet accessibility standards. We reviewed six popular website building platforms and found a lack of accessibility support. Wix provided the most comprehensive accessibility documentation and robust accessibility features. However, during an accessibility audit of 90 Wix webpages, we found many accessibility issues, raising concerns about how users are supported.
Athira Pillai, Kristen Shinohara, and Garreth W. Tigwell. "Website Builders Still Contribute To Inaccessible Web Design." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3550368
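An accessibility audit like the one described typically combines manual inspection with automated checks. As a minimal, hedged illustration of one such automated check (this is not the authors' audit procedure, which the abstract does not detail), the sketch below flags `img` elements lacking alt text using Python's standard-library HTML parser:

```python
# Hypothetical sketch of a single automated accessibility check:
# collecting <img> tags without a non-empty alt attribute. A real
# WCAG audit involves many more checks, and an empty alt="" is in
# fact valid for decorative images, so flagged items are candidates
# for human review rather than definite violations.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect the src of <img> tags lacking a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            if not attr_dict.get("alt"):
                self.violations.append(attr_dict.get("src", "<no src>"))

page = """
<img src="logo.png" alt="Company logo">
<img src="hero.jpg">
<img src="divider.png" alt="">
"""
checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # → ['hero.jpg', 'divider.png']
```

Automated checks of this kind catch only a fraction of accessibility issues, which is consistent with the paper's point that builder-generated pages still need deeper accessibility support than tooling alone provides.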
E. Higgins, W. Easley, Karen L. Gordes, Amy Hurst, Foad Hamidi
Digital fabrication has been shown to be an effective method for producing customized assistive technology (AT). However, utilizing these tools currently requires a high level of technical skill. Previous research showed that integrating these skills into physical therapy (PT) training is appropriate, but that the level of technical difficulty involved can be an issue. We worked to address these issues by introducing a group of PT students to maker concepts and having them develop custom AT for real end users with the help of makers. We present three considerations for integrating making into the PT curriculum: 1) including all stakeholders, 2) developing interdisciplinary competencies for PTs and makers, and 3) leveraging academic training programs to connect makers and PT students. In this paper, we contribute knowledge on how to facilitate the 3D printing of customized AT by PT students by connecting them with a community organization that provides digital fabrication services and technical expertise. By connecting multiple stakeholders (i.e., PT students, digital fabricators, and AT users), we offer an approach that overcomes PT students’ time and capacity constraints in utilizing advanced fabrication technologies to create customized AT.
E. Higgins, W. Easley, Karen L. Gordes, Amy Hurst, and Foad Hamidi. "Creating 3D Printed Assistive Technology Through Design Shortcuts: Leveraging Digital Fabrication Services to Incorporate 3D Printing into the Physical Therapy Classroom." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3544816
Urvashi Kokate, Kristen Shinohara, Garreth W. Tigwell
Many digital systems are found to be inaccessible, and a large part of the issue is that accessibility is not considered early enough in the design process. Digital prototyping tools are a powerful resource for designers to quickly explore both low- and high-fidelity design mockups during the initial stages of product design and development. We evaluated 10 popular prototyping tools to understand their built-in and third-party accessibility features. We found that accessible design support comes largely from third-party plug-ins rather than from the prototyping tools’ built-in features, and that the availability of accessibility support varies from tool to tool. There is an opportunity to improve accessible design by enabling accessibility to be considered earlier in the design process.
Urvashi Kokate, Kristen Shinohara, and Garreth W. Tigwell. "Exploring Accessibility Features and Plug-ins for Digital Prototyping Tools." Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 2022-10-22. https://doi.org/10.1145/3517428.3550391