ACM SIGACCESS Conference on Computers and Accessibility (ASSETS) is considered one of the premier venues for research on accessibility. Recently, Mack et al. shed light on the demographics, goals, research methodologies, and evolution of accessibility research over time. We extend their work by exploring the frequencies and trends of disability categories and computer science research domains in publications at ASSETS (N=1,678). Our results show that disability categories and research domains varied significantly across the publication years. We found that in the past 10 years, publications targeting Mental-Health-Related disabilities and the research domain of AR/VR show an increasing trend. In contrast, the Gaming, Input Methods/Interaction Techniques, and User Interfaces domains show a decreasing trend. Additionally, our results show that AI/ML/CV/NLP is the most common research domain (19%) and that people with visual disabilities are the most frequently studied group (42%). We share our preliminary exploration results and identify avenues for future work.
“What’s going on in Accessibility Research?” Frequencies and Trends of Disability Categories and Research Domains in Publications at ASSETS. Ather Sharif, Ploypilin Pruekcharoen, Thrisha Ramesh, Ruoxi Shang, Spencer Williams, and Gary Hsieh. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’22), October 2022. https://doi.org/10.1145/3517428.3550359
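The frequency analysis this abstract describes can be sketched as a simple tally over coded publication records. This is an illustrative sketch only: the records, category labels, and field names below are hypothetical stand-ins, not the authors' actual dataset or coding scheme.

```python
from collections import Counter

# Hypothetical coded records; the paper's real dataset (N=1,678) is not
# reproduced here.
publications = [
    {"year": 2020, "disability": "Visual", "domain": "AI/ML/CV/NLP"},
    {"year": 2020, "disability": "Mental-Health-Related", "domain": "AR/VR"},
    {"year": 2021, "disability": "Visual", "domain": "Gaming"},
    {"year": 2021, "disability": "Visual", "domain": "AI/ML/CV/NLP"},
]

def frequency(pubs, field):
    """Return each value's share of publications for one coded field."""
    counts = Counter(p[field] for p in pubs)
    return {value: count / len(pubs) for value, count in counts.items()}

print(frequency(publications, "disability"))
```

Grouping the same tally by year (rather than over the whole corpus) would give the per-year series on which a trend analysis like the paper's could run.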
Ather Sharif, Aneesha Ramesh, Trung-Anh H. Nguyen, Luna Chen, Kent Richard Zeng, Lanqing Hou, Xuhai Xu
Current web-based maps do not provide visibility into real-time elevator outages at urban rail transit stations, disenfranchising commuters (e.g., wheelchair users) who rely on functioning elevators at transit stations. In this paper, we demonstrate UnlockedMaps, an open-source and open-data web-based map that visualizes the real-time accessibility of urban rail transit stations in six North American cities, assisting users in making informed decisions regarding their commute. Specifically, UnlockedMaps uses a map to display transit stations, prominently highlighting their real-time accessibility status (accessible with functioning elevators, accessible but experiencing at least one elevator outage, or not accessible) and surrounding accessible restaurants and restrooms. UnlockedMaps is the first system to collect elevator outage data from 2,336 transit stations over 23 months and make it publicly available via an API. We report on results from our pilot user studies with five stakeholder groups: (1) people with mobility disabilities; (2) pregnant people; (3) cyclists/stroller users/commuters with heavy equipment; (4) members of disability advocacy groups; and (5) civic hackers.
UnlockedMaps: Visualizing Real-Time Accessibility of Urban Rail Transit Using a Web-Based Map. Ather Sharif, Aneesha Ramesh, Trung-Anh H. Nguyen, Luna Chen, Kent Richard Zeng, Lanqing Hou, and Xuhai Xu. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3550397
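The three-way status the map highlights can be illustrated with a tiny classifier. This is a sketch of the idea only; the function name and inputs are hypothetical and do not reflect UnlockedMaps' actual API or data model.

```python
def station_status(has_elevators: bool, outages: int) -> str:
    """Classify a station into the three states described in the abstract.
    Inputs are hypothetical; real outage data would come from transit feeds."""
    if not has_elevators:
        return "not accessible"
    if outages == 0:
        return "accessible"
    return "accessible, elevator outage reported"

print(station_status(True, 1))
```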
Autonomous navigation promises to be a significant frontier in the evolution of the power wheelchair as an assistive, enabling device, and is increasingly explored among researchers for its potential to unlock more accessible navigation for wheelchair users. While development of path-planning methods for wheelchairs is ongoing, there is a relative paucity of research on autonomous wheelchair navigation experiences that accommodate potential users’ needs. In this work, we present preliminary design considerations for the user experience of autonomous wheelchair navigation, derived from semi-structured interviews with ten (10) current wheelchair users about their willingness to use an autonomous navigation function and its applicability. Nine (9) of them expressed a willingness to use autonomous navigation in the near future across a range of contexts, while expressing attitudes such as expectations, concerns, and motivations related to their intent to use it. To better understand the impetus for these attitudes, we conducted a thematic analysis, revealing three higher-order factors and associated qualities that together serve as a framework for understanding participants’ intent. Finally, we identify three critical areas of focus that present opportunities and challenges for developers of a user-centered autonomous navigation experience for wheelchairs.
”I Should Feel Like I’m In Control”: Understanding Expectations, Concerns, and Motivations for the Use of Autonomous Navigation on Wheelchairs. JiWoong Jang, Yunzhi Li, and Patrick Carrington. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3550380
Ather Sharif, Andrew M. Zhang, Anna Shih, J. Wobbrock, Katharina Reinecke
Prior work has studied the interaction experiences of screen-reader users with simple online data visualizations (e.g., bar charts, line graphs, scatter plots), highlighting the disenfranchisement of screen-reader users in accessing information from these visualizations. However, the interactions of screen-reader users with online geospatial data visualizations, commonly used by visualization creators to represent geospatial data (e.g., COVID-19 cases per US state), remain unexplored. In this work, we study how screen-reader users interact with, and extract information from, online geospatial data visualizations. Specifically, we conducted a user study with 12 screen-reader users to understand the information they seek from online geospatial data visualizations and the questions they ask to extract that information. We utilized our findings to generate a taxonomy of information sought from our participants’ interactions. Additionally, we extended the functionalities of VoxLens—an open-source multi-modal solution that improves data visualization accessibility—to enable screen-reader users to extract information from online geospatial data visualizations.
Understanding and Improving Information Extraction From Online Geospatial Data Visualizations for Screen-Reader Users. Ather Sharif, Andrew M. Zhang, Anna Shih, J. Wobbrock, and Katharina Reinecke. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3550363
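Extracting information from a geospatial visualization for spoken output can be pictured as answering the most common queries (extremes, average) over a region-to-value mapping. The sketch below is loosely in the spirit of such summaries; the function name, query set, and data are hypothetical, not VoxLens's actual interface.

```python
def summarize(values_by_region: dict) -> str:
    """Produce one spoken-style summary of a choropleth-like mapping.
    Queries chosen here (highest, lowest, average) are illustrative."""
    highest = max(values_by_region, key=values_by_region.get)
    lowest = min(values_by_region, key=values_by_region.get)
    mean = sum(values_by_region.values()) / len(values_by_region)
    return (f"Highest: {highest} ({values_by_region[highest]}). "
            f"Lowest: {lowest} ({values_by_region[lowest]}). "
            f"Average: {mean:.1f}.")

print(summarize({"WA": 120, "OR": 80, "CA": 400}))
```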
Over-medicalized, individualistic, and non-equity-oriented perspectives on disability, which position disabled people as deficient and dysfunctional and locate the ’problem’ of disability within the individual, have led to the oppression, discrimination, and exclusion of disabled people from important parts of public life. The global politics of disability rights and disability movements have raised thorny questions about the nature of these dominant explanations. Equity-oriented perspectives and collaborative approaches to the organization and distribution of access have started to gain visibility. HCI research has vital potential to contribute here by providing tools and technologies for integrating equal access into the collaborative organization of access. Considering the existing literature, the question of how access is collaboratively organized, negotiated, distributed, and scaled through socio-technical mechanisms, especially at an institutional level, as well as how mixed-ability groups reorganize access by interacting with institutional socio-technical structures, remains open. In this research, I aim to extend the literature on collaborative access by demonstrating the importance of socio-technical perspectives for designing collaborative technologies that support the equal distribution of access. My research concerns the significance of equity perspectives in access and interaction. Specifically, it focuses on understanding the role of socio-technical infrastructures in the organization and distribution of access by mixed-ability collaborators and on developing design insights for socio-technical mechanisms that support the equal distribution of access for people with disabilities.
Understanding the Role of Socio-Technical Infrastructures on the Organization of Access for the Mixed-Ability Collaborators. Z. Yıldız. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3550410
The visual and mouse-centric nature of block-based programming environments generally makes them inaccessible and challenging to use for users with visual impairments who rely on assistive technologies to interact with computers. This prevents these users from participating in programming activities where these systems are used. This paper presents a prototype of an accessible block-based programming library, called Accessible Blockly, that allows users to create and navigate block-based code using a screen reader and a keyboard; it is an attempt to make the popular Blockly library accessible through a screen reader and keyboard. In this paper, we present the design and implementation of Accessible Blockly. We also discuss an evaluation of the library for block-based code navigation in a study with 12 blind programmers. Analysis of the study results shows that Accessible Blockly effectively aids users in reading and understanding block-based code. Participants found Accessible Blockly easy to use and less frustrating for navigating block-based programs. The participants also expressed enthusiasm and interest in using the keyboard and screen reader to navigate block-based code and in the accessibility of block-based programming.
Accessible Blockly: An Accessible Block-Based Programming Library for People with Visual Impairments. Aboubakar Mountapmbeme, Obianuju Okafor, and S. Ludi. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3544806
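Keyboard navigation of block-based code reduces to walking a tree of nested blocks in a predictable order. The sketch below shows that core idea with a depth-first traversal; the class, labels, and program are hypothetical and are not Accessible Blockly's actual implementation.

```python
class Block:
    """A minimal stand-in for a program block with nested child blocks."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def traversal_order(block):
    """Depth-first order in which Next/Previous keys would visit blocks,
    so a screen reader announces a container before its contents."""
    order = [block.label]
    for child in block.children:
        order.extend(traversal_order(child))
    return order

program = Block("repeat 10", [Block("move forward"), Block("turn right")])
print(traversal_order(program))  # ['repeat 10', 'move forward', 'turn right']
```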
Captioning provides access to sounds in audio-visual content for people who are Deaf or Hard-of-hearing (DHH). As user-generated content in online videos grows in prevalence, researchers have explored using automatic speech recognition (ASR) to automate captioning. However, definitions of captions (as compared to subtitles) include non-speech sounds, which ASR typically does not capture as it focuses on speech. Thus, we explore DHH viewers’ and hearing video creators’ perspectives on captioning non-speech sounds in user-generated online videos using text or graphics. Formative interviews with 11 DHH participants informed the design and implementation of a prototype interface for authoring text-based and graphic captions using automatic sound event detection, which was then evaluated with 10 hearing video creators. Our findings include identifying DHH viewers’ interests in having important non-speech sounds included in captions, as well as various criteria for sound selection and the appropriateness of text-based versus graphic captions of non-speech sounds. Our findings also include hearing creators’ requirements for automatic tools to assist them in captioning non-speech sounds.
Beyond Subtitles: Captioning and Visualizing Non-speech Sounds to Improve Accessibility of User-Generated Videos. Oliver Alonzo, Hijung Valentina Shin, and Dingzeyu Li. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3544808
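A sound-event-detection pipeline for captioning typically yields timestamped labels with confidence scores, which the authoring tool then filters and renders. The sketch below illustrates that shape; the field names, threshold, and bracketed rendering are hypothetical, not the paper's prototype.

```python
def caption_sound_events(events, min_confidence=0.5):
    """Keep confident detections and render them as bracketed caption
    entries, the common convention for non-speech sounds ([dog barking])."""
    return [
        (event["start"], f"[{event['label']}]")
        for event in events
        if event["confidence"] >= min_confidence
    ]

detected = [
    {"start": 1.2, "label": "dog barking", "confidence": 0.91},
    {"start": 4.0, "label": "door slam", "confidence": 0.32},  # filtered out
]
print(caption_sound_events(detected))  # [(1.2, '[dog barking]')]
```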
Felix M. Schmitt-Koopmann, Elaine M. Huang, Alireza Darvishy
People with visual impairments use assistive technology, e.g., screen readers, to navigate and read PDFs. However, such screen readers need extra information about the logical structure of the PDF, such as the reading order, header levels, and mathematical formulas, described in readable form to navigate the document in a meaningful way. This logical structure can be added to a PDF with tags. Creating tags for a PDF is time-consuming and requires awareness and expert knowledge. Hence, most PDFs are left untagged, and as a result, they are poorly readable or unreadable for people who rely on screen readers. STEM documents, with their complex document structure and complicated mathematical formulas, are particularly problematic. These inaccessible PDFs present a major barrier for people with visual impairments wishing to pursue studies or careers in STEM fields, who cannot easily read studies and publications from their field. The goal of this Ph.D. is to apply artificial intelligence for document analysis to automate much of the PDF remediation process and to present a solution for making large mathematical formulas in PDFs accessible. With these new methods, the Ph.D. research aims to lower the barriers to creating accessible scientific PDFs by reducing the time, effort, and expertise necessary to do so, ultimately facilitating greater access to scientific documents for people with visual impairments.
Accessible PDFs: Applying Artificial Intelligence for Automated Remediation of STEM PDFs. Felix M. Schmitt-Koopmann, Elaine M. Huang, and Alireza Darvishy. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3550407
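The tag-based logical structure the abstract describes is a tree of roles (heading, paragraph, formula, ...) that a screen reader walks to obtain the reading order. The sketch below models that with plain tuples; the roles and tree shape are a simplified illustration, not the actual PDF tag model or the authors' system.

```python
def reading_order(node):
    """Depth-first walk of a (role, payload) tag tree, returning leaf tags
    in the order a screen reader would announce them."""
    role, payload = node
    if isinstance(payload, list):  # container: recurse into children
        leaves = []
        for child in payload:
            leaves.extend(reading_order(child))
        return leaves
    return [(role, payload)]  # leaf: announced as-is

# Hypothetical tagged STEM document fragment.
document = ("Document", [
    ("H1", "Results"),
    ("P", "We evaluated the model on ..."),
    ("Formula", "a^2 + b^2 = c^2"),
])
print(reading_order(document))
```

Automated remediation, in this framing, amounts to inferring such a tree (roles, order, formula content) from an untagged PDF's layout.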
There are multiple perspectives that must be considered while designing a prosthetic limb: the engineering requirements; the visual and aesthetic appeal; the needs and wants of the client; and how these designs impact users physically, socially, and mentally. Historically, the main focus of design has been on the engineering requirements, while the individual's needs and desires for aesthetics have been neglected as secondary concerns. However, these aspects still play important roles in how users view and connect with their own limbs. To better understand the impact that aesthetic design has on the individual, this exploratory case study aimed to create custom-designed prosthetic limb covers for lower-limb amputees. This poster presents the findings of the research and design process, including related work; the interview, design, and prototyping processes; and issues that occurred during the project and how they were either overcome or could be addressed in the future.
ProAesthetics: Changing How We View Prosthetic Function. Susan Abler and Foad Hamidi. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3550386
Sonification is a commonly used technique to make online data visualizations accessible to screen-reader users through auditory means. While current sonification solutions provide plausible utility (usefulness) to screen-reader users in exploring data visualizations, they are limited in exploring the quality (usability) of the sonified responses. In this preliminary exploration, we investigated the usability and user-friendliness of data visualization sonification for screen-reader users. Specifically, we evaluated the Pleasantness, Clarity, Confidence, and Overall Score of discrete and continuous sonified responses generated using various oscillator waveforms and synthesizers through user studies with 10 screen-reader users. Additionally, we examined these factors using both simple and complex trends. Our results show that screen-reader users preferred distinct non-continuous responses generated using oscillators with square waveforms. We utilized our findings to extend the functionality of Sonifier—an open-source JavaScript library that enables developers to sonify online data visualizations. Our follow-up interviews with screen-reader users identified the need to personalize the sonified responses per their individualized preferences.
“What Makes Sonification User-Friendly?” Exploring Usability and User-Friendliness of Sonified Responses. Ather Sharif, Olivia H. Wang, and Alida T. Muongchan. ASSETS ’22, October 2022. https://doi.org/10.1145/3517428.3550360
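The finding above, that participants preferred distinct, non-continuous square-wave tones, can be pictured as mapping each data point to its own short square-wave burst. The sketch below is in Python for brevity (Sonifier itself is a JavaScript library), and the frequency range and durations are hypothetical parameters, not the study's settings.

```python
def square_wave(freq_hz, duration_s=0.2, sample_rate=8000):
    """Samples of a square wave (+1/-1), the waveform participants
    preferred over other oscillator shapes."""
    n = int(duration_s * sample_rate)
    period = sample_rate / freq_hz  # samples per cycle
    return [1.0 if (i % period) < period / 2 else -1.0 for i in range(n)]

def sonify(values, f_lo=220.0, f_hi=880.0):
    """One discrete tone per data point, pitch scaled to the value's
    position in the data range (mapping parameters are illustrative)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [
        square_wave(f_lo + (value - lo) / span * (f_hi - f_lo))
        for value in values
    ]

tones = sonify([3, 7, 5])  # three separate bursts: low, high, middle pitch
```

A continuous variant would instead glide one oscillator's frequency across the series; the study's participants favored the discrete form sketched here.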