New Metrics for Understanding Touch by People with and without Limited Fine Motor Function
Junhan Kong, Mingyuan Zhong, J. Fogarty, J. Wobbrock
DOI: 10.1145/3441852.3476559
Current performance measures for touch-based systems usually focus on overall performance, such as touch accuracy and target acquisition speed. But a touch is not an atomic event; it is a process that unfolds over time, and this process can be characterized to gain insight into users’ touch behaviors. To this end, our work proposes 13 target-agnostic touch performance metrics to characterize what happens during a touch. These metrics are: touch direction, variability, drift, duration, extent, absolute/signed area change, area variability, area deviation, absolute/signed angle change, angle variability, and angle deviation. Unlike traditional touch performance measures that treat a touch as a single (x, y) coordinate, we regard a touch as a time series of ovals that occur from finger-down to finger-up. We provide a mathematical formula and an intuitive description for each metric we propose. To evaluate our metrics, we ran an analysis on a publicly available dataset containing touch inputs by people with and without limited fine motor function, finding our metrics helpful in characterizing different fine motor control challenges. Our metrics can be useful to designers and evaluators of touch-based systems, particularly when making touch screens accessible to all forms of touch.
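To make the time-series framing concrete, here is a minimal sketch of how a few of the named metrics might be computed from a recorded touch. The `TouchSample` structure and the formulas are plausible interpretations for illustration only; they are not the paper's exact definitions.

```python
import math
from dataclasses import dataclass

@dataclass
class TouchSample:
    t: float      # timestamp in seconds
    x: float      # contact oval centroid x
    y: float      # contact oval centroid y
    area: float   # contact oval area
    angle: float  # contact oval orientation in radians

def touch_duration(samples):
    """Elapsed time from finger-down to finger-up."""
    return samples[-1].t - samples[0].t

def touch_drift(samples):
    """Straight-line distance between the first and last centroids."""
    return math.hypot(samples[-1].x - samples[0].x,
                      samples[-1].y - samples[0].y)

def touch_variability(samples):
    """Standard deviation of centroid distances from the mean centroid."""
    mx = sum(s.x for s in samples) / len(samples)
    my = sum(s.y for s in samples) / len(samples)
    dists = [math.hypot(s.x - mx, s.y - my) for s in samples]
    mean_d = sum(dists) / len(dists)
    return math.sqrt(sum((d - mean_d) ** 2 for d in dists) / len(dists))

def signed_area_change(samples):
    """Net change in contact area from finger-down to finger-up."""
    return samples[-1].area - samples[0].area

def absolute_area_change(samples):
    """Total unsigned frame-to-frame change in contact area."""
    return sum(abs(b.area - a.area) for a, b in zip(samples, samples[1:]))
```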
{"title":"New Metrics for Understanding Touch by People with and without Limited Fine Motor Function","authors":"Junhan Kong, Mingyuan Zhong, J. Fogarty, J. Wobbrock","doi":"10.1145/3441852.3476559","DOIUrl":"https://doi.org/10.1145/3441852.3476559","url":null,"abstract":"Current performance measures with touch-based systems usually focus on overall performance, such as touch accuracy and target acquisition speed. But a touch is not an atomic event; it is a process that unfolds over time, and this process can be characterized to gain insight into users’ touch behaviors. To this end, our work proposes 13 target-agnostic touch performance metrics to characterize what happens during a touch. These metrics are: touch direction, variability, drift, duration, extent, absolute/signed area change, area variability, area deviation, absolute/signed angle change, angle variability, and angle deviation. Unlike traditional touch performance measures that treat a touch as a single (x, y) coordinate, we regard a touch as a time series of ovals that occur from finger-down to finger-up. We provide a mathematical formula and intuitive description for each metric we propose. To evaluate our metrics, we run an analysis on a publicly available dataset containing touch inputs by people with and without limited fine motor function, finding our metrics helpful in characterizing different fine motor control challenges. Our metrics can be useful to designers and evaluators of touch-based systems, particularly when making touch screens accessible to all forms of touch.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117180932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Online Tests Contribute to the Support System for People With Cognitive and Mental Disabilities
Qisheng Li, Josephine Lee, C. Zhang, Katharina Reinecke
DOI: 10.1145/3441852.3471229
Roughly 1 in 3 people around the world are affected by cognitive or mental disabilities at some point in their lives, yet people often face a variety of barriers when seeking support and receiving a diagnosis from healthcare professionals. While prior work found that people with such disabilities assess themselves using online tests and assessments, it remains unknown whether and how effectively these tests fill gaps in healthcare and general support systems. To find out, we interviewed 17 adults with cognitive or mental disabilities about their motivations for and experiences using online tests. We learned that online tests act as an important resource that addresses the shortcomings in support systems for people with professionally diagnosed or suspected cognitive or mental disabilities. In particular, online tests can lower barriers to a professional diagnosis, provide valuable information about the nuances of a disability, and support people in forming a disability identity – an invaluable step towards a positive acceptance of oneself. Our results also uncovered challenges and risks that prevent people with known or suspected health conditions from fully taking advantage of online tests. Based on these findings, we discuss how online tests can be better leveraged to support people with cognitive or mental disabilities before and after professional diagnosis.
{"title":"How Online Tests Contribute to the Support System for People With Cognitive and Mental Disabilities","authors":"Qisheng Li, Josephine Lee, C. Zhang, Katharina Reinecke","doi":"10.1145/3441852.3471229","DOIUrl":"https://doi.org/10.1145/3441852.3471229","url":null,"abstract":"Roughly 1 in 3 people around the world are affected by cognitive or mental disabilities at some point in their lives, yet people often face a variety of barriers when seeking support and receiving diagnosis from healthcare professionals. While prior work found that people with such disabilities assess themselves using online tests and assessments, it remains unknown whether and how effectively these tests fill gaps in healthcare and general support systems. To find out, we interviewed 17 adults with cognitive or mental disabilities about their motivation for and experience using online tests. We learned that online tests act as an important resource that address the shortcomings in support systems for people with professionally diagnosed or suspected cognitive or mental disabilities. In particular, online tests can lower barriers to a professional diagnosis, provide valuable information about the nuances of a disability, and support people in forming a disability identity – an invaluable step towards a positive acceptance of oneself. Our results also uncovered challenges and risks that prevent people with known or suspected health conditions from fully taking advantage of online tests. Based on these findings, we discuss how online tests can be better leveraged to support people with cognitive or mental disabilities before and after professional diagnosis.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123206851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Navigational Habits of People who are Blind in India
Anirudh Nagraj, Ravi Kuber, Foad Hamidi, Raghavendra S. G. Prasad
DOI: 10.1145/3441852.3471203
Assistive navigational technologies offer considerable promise to people who are blind. However, uptake of these technologies has traditionally been lower in low- and middle-income countries (LMICs), where levels of investment in and maintenance of infrastructure differ from those in upper-middle-income (UMICs) and high-income countries (HICs). In this paper, we describe a qualitative study undertaken with 14 people who identify as legally blind in an LMIC (India) to understand their experiences and the strategies they use when navigating within a metropolitan area. We highlight a set of scenarios impacting people who are blind within the context studied. These include crossing busy highways, navigating in the rainy season, collaborating with others to navigate at night, and using older public transportation. Our work brings attention to areas where the latest successful and well-publicized innovations in blind navigation may fall short when used in an Indian metropolitan area. We suggest that designers should be cognizant of the role that infrastructure (particularly its shortcomings) and environmental factors may play when navigating in LMICs such as India, with a view to designing assistive navigational technologies that better match the needs of users in these contexts.
{"title":"Investigating the Navigational Habits of People who are Blind in India","authors":"Anirudh Nagraj, Ravi Kuber, Foad Hamidi, Raghavendra S. G. Prasad","doi":"10.1145/3441852.3471203","DOIUrl":"https://doi.org/10.1145/3441852.3471203","url":null,"abstract":"Assistive navigational technologies offer considerable promise to people who are blind. However, uptake of these technologies has traditionally been lower in low and middle income countries (LMICs), where levels of investment and maintenance in infrastructure differ from upper middle (UMICs) and high income countries (HICs). In this paper, we describe a qualitative study undertaken with 14 people who identify as legally-blind in an LMIC (India) to understand their experiences and strategies used when navigating within a metropolitan area. We highlight a set of scenarios impacting people who are blind within the context studied. These include crossing busy highways, navigating in the rainy season, collaborating with others to navigate at night, and using older public transportation. Our work brings attention to areas where the latest successful and well-publicized innovations in blind navigation may fall short when used in an Indian metropolitan area. We suggest that designers should be cognizant of the role that infrastructure (particularly its shortcomings) and environmental factors may play when navigating in LMICs such as India, with a view to designing assistive navigational technologies to better match the needs of users within these contexts.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121702049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clew3D: Automated Generation of O&M Instructions Using LIDAR-Equipped Smartphones
Hwei-Shin Harriman, D. Ahmetovic, S. Mascetti, D. Moyle, Michael Evans, P. Ruvolo
DOI: 10.1145/3441852.3476564
Certified orientation and mobility specialists (COMS) work with clients who are blind or visually impaired (BVI) to help them travel independently with confidence. Part of this process involves creating a narrative description of a route and using specific techniques to help the client internalize it. We focus on the problem of automatically generating a narrative description of an indoor route from a smartphone recording. These automatically generated narrations could be used when a COMS is not available, or to enable clients to independently practice routes that were originally learned with the help of a COMS. Specifically, we introduce Clew3D, a mobile app that leverages LIDAR-equipped iOS devices to identify orientation and mobility (O&M) landmarks and their relative locations along a recorded route. The identified landmarks are then used to provide a spoken narration modeled after traditional O&M techniques. Our solution was co-designed with COMS and uses the methods and language they employ when creating route narrations for their clients. In addition to presenting Clew3D, we report the results of an analysis conducted with COMS regarding techniques and terminology used in traditional, in-person O&M instruction. We also discuss the challenges vision-based systems face in producing reliable automatic narrations. Finally, we provide an example of an automatically generated route description and compare it with a description of the same route provided by a COMS.
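As a rough illustration of the narration step, the sketch below renders an ordered list of detected landmarks as numbered, spoken-style instructions. The `Landmark` structure, the landmark vocabulary, and the phrasing are hypothetical assumptions; Clew3D's actual O&M language was co-designed with COMS and is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    kind: str          # e.g., "door", "stairs" -- illustrative classes only
    distance_m: float  # distance along the route from the previous landmark
    side: str          # "left", "right", or "ahead"

def narrate_route(landmarks):
    """Render an ordered list of landmarks as a simple narration script."""
    steps = []
    for i, lm in enumerate(landmarks, start=1):
        paces = round(lm.distance_m / 0.75)  # rough conversion: ~0.75 m per pace
        steps.append(
            f"{i}. Continue about {paces} paces; the {lm.kind} is on your {lm.side}."
        )
    return "\n".join(steps)

print(narrate_route([
    Landmark("door", 6.0, "right"),
    Landmark("stairs", 12.5, "ahead"),
]))
```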
{"title":"Clew3D: Automated Generation of O&M Instructions Using LIDAR-Equipped Smartphones","authors":"Hwei-Shin Harriman, D. Ahmetovic, S. Mascetti, D. Moyle, Michael Evans, P. Ruvolo","doi":"10.1145/3441852.3476564","DOIUrl":"https://doi.org/10.1145/3441852.3476564","url":null,"abstract":"Certified orientation and mobility specialists (COMS) work with clients who are blind or visually impaired (BVI) to help them travel independently with confidence. Part of this process involves creating a narrative description of a route and using specific techniques to help the client internalize it. We focus on the problem of automatically generating a narrative description of an indoor route based on a recording from a smartphone. These automatically generated narrations could be used in cases where a COMS is not available or to enable clients to independently practice routes that were originally learned with the help of a COMS. Specifically, we introduce Clew3D, a mobile app that leverages LIDAR-equipped iOS devices to identify orientation and mobility (O&M) landmarks and their relative location along a recorded route. The identified landmarks are then used to provide a spoken narration modeled after traditional O&M techniques. Our solution is co-designed with COMS and uses methods and language that they employ when creating route narrations for their clients. In addition to presenting Clew3D, we report the results of an analysis conducted with COMS regarding techniques and terminology used in traditional, in-person O&M instruction. We also discuss challenges posed by vision-based systems to achieve automatic narrations that are reliable. Finally, we provide an example of an automatically generated route description and compare it with the same route provided by a COMS.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"52 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121013903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Intelligent Math E-Tutoring System for Students with Specific Learning Disabilities
Z. Wen, Yuhang Zhao, Erica Silverstein, Shiri Azenkot
DOI: 10.1145/3441852.3476568
Students with specific learning disabilities (SLDs) often experience negative emotions when solving math problems, emotions they have difficulty managing. This is one reason that current math e-learning tools, which elicit these negative emotions, are not effective for these students. We designed an intelligent math e-tutoring system that aims to reduce students’ negative emotional behaviors. The system automatically detects possible negative emotional behaviors by analyzing gaze, inputs on the touchscreen, and response time. It then uses one of four intervention methods (e.g., hints or brain breaks) to prevent students from becoming upset. To inform this design, we conducted a formative study with five teachers of students with SLDs. The teachers thought that the four intervention methods would help students with SLDs; among the four, they found brain breaks novel and particularly useful for these students. The teachers also suggested that the system should personalize its detection of negative emotional behaviors to help students who have more severe learning disabilities.
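A hedged sketch of the detect-then-intervene loop described above: simple per-problem statistics stand in for the system's gaze, touchscreen, and response-time analysis. The thresholds, signal names, and intervention labels are illustrative assumptions, not the system's actual model.

```python
import random

# Hypothetical thresholds; the paper specifies only the signal types
# (gaze, touchscreen input, response time), not these values.
OFF_SCREEN_GAZE_RATIO = 0.5   # fraction of time gaze is off the problem
ERASE_COUNT_LIMIT = 3         # repeated erasing may suggest frustration
RESPONSE_TIME_LIMIT_S = 90.0  # a very long response time may suggest being stuck

INTERVENTIONS = ["hint", "brain_break", "encouragement", "easier_problem"]

def detect_negative_behavior(gaze_off_ratio, erase_count, response_time_s):
    """Flag possible frustration from simple per-problem statistics."""
    return (gaze_off_ratio > OFF_SCREEN_GAZE_RATIO
            or erase_count >= ERASE_COUNT_LIMIT
            or response_time_s > RESPONSE_TIME_LIMIT_S)

def choose_intervention():
    """Pick one of the four intervention methods; per the teachers'
    feedback, a real system would personalize this choice per student."""
    return random.choice(INTERVENTIONS)

if detect_negative_behavior(gaze_off_ratio=0.6, erase_count=1, response_time_s=40):
    print(f"Intervening with: {choose_intervention()}")
```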
{"title":"An Intelligent Math E-Tutoring System for Students with Specific Learning Disabilities","authors":"Z. Wen, Yuhang Zhao, Erica Silverstein, Shiri Azenkot","doi":"10.1145/3441852.3476568","DOIUrl":"https://doi.org/10.1145/3441852.3476568","url":null,"abstract":"Students with specific learning disabilities (SLDs) often experience negative emotions when solving math problems, which they have difficulty managing. This is one reason that current math e-learning tools, which elicit these negative emotions, are not effective for these students. We designed an intelligent math e-tutoring system that aims to reduce students’ negative emotional behaviors. The system automatically detects possible negative emotional behaviors by analyzing gaze, inputs on the touchscreen, and response time. It then uses one of four intervention methods (e.g., hints or brain breaks) to prevent students from being upset. To form this design, we conducted a formative study with five teachers for students with SLDs. The teachers thought that the design of four intervention methods would help students with SLDs. Among the four intervention methods, providing brain breaks is new and particularly useful for the students. The teachers also suggested that the system should personalize the detection of negative emotional behaviors to help students who have more severe learning disabilities.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121504992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At a Different Pace: Evaluating Whether Users Prefer Timing Parameters in American Sign Language Animations to Differ from Human Signers’ Timing
Sedeeq Al-khazraji, Becca Dingman, Sooyeon Lee, Matt Huenerfauth
DOI: 10.1145/3441852.3471214
Adding American Sign Language (ASL) versions of information content to websites can improve information accessibility for many people who are Deaf or Hard of Hearing (DHH) and may have lower levels of English literacy. Generating animations from a script representation would enable this content to be easily updated, yet software is needed that can set detailed speed and timing parameters for such animations, which prior work has revealed to be critical for their understandability and acceptance among DHH users. Despite recent work on predicting these parameters using AI models trained on recordings of human signers, no prior work had examined whether DHH users actually prefer these speed and timing properties to be similar to those of humans, or to be exaggerated, e.g., for additional clarity. We conducted two empirical studies to investigate ASL signers’ preferences for the speed and timing parameters of ASL animations: sign duration, transition time, differential signing rate, pause length, and pausing frequency. Our first study (N=20) identified two preferred values from among five options for each parameter, one of which was a typical human value for that parameter, and a second study (N=20) identified the most preferred value. We found that while ASL signers preferred pause length and frequency to be similar to those of humans, they actually preferred animations with faster signs, slower transitions, and less dynamic variation in differential signing speed, as compared to the timing of human signers. This study provides specific empirical guidance for creators of future ASL animation technologies and, more broadly, demonstrates that it is not safe to assume that ASL signers will simply prefer ASL animations whose properties are as similar as possible to those of human signers.
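One way to make the five parameters concrete is as an animation timing configuration expressed relative to a human-signer baseline. The multipliers below only encode the direction of the reported preferences (faster signs, slower transitions, flatter rate variation, human-like pauses); the actual preferred values come from the paper, not from this sketch.

```python
from dataclasses import dataclass

@dataclass
class TimingParams:
    sign_duration_scale: float     # <1.0: faster signs than the human baseline
    transition_time_scale: float   # >1.0: slower transitions
    rate_variation_scale: float    # <1.0: flatter differential signing rate
    pause_length_scale: float      # 1.0: human-like pause length
    pause_frequency_scale: float   # 1.0: human-like pausing frequency

# Illustrative settings reflecting only the direction of the reported
# preferences; the study's measured values are not reproduced here.
preferred = TimingParams(
    sign_duration_scale=0.9,
    transition_time_scale=1.2,
    rate_variation_scale=0.8,
    pause_length_scale=1.0,
    pause_frequency_scale=1.0,
)

def scaled_sign_time(baseline_s: float, params: TimingParams) -> float:
    """Scale a human-recorded sign duration by the preferred multiplier."""
    return baseline_s * params.sign_duration_scale

print(scaled_sign_time(0.8, preferred))  # 0.8 s human sign -> 0.72 s animation
```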
{"title":"At a Different Pace: Evaluating Whether Users Prefer Timing Parameters in American Sign Language Animations to Differ from Human Signers’ Timing","authors":"Sedeeq Al-khazraji, Becca Dingman, Sooyeon Lee, Matt Huenerfauth","doi":"10.1145/3441852.3471214","DOIUrl":"https://doi.org/10.1145/3441852.3471214","url":null,"abstract":"Adding American Sign Language (ASL) versions of information content to websites can improve information accessibility for many people who are Deaf or Hard of Hearing (DHH) who may have lower levels of English literacy. Generating animations from a script representation would enable this content to be easily updated, yet software is needed that can set detailed speed and timing parameters for such animations, which prior work has revealed to be critical for their understandability and acceptance among DHH users. Despite recent work on predicting these parameters using AI models trained on recordings of human signers, no prior work had examined whether DHH users actually prefer for these speed and timing properties to be similar to humans, or to be exaggerated, e.g. for additional clarity. We conducted two empirical studies to investigate preferences of ASL signers for speed and timing parameters of ASL animations, including: sign duration, transition time, differential signing rate, pause length, and pausing frequency. Our first study (N=20) identified two preferred values from among five options for each parameter, one of which included a typical human value for this parameter, and a second study (N=20) identified the most preferred value. We found that while ASL signers preferred pause length and frequency to be similar to those of humans, they actually preferred animations to have faster signs, slower transitions, and less dynamic variation in differential signing speed, as compared to the timing of human signers. This study provides specific empirical guidance for creators of future ASL animation technologies, and more broadly, it demonstrates that it is not safe to assume that ASL signers will simply prefer for properties of ASL animations to be as similar as possible to human signers.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"12 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116855750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disability-first Dataset Creation: Lessons from Constructing a Dataset for Teachable Object Recognition with Blind and Low Vision Data Collectors
Lida Theodorou, Daniela Massiceti, L. Zintgraf, S. Stumpf, C. Morrison, Edward Cutrell, Matthew Tobias Harris, Katja Hofmann
DOI: 10.1145/3441852.3471225
Artificial Intelligence (AI) for accessibility is a rapidly growing area, requiring datasets that are inclusive of the disabled users that assistive technology aims to serve. We offer insights from a multi-disciplinary project that constructed a dataset for teachable object recognition with people who are blind or have low vision. Teachable object recognition enables users to teach a model objects that are of interest to them, e.g., their white cane or their own sunglasses, by providing example images or videos of the objects. In this paper, we make the following contributions: 1) a disability-first procedure that supports blind and low vision data collectors in producing good-quality data, using video rather than images; 2) a validation and evolution of this procedure through a series of data collection phases; and 3) a set of questions to orient researchers involved in creating datasets toward reflecting on the needs of their participant community.
{"title":"Disability-first Dataset Creation: Lessons from Constructing a Dataset for Teachable Object Recognition with Blind and Low Vision Data Collectors","authors":"Lida Theodorou, Daniela Massiceti, L. Zintgraf, S. Stumpf, C. Morrison, Edward Cutrell, Matthew Tobias Harris, Katja Hofmann","doi":"10.1145/3441852.3471225","DOIUrl":"https://doi.org/10.1145/3441852.3471225","url":null,"abstract":"Artificial Intelligence (AI) for accessibility is a rapidly growing area, requiring datasets that are inclusive of the disabled users that assistive technology aims to serve. We offer insights from a multi-disciplinary project that constructed a dataset for teachable object recognition with people who are blind or low vision. Teachable object recognition enables users to teach a model objects that are of interest to them, e.g., their white cane or own sunglasses, by providing example images or videos of objects. In this paper, we make the following contributions: 1) a disability-first procedure to support blind and low vision data collectors to produce good quality data, using video rather than images; 2) a validation and evolution of this procedure through a series of data collection phases and 3) a set of questions to orient researchers involved in creating datasets toward reflecting on the needs of their participant community.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131175494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Using Live Photos to Mitigate Image Quality Issues In VQA Photography
Lauren Olson, C. Kambhamettu, Kathleen F. McCoy
DOI: 10.1145/3441852.3476541
Studies show that Visual Question Answering (VQA) systems are valuable tools for people with visual impairments to quickly obtain information from an image. In this poster, we present our ongoing work towards developing uses of live photos to mitigate quality issues in photos taken by people with visual impairments for VQA. New contributions building on our prior research include an expanded live photos dataset, a more in-depth analysis of VQA results, and an analysis of features of our live photos compared with existing data collected from people with visual impairments. We show that live photos are a promising method for improving VQA accuracy and that our sample live photos mimic the types of images taken in real-world settings by people with visual impairments for the task of VQA.
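One plausible way to exploit a live photo, shown below, is to treat it as a short burst of frames and keep the least blurry one before sending it to the VQA system. This frame-selection heuristic is an illustrative assumption, not necessarily the authors' pipeline.

```python
import cv2  # pip install opencv-python

def sharpest_frame(frames):
    """Return the frame with the highest variance of the Laplacian,
    a common blur score: higher variance means sharper edges."""
    def blur_score(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return max(frames, key=blur_score)
```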
{"title":"Towards Using Live Photos to Mitigate Image Quality Issues In VQA Photography","authors":"Lauren Olson, C. Kambhamettu, Kathleen F. McCoy","doi":"10.1145/3441852.3476541","DOIUrl":"https://doi.org/10.1145/3441852.3476541","url":null,"abstract":"Studies show Visual Question Answering (VQA) systems are valuable tools for people with visual impairments to quickly obtain information from an image. In this poster, we present our ongoing work towards developing uses of live photos to mitigate quality issues in photography from people with visual impairments for VQA. New contributions building on our prior research include an expanded live photos dataset, a more in-depth analysis of VQA results, and an analysis of features of our live photos compared with existing data collected from people with visual impairments. We show that live photos are a promising method for improving accuracy of VQA and that our sample live photos mimic the types of images taken in real-world settings by people with visual impairments for the task of VQA.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128463519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Making Math Graphs More Accessible in Remote Learning: Using Sonification to Introduce Discontinuity in Calculus
Keita Ohshiro, Amy Hurst, R. DuBois
DOI: 10.1145/3441852.3476533
Math graphs need to be accessible to People with Visual Impairments (PVI). While tactile graphics are a common way for PVI to access math graphs, their use becomes complicated in remote learning. To make math graphs more accessible in remote education, we focused on sonification, the use of non-speech sound. In this study, we designed techniques for sonifying math graphs to introduce the concept of discontinuity in calculus to PVI. First, we conducted remote interviews with six participants to understand their experiences with math education using graphs. Based on these findings, we developed a series of sonifications of math graphs that we remotely evaluated with three participants from our initial interviews. Our findings reveal that, with a little practice, sonification can intuitively convey simple patterns and trends in math graphs, is useful for introducing discontinuities, and is more effective when accompanied by descriptions of the sound and the graphs.
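As a minimal sketch of one pitch-mapping approach (an assumption; the study's actual sound design is not reproduced here), the code below sweeps across a function's domain, maps y-values to frequency, and renders undefined points as silence, so a removable discontinuity becomes an audible gap.

```python
import wave
import numpy as np

SAMPLE_RATE = 44100

def sonify(f, x_min, x_max, duration_s=3.0, f_lo=220.0, f_hi=880.0):
    """Map y = f(x) linearly onto a pitch range; points where f is
    undefined (NaN) become silence, so a hole is heard as a gap."""
    n = int(SAMPLE_RATE * duration_s)
    xs = np.linspace(x_min, x_max, n)
    ys = np.array([f(x) for x in xs], dtype=float)
    finite = np.isfinite(ys)
    lo, hi = ys[finite].min(), ys[finite].max()
    freqs = np.where(finite, f_lo + (ys - lo) / (hi - lo + 1e-9) * (f_hi - f_lo), 0.0)
    phase = 2 * np.pi * np.cumsum(freqs) / SAMPLE_RATE
    return np.where(finite, np.sin(phase), 0.0)

# f(x) = (x^2 - 1) / (x - 1) has a removable discontinuity at x = 1;
# the hole is widened here so the silent gap is clearly audible.
signal = sonify(lambda x: (x * x - 1) / (x - 1) if abs(x - 1) > 0.05 else float("nan"),
                0.0, 2.0)
with wave.open("graph.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())
```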
{"title":"Making Math Graphs More Accessible in Remote Learning: Using Sonification to Introduce Discontinuity in Calculus","authors":"Keita Ohshiro, Amy Hurst, R. DuBois","doi":"10.1145/3441852.3476533","DOIUrl":"https://doi.org/10.1145/3441852.3476533","url":null,"abstract":"Math graphs need to be accessible to People with Visual Impairments (PVI). While tactile graphics are a common way for PVI to access math graphs, their use becomes complicated in remote learning. To make math graphs more accessible in remote education, we focused on sonification, the use of non-speech sound. In this study, we designed techniques of sonification of math graphs to introduce the concept of discontinuity in calculus to PVI. First, we conducted a remote interview with six participants to understand their experiences with math education using graphs. Based on these findings, we developed a series of sonifications of math graphs that we remotely evaluated with three participants from our initial interviews. Our findings reveal that sonification can intuitively convey simple patterns and trends in math graphs with a little practice, be useful to introduce the discontinuities, and be more effective with descriptions of the sound and graphs.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117263377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a Secured and Safe Online Social Media Design Framework for People with Intellectual Disability
Ya-Wen Chang, Laurianne Sitbon, L. Simpson
DOI: 10.1145/3441852.3476540
This paper aims to create a tangible design framework for practitioners to follow when designing an online social media platform for individuals with intellectual disability. Currently, legislation and best practice consider cyber security and safety for the general public, giving particular attention to the protection of children. However, despite support in health care, financial assistance, and education, individuals with intellectual disability are rarely considered when it comes to cybersafety. To achieve inclusivity, an integrative review was conducted to make connections between the disciplines of education, information technology, and law. The process was split into three phases: (i) understanding the challenges people with intellectual disability face, both when using a social media interface and when evaluating safety risks; (ii) identifying gaps and understanding the implications for persons with intellectual disability in legislation and design principles; and (iii) visualisation of data flow to model interactions. In conclusion, an inclusive framework is proposed for practitioners when designing online social media platforms for people with intellectual disability.
{"title":"Towards a Secured and Safe Online Social Media Design Framework for People with Intellectual Disability","authors":"Ya-Wen Chang, Laurianne Sitbon, L. Simpson","doi":"10.1145/3441852.3476540","DOIUrl":"https://doi.org/10.1145/3441852.3476540","url":null,"abstract":"This paper aims to create a tangible design framework for practitioners to follow when designing an online social media platform for individuals with intellectual disability. Currently, legislation and best practice consider cyber security and safety for the general public, giving particular attention to the protection of children. However, despite the support in health care, financial assistance, and education, individuals with intellectual disability are rarely considered when it comes to cybersafety. To achieve inclusivity, an integrative review was conducted to make connections between disciplines of education and information technology and law. The process was split into three phases: (i) understanding the challenges those with intellectual disability face, both when using a social media interface and when evaluating safety risks; (ii) identifying gaps and understanding the implications for persons with intellectual disability from legislative and design and design principles; and (iii) visualisation of data flow to model interactions. In conclusion, an inclusive framework is proposed for practitioners when designing online social media platforms for people with intellectual disability.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114784565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}