Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (October 2018)

Usability Testing - An Aphasia Perspective
Abi Roper, Ian Davey, Stephanie M. Wilson, Timothy Neate, J. Marshall, Brian Grellmann
This paper reports the experience of participating in usability testing from the perspective of a person with aphasia. We briefly report adaptations to classic usability testing to enable the participation of people with aphasia. These included the use of short, direct tasks and physical artefacts such as picture cards. Authors of the paper include Ian, a user with aphasia who participated in adapted usability testing, and Abi, a speech and language therapist researcher who facilitated sessions. Ian reports that these methods allowed him, as a person with aphasia, to engage with the usability testing process. We argue that such adaptations are essential in order to develop technologies that will be accessible to people with aphasia. This collaborative report provides a case for both how and why these adaptations can be made.
{"title":"Usability Testing - An Aphasia Perspective","authors":"Abi Roper, Ian Davey, Stephanie M. Wilson, Timothy Neate, J. Marshall, Brian Grellmann","doi":"10.1145/3234695.3241481","DOIUrl":"https://doi.org/10.1145/3234695.3241481","url":null,"abstract":"This paper reports the experience of participating in usability testing from the perspective of a person with aphasia. We briefly report adaptations to classic usability testing to enable the participation of people with aphasia. These included the use of short, direct tasks and physical artefacts such as picture cards. Authors of the paper include Ian, a user with aphasia who participated in adapted usability testing and Abi, a speech and language therapist researcher who facilitated sessions. Ian reports that these methods allowed him, as a person with aphasia, to engage with the usability testing process. We argue that such adaptations are essential in order to develop technologies which will be accessible to people with aphasia. This collaborative report provides a case for both how and why these adaptations can be made.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"3 Dermatol Sect 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133798342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We conduct the first large-scale analysis of the accessibility of mobile apps, examining what unique insights this can provide into the state of mobile app accessibility. We analyzed 5,753 free Android apps for label-based accessibility barriers in three classes of image-based buttons: Clickable Images, Image Buttons, and Floating Action Buttons. An epidemiology-inspired framework was used to structure the investigation. The population of free Android apps was assessed for label-based inaccessible button diseases. Three determinants of the disease were considered: missing labels, duplicate labels, and uninformative labels. The prevalence, or frequency of occurrences of barriers, was examined in apps and in classes of image-based buttons. In the app analysis, 35.9% of analyzed apps had 90% or more of their assessed image-based buttons labeled, 45.9% had less than 10% of assessed image-based buttons labeled, and the remaining apps were relatively uniformly distributed along the proportion of elements that were labeled. In the class analysis, 92.0% of Floating Action Buttons were found to have missing labels, compared to 54.7% of Image Buttons and 86.3% of Clickable Images. We discuss how these accessibility barriers are addressed in existing treatments, including accessibility development guidelines.
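As an illustration of the kind of check involved, the sketch below flags clickable image-based elements with empty content descriptions in a uiautomator-style view-hierarchy dump. This is a minimal sketch under stated assumptions (node attribute names as produced by uiautomator; an illustrative, non-exhaustive class list), not the authors' analysis pipeline.

```python
# Illustrative sketch (not the authors' pipeline): scan a uiautomator-style
# XML view-hierarchy dump for image-based buttons that a screen reader cannot
# announce, i.e. clickable image elements with an empty content description.
import xml.etree.ElementTree as ET

# Assumed, non-exhaustive set of image-based button classes.
IMAGE_BUTTON_CLASSES = {
    "android.widget.ImageButton",
    "android.widget.ImageView",  # clickable images
    "android.support.design.widget.FloatingActionButton",
}

def missing_label_buttons(dump_path):
    """Return the bounds of clickable image-based buttons with no label."""
    tree = ET.parse(dump_path)
    barriers = []
    for node in tree.iter("node"):
        cls = node.get("class", "")
        clickable = node.get("clickable") == "true"
        label = (node.get("content-desc") or "").strip()
        if cls in IMAGE_BUTTON_CLASSES and clickable and not label:
            barriers.append(node.get("bounds"))
    return barriers

# Example: barriers = missing_label_buttons("hierarchy.xml")
```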
{"title":"Examining Image-Based Button Labeling for Accessibility in Android Apps through Large-Scale Analysis","authors":"A. S. Ross, Xiaoyi Zhang, J. Fogarty, J. Wobbrock","doi":"10.1145/3234695.3236364","DOIUrl":"https://doi.org/10.1145/3234695.3236364","url":null,"abstract":"We conduct the first large-scale analysis of the accessibility of mobile apps, examining what unique insights this can provide into the state of mobile app accessibility. We analyzed 5,753 free Android apps for label-based accessibility barriers in three classes of image-based buttons: Clickable Images, Image Buttons, and Floating Action Buttons. An epidemiology-inspired framework was used to structure the investigation. The population of free Android apps was assessed for label-based inaccessible button diseases. Three determinants of the disease were considered: missing labels, duplicate labels, and uninformative labels. The prevalence, or frequency of occurrences of barriers, was examined in apps and in classes of image-based buttons. In the app analysis, 35.9% of analyzed apps had 90% or more of their assessed image-based buttons labeled, 45.9% had less than 10% of assessed image-based buttons labeled, and the remaining apps were relatively uniformly distributed along the proportion of elements that were labeled. In the class analysis, 92.0% of Floating Action Buttons were found to have missing labels, compared to 54.7% of Image Buttons and 86.3% of Clickable Images. We discuss how these accessibility barriers are addressed in existing treatments, including accessibility development guidelines.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"29 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114045049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beatrice Aruanno, F. Garzotto, Emanuele Torelli, F. Vona
Our research explores the potential of wearable Mixed Reality (MR) for people with Neuro-Developmental Disorders (NDD). The paper presents HoloLearn, an MR application designed in cooperation with NDD experts and implemented using HoloLens technology. The goal of HoloLearn is to help people with NDD learn how to perform simple everyday tasks in domestic environments and improve their autonomy. An original feature of the system is a virtual assistant devoted to capturing the user's attention and giving them hints during task execution in the MR environment. We performed an exploratory study involving 20 subjects with NDD to investigate the acceptability and usability of HoloLearn and its potential as a therapeutic tool. HoloLearn was well accepted by the participants, and the activities in the MR space were perceived as enjoyable, despite some usability problems associated with the HoloLens interaction mechanisms. More extensive and long-term empirical research is needed to validate these early results, but our study suggests that HoloLearn could be adopted as a complement to more traditional interventions. Our work, and the lessons we learned, may help designers and developers of future MR applications for people with NDD and for other people with similar needs.
{"title":"HoloLearn","authors":"Beatrice Aruanno, F. Garzotto, Emanuele Torelli, F. Vona","doi":"10.1145/3234695.3236351","DOIUrl":"https://doi.org/10.1145/3234695.3236351","url":null,"abstract":"Our research explores the potential of wearable Mixed Reality (MR) for people with Neuro-Developmental Disorders (NDD). The paper presents HoloLearn, a MR application designed in cooperation with NDD experts and implemented using HoloLens technology. The goal of HoloLearn is to help people with NDD learn how to perform simple everyday tasks in domestic environments and improve autonomy. An original feature of the system is the presence of a virtual assistant devoted to capture the user's attention and to give her/him hints during task execution in the MR environment. We performed an exploratory study involving 20 subjects with NDD to investigate the acceptability and usability of HoloLearn and its potential as a therapeutic tool. HoloLearn was well-accepted by the participants and the activities in the MR space were perceived as enjoyable, despite some usability problems associated to HoloLens interaction mechanism. More extensive and long term empirical research is needed to validate these early results, but our study suggests that HoloLearn could be adopted as a complement to more traditional interventions. Our work, and the lessons we learned, may help designers and developers of future MR applications devoted to people with NDD and to other people with similar needs.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114083413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The usage of smartphone-based virtual assistants (e.g., Siri or Google Assistant) is growing, and their spread generally has a positive impact on device accessibility, e.g., for people with disabilities. However, people with dysarthria or other speech impairments may be unable to use these virtual assistants with proficiency. This paper investigates to what extent people with ALS-induced dysarthria can be understood by, and get consistent answers from, three widely used smartphone-based assistants, namely Siri, Google Assistant, and Cortana. We focus on the recognition of Italian dysarthric speech, studying the behavior of the virtual assistants with this specific population, for which no relevant studies are available. We collected and recorded suitable speech samples from people with dysarthria at a dedicated center of the Molinette hospital in Turin, Italy. Starting from those recordings, the differences between the assistants, in terms of speech recognition and consistency of answers, are investigated and discussed. Results highlight different performance among the virtual assistants. For speech recognition, Google Assistant is the most promising, with a word error rate of around 25% per sentence. For consistency of answers, Siri and Google Assistant provide coherent answers around 60% of the time.
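The reported ~25% figure refers to the standard word error rate metric. A minimal sketch of that computation follows (Levenshtein distance over words, normalized by reference length); the example sentence is invented for illustration, not drawn from the study's samples.

```python
# Minimal word error rate (WER) sketch: Levenshtein edit distance over words,
# normalized by reference length. Standard metric; not the authors' tooling.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Invented Italian example: one substituted word out of five -> WER 0.2
# wer("accendi la luce in cucina", "accendi la voce in cucina") -> 0.2
```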
{"title":"Assessing Virtual Assistant Capabilities with Italian Dysarthric Speech","authors":"Fabio Ballati, Fulvio Corno, Luigi De Russis","doi":"10.1145/3234695.3236354","DOIUrl":"https://doi.org/10.1145/3234695.3236354","url":null,"abstract":"The usage of smartphone-based virtual assistants (e.g., Siri or Google Assistant) is growing, and their spread has generally a positive impact on device accessibility, e.g., for people with disabilities. However, people with dysarthria or other speech impairments may be unable to use these virtual assistants with proficiency. This paper investigates to which extent people with ALS-induced dysarthria can be understood and get consistent answers by three widely used smartphone-based assistants, namely Siri, Google Assistant, and Cortana. We focus on the recognition of Italian dysarthric speech, to study the behavior of the virtual assistants with this specific population for which no relevant studies are available. We collected and recorded suitable speech samples from people with dysarthria in a dedicated center of the Molinette hospital, in Turin, Italy. Starting from those recordings, the differences between such assistants, in terms of speech recognition and consistency in answer, are investigated and discussed. Results highlight different performance among the virtual assistants. For speech recognition, Google Assistant is the most promising, with around 25% of word error rate per sentence. Consistency in answer, instead, sees Siri and Google Assistant provide coherent answers around 60% of times.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122354800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuhang Zhao, Elizabeth Kupferstein, Doron Tal, Shiri Azenkot
Walking in environments with stairs and curbs is potentially dangerous for people with low vision. We sought to understand what challenges low vision people face and what strategies and tools they use when navigating such surface level changes. Using contextual inquiry, we interviewed and observed 14 low vision participants as they completed navigation tasks in two buildings and through two city blocks. The tasks involved walking indoors and outdoors and traversing four staircases. We found that surface level changes were a source of uncertainty and even fear for all participants. Aside from the white cane, which many participants did not want to use, participants did not use technology in the study. Participants mostly relied on their vision, which was exhausting and sometimes deceptive. Our findings highlight the need for systems that support surface level changes and other depth-perception tasks; such systems should account for low vision people's experiences, which are distinct from those of blind people, consider their sensitivity to different lighting conditions, and leverage visual enhancements.
{"title":"\"It Looks Beautiful but Scary\": How Low Vision People Navigate Stairs and Other Surface Level Changes","authors":"Yuhang Zhao, Elizabeth Kupferstein, Doron Tal, Shiri Azenkot","doi":"10.1145/3234695.3236359","DOIUrl":"https://doi.org/10.1145/3234695.3236359","url":null,"abstract":"Walking in environments with stairs and curbs is potentially dangerous for people with low vision. We sought to understand what challenges low vision people face and what strategies and tools they use when navigating such surface level changes. Using contextual inquiry, we interviewed and observed 14 low vision participants as they completed navigation tasks in two buildings and through two city blocks. The tasks involved walking in- and outdoors, across four staircases and two city blocks. We found that surface level changes were a source of uncertainty and even fear for all participants. Besides the white cane that many participants did not want to use, participants did not use technology in the study. Participants mostly used their vision, which was exhausting and sometimes deceptive. Our findings highlight the need for systems that support surface level changes and other depth-perception tasks; they should consider low vision people's distinct experiences from blind people, their sensitivity to different lighting conditions, and leverage visual enhancements.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121147613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditional television remote controls present frequent challenges to older adults. These challenges arise from a lack of feedback and from poorly designed features such as labeling, size, spatial proximity, and physical feel. This paper describes the design of an accessible TV remote control (Potmote), which uses potentiometers with an Arduino to enhance tactile feedback and ease channel selection through ergonomic controls. An experimental study was conducted with 15 older adults to understand how to design a system that would allow them to change channel numbers and volume levels. The results of the experiment showed positive feedback from the subjects.
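The firmware itself is not included in the abstract, but the core mapping it describes, turning a potentiometer's analog reading into a discrete channel, is simple quantization. A hedged sketch of that arithmetic follows (the 10-bit ADC range matches Arduino analog pins; the channel count is an assumption):

```python
# Illustrative quantization from a 10-bit potentiometer reading (0-1023,
# as on Arduino analog pins) to a discrete channel number. The real Potmote
# firmware is not published here; the channel count is an assumption.
def reading_to_channel(reading: int, num_channels: int = 20) -> int:
    reading = max(0, min(1023, reading))  # clamp to the ADC range
    step = 1024 / num_channels            # width of one channel band
    return min(int(reading / step) + 1, num_channels)  # channels are 1-based

# reading_to_channel(0) -> 1, reading_to_channel(1023) -> 20
```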
{"title":"Potmote: A TV Remote Control for Older Adults","authors":"Siddharth Mehrotra","doi":"10.1145/3234695.3240989","DOIUrl":"https://doi.org/10.1145/3234695.3240989","url":null,"abstract":"Traditional television remote control presents frequent challenges to older adults. These challenges arise due to lack of feedback and poor design features such as labeling, size, spatial proximity, physical feel, etc. This paper describes the design of an accessible TV remote control (Potmote) created by employing potentiometers with Arduino to enhance tactile feedback and ease of channel selection with ergonomic controls. An experimental study was conducted with 15 older adults to understand how to design a system that would allow them to change channel numbers and volume levels. The result of experiment have shown positive feedback by the subjects.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116384726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People with vision impairment are concerned about entering passwords in public because accessibility features (e.g., screen readers and screen magnifiers) make their passwords more vulnerable to attackers. This project aims to address this accessibility issue with bend passwords, which are harder to observe than PINs. Bend passwords are a recently proposed method for user authentication that uses a combination of predefined bend and fold gestures performed on a flexible device. Our inexpensive prototype, called BendyPass, is made of silicone, with flex sensors to capture and verify bend passwords, a vibration motor for haptic feedback on gesture input, and a button to delete the last gesture or confirm the password. Bend passwords entered on BendyPass provide a tactile method for user authentication, designed to reduce vulnerability to attackers and help people with vision impairment better protect their personal information.
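One plausible way to verify such a gesture sequence, sketched below under assumptions (gesture symbols like "TL-UP" are hypothetical; the abstract does not describe BendyPass's actual matching scheme), is to discretize each bend or fold into a symbol and compare a salted hash of the entered sequence against the enrolled one:

```python
# Sketch of bend-password verification under stated assumptions: each gesture
# is discretized to a symbol (e.g. "TL-UP" = top-left corner bent up; names
# are hypothetical), and the enrolled password is stored as a salted hash
# rather than in plain text.
import hashlib, hmac, os

def enroll(gestures: list[str]) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", "|".join(gestures).encode(), salt, 100_000)
    return salt, digest

def verify(gestures: list[str], salt: bytes, digest: bytes) -> bool:
    attempt = hashlib.pbkdf2_hmac(
        "sha256", "|".join(gestures).encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)  # constant-time comparison

# salt, digest = enroll(["TL-UP", "FOLD-RIGHT", "TL-UP"])
# verify(["TL-UP", "FOLD-RIGHT", "TL-UP"], salt, digest) -> True
```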
{"title":"Bend Passwords on BendyPass: A User Authentication Method for People with Vision Impairment","authors":"Daniella Briotto Faustino, A. Girouard","doi":"10.1145/3234695.3241032","DOIUrl":"https://doi.org/10.1145/3234695.3241032","url":null,"abstract":"People with vision impairment are concerned about entering passwords in public as accessibility features (e.g. screen readers and screen magnifiers) make their passwords more vulnerable to attackers. This project aims to use bend passwords to solve this accessibility issue, as they are harder to observe than PINs. Bend passwords are a recently proposed method for user authentication that uses a combination of predefined bend and fold gestures performed on a flexible device. Our inexpensive prototype called BendyPass is made of silicone, with flex sensors able to capture and verify bend passwords, a vibration motor for gesture input haptic feedback, and a button to delete the last gesture or confirm the password. Bend passwords entered on BendyPass provide a tactile method for user authentication, designed to reduce the vulnerability to attackers and help people with vision impairment to better protect their personal information.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129832682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anthony Li, Manaswi Saha, Anupam Gupta, Jon E. Froehlich
Walkability indices such as walkscore.com model the proximity and density of walkable destinations within a neighborhood. While these metrics have gained widespread use (e.g., they are incorporated into real-estate tools), they do not integrate accessibility-related features such as sidewalk conditions or curb ramps, thereby excluding a significant portion of the population. In this poster paper, we explore the initial design and implementation of neighborhood accessibility models and visualizations for people with mobility impairments. We overcome previous data availability challenges by using the Project Sidewalk API, which provides access to 255,000+ labels about the accessibility and location of DC sidewalks.
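As a rough illustration of what such a model could look like (an assumption for illustration, not the authors' published model), one might penalize labeled barriers near a location using a per-user severity weight and a distance decay, given label data already fetched from the API:

```python
# Hypothetical neighborhood access score: barriers near a location reduce the
# score, weighted by user-specific severity and exponential distance decay.
# Illustrative only; not the model published by the authors.
import math

def access_score(location, barriers, weights, decay_m=400.0):
    """location: (x, y) in meters; barriers: [(x, y, kind), ...];
    weights: {kind: severity in [0, 1]} chosen per user profile."""
    penalty = 0.0
    for bx, by, kind in barriers:
        dist = math.hypot(bx - location[0], by - location[1])
        penalty += weights.get(kind, 0.0) * math.exp(-dist / decay_m)
    return 1.0 / (1.0 + penalty)  # 1.0 = no nearby weighted barriers

# e.g. a wheelchair-user profile might use
# weights = {"missing-curb-ramp": 1.0, "surface-problem": 0.6}
```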
{"title":"Interactively Modeling and Visualizing Neighborhood Accessibility at Scale: An Initial Study of Washington DC","authors":"Anthony Li, Manaswi Saha, Anupam Gupta, Jon E. Froehlich","doi":"10.1145/3234695.3241000","DOIUrl":"https://doi.org/10.1145/3234695.3241000","url":null,"abstract":"Walkability indices such as walkscore.com model the proximity and density of walkable destinations within a neighborhood. While these metrics have gained widespread use (e.g., incorporated into real-estate tools), they do not integrate accessibility-related features such as sidewalk conditions or curb ramps-thereby excluding a significant portion of the population. In this poster paper, we explore the initial design and implementation of neighborhood accessibility models and visualizations for people with mobility impairments. We are able to overcome previous data availability challenges by using the Project Sidewalk API, which provides access to 255,000+ labels about the accessibility and location of DC sidewalks.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122744360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigation assistive technologies aim to improve the mobility of blind or visually impaired people. In particular, turn-by-turn navigation assistants provide sequential instructions to enable autonomous guidance towards a destination. A problem frequently addressed in the literature is to obtain accurate position and orientation of the user during such guidance. An orthogonal challenge, often overlooked in the literature, is how precisely navigation instructions are followed by users. In particular, imprecisions in following rotation instructions lead to rotation errors that can significantly affect navigation. Indeed, a relatively small error during a turn is amplified by the following frontal movement and can lead the user towards incorrect or dangerous paths. In this contribution, we study rotation errors and their effect on turn-by-turn guidance for individuals with visual impairments. We analyze a dataset of indoor trajectories of 11 blind participants guided along three routes through a multi-story shopping mall using NavCog, a turn-by-turn smartphone navigation assistant. We find that participants extend rotations by 17° on average. The error is not proportional to the expected rotation; instead, it is accentuated for "slight turns" (22.5°-60°), while "ample turns" (60°-120°) are consistently approximated to 90°. We generalize our findings as design considerations for engineering navigation assistance in real-world scenarios.
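For concreteness, here is a minimal sketch of how a signed rotation error can be computed from compass headings, wrapping angles to [-180, 180) so over- and under-rotation get opposite signs (illustrative only; the study's trajectory analysis is more involved):

```python
# Signed rotation error from headings in degrees, wrapped to [-180, 180).
# Positive = the user over-rotated relative to the instructed turn.
# Illustrative sketch; not the study's actual trajectory-processing code.
def rotation_error(expected_turn: float,
                   heading_before: float,
                   heading_after: float) -> float:
    actual_turn = (heading_after - heading_before + 180.0) % 360.0 - 180.0
    error = actual_turn - expected_turn
    return (error + 180.0) % 360.0 - 180.0  # wrap again to [-180, 180)

# A user asked to turn 45 degrees who actually turns 62 degrees:
# rotation_error(45.0, 10.0, 72.0) -> 17.0 (matches the 17-degree average)
```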
{"title":"Turn Right: Analysis of Rotation Errors in Turn-by-Turn Navigation for Individuals with Visual Impairments","authors":"D. Ahmetovic, U. Oh, S. Mascetti, C. Asakawa","doi":"10.1145/3234695.3236363","DOIUrl":"https://doi.org/10.1145/3234695.3236363","url":null,"abstract":"Navigation assistive technologies aim to improve the mobility of blind or visually impaired people. In particular, turn-by-turn navigation assistants provide sequential instructions to enable autonomous guidance towards a destination. A problem frequently addressed in the literature is to obtain accurate position and orientation of the user during such guidance. An orthogonal challenge, often overlooked in the literature, is how precisely navigation instructions are followed by users. In particular, imprecisions in following rotation instructions lead to rotation errors that can significantly affect navigation. Indeed, a relatively small error during a turn is amplified by the following frontal movement and can lead the user towards incorrect or dangerous paths. In this contribution, we study rotation errors and their effect on turn-by-turn guidance for individuals with visual impairments. We analyze a dataset of indoor trajectories of 11 blind participants guided along three routes through a multi-story shopping mall using NavCog, a turn-by-turn smartphone navigation assistant. We find that participants extend rotations by 17º on average. The error is not proportional to the expected rotation; instead, it is accentuated for \"slight turns\" (22.5º-60º), while \"ample turns\" (60º-120º) are consistently approximated to 90º. We generalize our findings as design considerations for engineering navigation assistance in real-world scenarios.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122877804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sumita Sharma, K. Achary, H. Kaur, Juhani Linna, M. Turunen, Blessin Varkey, Jaakko Hakulinen, Sanidhya Daeeyya
In this paper, we build a case for incorporating the socio-technical aspirations of different stakeholders, e.g., parents, caregivers, and therapists, to motivate technology acceptance and adoption for children with autism. We base this on findings from two studies at a special school in New Delhi. First, with six children with autism, their parents, and their therapists, we explored whether fitness bands motivate children with autism in India to increase their physical activity. Second, we conducted interviews with five parents and specialists at the same school to understand their expectations of technology and their current usage of it. Previous work defines a culture-based framework for assistive technology design with three dimensions: lifestyle, socio-technical infrastructure, and monetary and informational resources. To this framework we propose adding a fourth dimension: socio-technical aspirations. We discuss the implications of this proposed fourth dimension for the existing framework.
{"title":"'Wow! You're Wearing a Fitbit, You're a Young Boy Now!\": Socio-Technical Aspirations for Children with Autism in India","authors":"Sumita Sharma, K. Achary, H. Kaur, Juhani Linna, M. Turunen, Blessin Varkey, Jaakko Hakulinen, Sanidhya Daeeyya","doi":"10.1145/3234695.3239329","DOIUrl":"https://doi.org/10.1145/3234695.3239329","url":null,"abstract":"In this paper, we build a case for incorporating socio-technical aspirations of different stakeholders, e.g. parents, care-givers, and therapists, to motivate technology acceptance and adoption for children with autism. We base this on findings from two studies at a special school in New Delhi. First, with six children with autism, their parents and therapists we explored whether fitness bands motivate children with autism in India to increase their physical activity. Second, with five parents and specialists at the same school, we conducted interviews to understand their expectations from and current usage of technology. Previous work defines a culture-based framework for assistive technology design with three dimensions: lifestyle, socio-technical infrastructure, and monetary and informational resources. To this framework we propose a fourth dimension of socio-technical aspirations. We discuss the implications of the proposed fourth dimension to the existing framework.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124253757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}