This paper describes the design, implementation, and pilot evaluation of an interface to support embodied musical interaction for children with Autism Spectrum Conditions (ASC) in the context of music therapy sessions. Previous research suggests that music and movement therapies are powerful tools for supporting children with autism in developing communication, expression, and motor skills. OSMoSIS (Observation of Social Motor Synchrony with an Interactive System) is an interactive musical system that tracks body movements and transforms them into sounds using the Microsoft Kinect motion capture system, in the context of an interactive game. It is designed so that, regardless of motor abilities, children can generate sounds by moving in the environment, either freely or guided by a facilitator. OSMoSIS was inspired by the author's experience as a music therapist and supports observation of Social Motor Synchrony, allowing facilitators and researchers to record and investigate this aspect of therapy sessions. In preliminary testing with 11 children with autism (aged 5–11 years), we observed that our design actively connects children, who displayed a notable increase in engagement and interaction when the system was used.
{"title":"Designing Embodied Musical Interaction for Children with Autism","authors":"Grazia Ragone","doi":"10.1145/3373625.3417077","DOIUrl":"https://doi.org/10.1145/3373625.3417077","url":null,"abstract":"This paper describes the design, implementation, and pilot evaluation of an interface to support embodied musical interaction for children with Autism Spectrum Conditions (ASC), in the context of music therapy sessions. Previous research suggests music and movement therapies are powerful tools for supporting children with autism in their development of communication, expression, and motor skills. OSMoSIS (Observation of Social Motor Synchrony with an Interactive System) is an interactive musical system which tracks body movements and transforms them into sounds using the Microsoft Kinect motion capture system. It is designed so that, regardless of motor abilities, children can generate sounds by moving in the environment either freely or guided by a facilitator. OSMoSIS was inspired by the author's experiences as a music therapist and supports observation of Social Motor Synchrony to allow facilitators and researchers to record and investigate this aspect of the therapy sessions. It converts movements into sounds using Microsoft Kinect body tracking, in the context of an interactive game. From our preliminary testing with 11 children with autism (aged 5 – 11 years old), we observed that our design actively connects children, who displayed a notable increase in engagement and interaction when the system was used.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123338802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The advent of digital audio workstations and other digital audio tools has brought a critical shift in the audio industry, empowering amateur and professional creators with the means to produce high-quality audio content. Yet we know little about the accessibility of widely used audio production tools for people with vision impairments. Through interviews with 18 audio professionals and hobbyists with vision impairments, we find that accessible audio production involves: piecing together accessible and efficient workflows through a combination of mainstream and custom tools; achieving professional competency through a steep learning curve in which domain knowledge and accessibility are inseparable; and facilitating learning and creating access by engaging in online communities of visually impaired audio enthusiasts. We discuss the deep entanglement between accessibility and professional competency and conclude with design considerations to inform the future development of accessible audio production tools.
{"title":"Understanding Audio Production Practices of People with Vision Impairments","authors":"Abir Saha, Anne Marie Piper","doi":"10.1145/3373625.3416993","DOIUrl":"https://doi.org/10.1145/3373625.3416993","url":null,"abstract":"The advent of digital audio workstations and other digital audio tools has brought a critical shift in the audio industry by empowering amateur and professional audio content creators with the necessary means to produce high quality audio content. Yet, we know little about the accessibility of widely used audio production tools for people with vision impairments. Through interviews with 18 audio professionals and hobbyists with vision impairments, we find that accessible audio production involves: piecing together accessible and efficient workflows through a combination of mainstream and custom tools; achieving professional competency through a steep learning curve in which domain knowledge and accessibility are inseparable; and facilitating learning and creating access by engaging in online communities of visually impaired audio enthusiasts. We discuss the deep entanglement between accessibility and professional competency and conclude with design considerations to inform future development of accessible audio production tools.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126479822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As mobile applications continue to offer more features, the complexity of mobile interfaces can become challenging for older adults. Owing to small screens and frequent updates that modify the visual layout of menus and buttons, older adults can find it difficult to locate a function on a mobile interface quickly, even when familiar with the application. To address this issue, we present a system that helps older adults quickly locate an on-screen feature on a mobile interface using speech queries. Our system allows users to ask for a function related to the current mobile screen using voice input. When that function is available, it provides visual guidance for users to engage with the pertinent user interface (UI) widget. The labels and locations of all UI components on the current screen are acquired via Android's Assist API. We discuss four scenarios of use.
{"title":"Supporting Older Adults in Locating Mobile Interface Features with Voice Input","authors":"Ja Eun Yu, Debaleena Chattopadhyay","doi":"10.1145/3373625.3418044","DOIUrl":"https://doi.org/10.1145/3373625.3418044","url":null,"abstract":"As mobile applications continue to offer more features, tackling the complexity of mobile interfaces can become challenging for older adults. Owing to a small screen and frequent updates that modify the visual layouts of menus and buttons, older adults can find it challenging to locate a function on a mobile interface quickly—even when familiar with the application. To address this issue, we present a system that supports older adults to quickly locate an on-screen feature on a mobile interface using speech queries. Our system allows users to ask for a function related to the current mobile screen using voice input. When that function is available, it provides visual guidance for users to engage with the pertinent user interface (UI) widget. The label and location of all UI components on the current screen are acquired via the Android’s Assist API. We discuss four scenarios of use.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"430 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126745759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Open Educational Resources (OERs) are becoming a significant source of learning, widely used for various educational purposes and levels. Learners have diverse backgrounds and needs, especially learners with accessibility requirements. Persons with disabilities have significantly lower employment rates, partly due to a lack of access to education and to vocational rehabilitation and training. It is therefore not surprising that providing high-quality OERs that facilitate self-development towards specific jobs and skills on the labour market, in light of the particular preferences of learners with disabilities, is difficult. In this paper, we introduce a personalised OER recommender system that considers the skills, occupations, and accessibility properties of learners to retrieve the most adequate, high-quality OERs. This is done by: 1) describing the profile of learners with disabilities, 2) collecting and analysing more than 1,500 OERs, 3) filtering OERs based on their accessibility features and predicted quality, and 4) providing personalised OER recommendations for learners according to their accessibility needs. As a result, the OERs retrieved by our method satisfied more accessibility checks than other OERs. Moreover, we evaluated our results with five experts in educating people with visual and cognitive impairments. The evaluation showed that our recommendations are potentially helpful for learners with accessibility needs.
{"title":"An OER Recommender System Supporting Accessibility Requirements","authors":"Mirette Elias, M. Tavakoli, S. Lohmann, G. Kismihók, S. Auer","doi":"10.1145/3373625.3418021","DOIUrl":"https://doi.org/10.1145/3373625.3418021","url":null,"abstract":"Open Educational Resources are becoming a significant source of learning that are widely used for various educational purposes and levels. Learners have diverse backgrounds and needs, especially when it comes to learners with accessibility requirements. Persons with disabilities have significantly lower employment rates partly due to the lack of access to education and vocational rehabilitation and training. It is not surprising therefore, that providing high quality OERs that facilitate the self-development towards specific jobs and skills on the labor market in the light of special preferences of learners with disabilities is difficult. In this paper, we introduce a personalized OER recommeder system that considers skills, occupations, and accessibility properties of learners to retrieve the most adequate and high-quality OERs. This is done by: 1) describing the profile of learners with disabilities, 2) collecting and analysing more than 1,500 OERs, 3) filtering OERs based on their accessibility features and predicted quality, and 4) providing personalised OER recommendations for learners according to their accessibility needs. As a result, the OERs retrieved by our method proved to satisfy more accessibility checks than other OERs. Moreover, we evaluated our results with five experts in educating people with visual and cognitive impairments. The evaluation showed that our recommendations are potentially helpful for learners with accessibility needs.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127037960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The proliferation of communication software based on pictogram grids with voice output has made this type of tool much more widely available. To date, however, there is no standard or systematic evaluation that makes it possible to objectively measure the suitability of these tools for a given language. There are also no methods to help designers improve the organisation of words into grids to optimise sentence production. This paper is a first step in this direction. We represented the Proloquo2Go® Crescendo vocabulary for a given grid size as a graph and computed the production cost of frequent sentences in French. This cost depends on the physical distance between the pictograms on a given page and on navigation between pages. We discuss the interest of this approach for the evaluation as well as the design of communicative pictogram grids.
{"title":"Evaluation of the acceptability and usability of Augmentative and Alternative Communication (ACC) tools: the example of Pictogram grid communication systems with voice output.","authors":"Lucie Chasseur, Marion Dohen, B. Lecouteux, Sébastien Riou, Amélie Rochet-Capellan, D. Schwab","doi":"10.1145/3373625.3418018","DOIUrl":"https://doi.org/10.1145/3373625.3418018","url":null,"abstract":"The multiplication of communication software based on pictogram grids with voice output has led to the democratisation of this type of tool. To date, however, there is no standard, nor systematic evaluation that makes it possible to objectively measure the suitability of these tools for a given language. There are also no methods for designers to improve the organisation of words into grids to optimise sentence production. This paper is a first step in this direction. We represented the Proloquo2Go® Crescendo vocabulary for a given grid size as a graph and computed the production cost of frequent sentences in French. This cost depends on the physical distance between the pictograms on a given page and navigation between pages. We discuss the interest of this approach for the evaluation as well as the conception of communicative pictogram grids.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128418415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented Reality (AR) technology creates new immersive experiences in entertainment, games, education, retail, and social media. AR content is often primarily visual, and the mix of virtual and real-world content makes it challenging to enable non-visual access. In this paper, we identify common constituent tasks in AR by analyzing existing mobile AR applications for iOS, and characterize the design space of tasks that require accessible alternatives. For each of the major task categories, we create prototype accessible alternatives, which we evaluate in a study with 10 blind participants to explore their perceptions of accessible AR. Our study demonstrates that these prototypes make AR usable for blind users and reveals a number of insights for moving forward. We believe our work sets forth not only exemplars for developers to create accessible AR applications, but also a roadmap for future research to make AR comprehensively accessible.
{"title":"Making Mobile Augmented Reality Applications Accessible","authors":"Jaylin Herskovitz, Jason Wu, Samuel White, Amy Pavel, G. Reyes, Anhong Guo, Jeffrey P. Bigham","doi":"10.1145/3373625.3417006","DOIUrl":"https://doi.org/10.1145/3373625.3417006","url":null,"abstract":"Augmented Reality (AR) technology creates new immersive experiences in entertainment, games, education, retail, and social media. AR content is often primarily visual and it is challenging to enable access to it non-visually due to the mix of virtual and real-world content. In this paper, we identify common constituent tasks in AR by analyzing existing mobile AR applications for iOS, and characterize the design space of tasks that require accessible alternatives. For each of the major task categories, we create prototype accessible alternatives that we evaluate in a study with 10 blind participants to explore their perceptions of accessible AR. Our study demonstrates that these prototypes make AR possible to use for blind users and reveals a number of insights to move forward. We believe our work sets forth not only exemplars for developers to create accessible AR applications, but also a roadmap for future research to make AR comprehensively accessible.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"15 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133041470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual creative forms, such as painting and sculpture, are a common expressive outlet and offer an alternative to language-based expression. They are particularly beneficial for those who find language challenging due to an impairment, for example, people with aphasia. However, being creative with digital platforms can be challenging because of the language-based barriers they impose. In this work, we describe an accessible tool called Inker, which enables people with aphasia to access digital creativity, building on physical artistic work they have previously created.
{"title":"Painting a Picture of Accessible Digital Art","authors":"Timothy Neate, Abi Roper, Stephanie M. Wilson","doi":"10.1145/3373625.3418019","DOIUrl":"https://doi.org/10.1145/3373625.3418019","url":null,"abstract":"Visual creative forms, such as painting and sculpture, are a common expressive outlet and offer an alternative to language-based expression. They are particularly beneficial for those who find language challenging due to an impairment – for example, people with aphasia. However, being creative with digital platforms can be challenging due to the language-based barriers they impose. In this work, we describe an accessible tool called Inker. Inker supports people with aphasia in accessing digital creativity, supported by previously created physical artistic work.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123636224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creativity and humour allow people to be expressive and to address topics they might otherwise avoid or find deeply uncomfortable. One way to express these sentiments is via comics. Comics have a highly visual format with relatively little language, and therefore offer a promising opportunity for people who experience challenges with language to express creativity and humour. Most comic tools, however, are not accessible to people with language impairments. In this paper we describe Comic Spin, a comic app designed for people with aphasia. Comic Spin builds upon the literature on supporting creativity by constraining the creative space. We report both the design process and the results of a creative workshop in which people with aphasia used Comic Spin. Participants were not only successful in using the app, but also created a range of narrative, humorous and subversive comics.
{"title":"Accessible Creativity with a Comic Spin","authors":"Carla Tamburro, Timothy Neate, Abi Roper, Stephanie M. Wilson","doi":"10.1145/3373625.3417012","DOIUrl":"https://doi.org/10.1145/3373625.3417012","url":null,"abstract":"Creativity and humour allow people to be expressive and to address topics which they might otherwise avoid or find deeply uncomfortable. One such way to express these sentiments is via comics. Comics have a highly-visual format with relatively little language. They therefore offer a promising opportunity for people who experience challenges with language to express creativity and humour. Most comic tools, however, are not accessible to people with language impairments. In this paper we describe Comic Spin, a comic app designed for people with aphasia. Comic Spin builds upon the literature on supporting creativity by constraining the creative space. We report both the design process and the results of a creative workshop where people with aphasia used Comic Spin. Participants were not only successful in using the app, but were able to create a range of narrative, humorous and subversive comics.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114580470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Falls are among the leading causes of accidental or unintentional injury deaths worldwide. This paper therefore proposes a reliable fall detection algorithm and a mobile cloud collaboration system for fall detection. The algorithm is an ensemble learning method based on decision trees, named Fall-detection Ensemble Decision Tree (FEDT). The mobile cloud collaboration system is composed of three stages: 1) mobile stage: a lightweight threshold method is used to filter out activities of daily living (ADLs); 2) collaboration stage: the TCP protocol is used to transmit data to the cloud, where features are extracted; 3) cloud stage: the model trained with FEDT is deployed to produce the final detection result from the extracted features. Experiments show that the proposed FEDT outperforms alternative methods by 1–3% in both sensitivity and specificity and is more robust across different devices.
{"title":"A Mobile Cloud Collaboration Fall Detection System Based on Ensemble Learning","authors":"Tong Wu, Yang Gu, Yiqiang Chen, Yunlong Xiao, Jiwei Wang","doi":"10.1145/3373625.3417010","DOIUrl":"https://doi.org/10.1145/3373625.3417010","url":null,"abstract":"Falls are one of the major causes of accidental or unintentional injury death worldwide. Therefore, this paper proposes a reliable fall detection algorithm and a mobile cloud collaboration system for fall detection. The algorithm is an ensemble learning method based on decision tree, named Fall-detection Ensemble Decision Tree (FEDT). The mobile cloud collaboration system is composed of three stages: 1) mobile stage: a light-weighted threshold method is used to filter out activities of daily livings (ADLs), 2) collaboration stage: TCP protocol is used to transmit data to cloud and meanwhile features are extracted in the cloud, 3) cloud stage: the model trained by FEDT is deployed to give the final detection result with the extracted features. Experiments show that the proposed FEDT outperforms the others' over 1-3% both on sensitivity and specificity and has superior robustness on different devices.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129979998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}