Using the Musical Multimedia Tool ACMUS with People with Severe Mental Disorders: A Pilot Study
Mikel Ostiz-Blanco, Alfredo Pina Calafi, Miriam Lizaso Azcárate, J. J. Astrain, G. Arrondo
DOI: 10.1145/3234695.3241016
Music therapy could be a valuable resource for enhancing social and cognitive skills in people with mental disorders. The aim of this study was to assess whether using the musical multimedia tool ACMUS with people with severe mental disorders is feasible and potentially beneficial. The study was a prospective pilot trial with 12 patients diagnosed with schizophrenia or related disorders, carried out over nine sessions in small groups. The evaluation tools were the observational COTE (Comprehensive Occupational Therapy Evaluation) scale and a satisfaction questionnaire completed by the participants. COTE scores improved between the first and last sessions, and participants rated the therapy program positively on the satisfaction questionnaire. Programs that use the multimedia tool ACMUS in music therapy sessions for patients with severe mental disorders are feasible and of clinical interest for future research.

Simulation of Motor Impairment in Head-Controlled Pointer Fitts' Law Task
Syed Asad R. Rizvi, Ella Tuson, Breanna Desrochers, John J. Magee
DOI: 10.1145/3234695.3241034
Participants with motor impairments may not always be available for research or software-development testing. To address this, we propose simulating users with motor impairments interacting with a head-controlled mouse pointer system. Simulation can stand in for research participants in preliminary experiments and can raise awareness of ability-based interactions among a wider software-development population. We evaluated our prototype system with a Fitts' law experiment and report its measured communication rate compared with users without motor impairments and with a previously reported participant with motor impairments.

Evaluation of a Sign Language Support System for Viewing Sports Programs
Tsubasa Uchida, H. Sumiyoshi, Taro Miyazaki, Makiko Azuma, Shuichi Umeda, Naoto Katoh, Y. Yamanouchi, N. Hiruma
DOI: 10.1145/3234695.3241002
To provide information support to deaf and hard-of-hearing people watching sports programs, we have developed a sign language support system. The system automatically generates Japanese Sign Language (JSL) computer graphics (CG) animation and subtitles from prepared templates of JSL phrases corresponding to fixed-format game data. To verify the system's performance, we carried out demonstration experiments on content generation and display using real-time match data from actual games. From the results we concluded that the automatically generated JSL CG is practical enough for understanding the information. We also found that, among several display methods, presenting the game video and JSL CG together on a single tablet screen was the most preferred in this small-scale experiment.

GeniAuti
K. Choi, Dongho Jang, Dasol Lee, Seoyoung Park
DOI: 10.1145/3234695.3240987
Caregivers of autistic children are burdened above all by their children's repeated challenging behaviors, which occur every day. It is therefore important for caregivers to track these behaviors, understand the contexts in which they arise, and eventually alleviate them. GeniAuti is a tracking application for caregivers of autistic children that provides 1) timely recording of their children's behaviors, 2) visualization of the data entered by the user, and 3) reference data on other autistic children's behaviors.

BrightLights: Gamifying Data Capture for Situational Visual Impairments
Kerr Macpherson, Garreth W. Tigwell, R. Menzies, David R. Flatla
DOI: 10.1145/3234695.3241030
With the growing popularity of mobile devices, Situational Visual Impairments (SVIs) can cause accessibility challenges. When addressing SVIs, interface and content designers lack guidelines based on empirically determined SVI contrast sensitivities. To address this, we developed BrightLights, a game that collects screen-content-contrast data in the wild to enable new SVI-pertinent contrast-ratio recommendations. In our evaluation with 15 participants, we found significantly worse performance with low screen brightness than with medium or high screen brightness, showing that BrightLights is sensitive to at least one factor that contributes to SVIs (screen brightness). Once validated for in-the-wild deployment, BrightLights data will help designers address SVIs through their designs.

{"title":"Session details: Session 3: Accessing Information","authors":"S. Mascetti","doi":"10.1145/3284377","DOIUrl":"https://doi.org/10.1145/3284377","url":null,"abstract":"","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121399282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What My Eyes Can't See, A Robot Can Show Me: Exploring the Collaboration Between Blind People and Robots
Mayara Bonani, Raquel Oliveira, Filipa Correia, André Rodrigues, Tiago Guerreiro, Ana Paiva
DOI: 10.1145/3234695.3239330
Blind people rely on sighted peers and various assistive technologies to accomplish everyday tasks. In this paper, we explore how assistive robots can go beyond information-giving assistive technologies (e.g., screen readers) by physically collaborating with blind people. We first conducted a set of focus groups to assess how blind people perceive and envision robots. Results showed that, despite some stereotypical concerns, participants can conceive of assistive robots integrated into a broad range of everyday scenarios and welcome this type of technology. In a second study, we asked blind participants to collaborate with two versions of a robot in a Tangram assembly task: one robot provided only static verbal instructions, whereas the other physically collaborated with participants and adjusted its feedback to their performance. Results showed that active collaboration had a major influence on successful task performance. Participants also reported higher perceived warmth, competence, and usefulness when interacting with the physically assistive robot. Overall, we provide preliminary results on the usefulness of assistive robots and the role they can play in fostering greater autonomy for blind people.

Assistive Debugging to Support Accessible Latex Based Document Authoring
A. Manzoor, Murayyiam Parvez, S. Shahid, Asim Karim
DOI: 10.1145/3234695.3241013
This software usability study evaluates our LaTeX-based extension, created to assist blind researchers and writers in authoring both continuous and non-continuous text [2]. The extension includes features such as speech-based error prompts and navigation to the error location, which are expected to improve the LaTeX code-debugging experience and increase writing productivity. In our tests, a majority of both novice and expert LaTeX users preferred MS Word for writing continuous text, while the LaTeX experts preferred our extension for writing mathematical content.

From Behavioral and Communication Intervention to Interaction Design: User Perspectives from Clinicians
Yao Du, Louanne E. Boyd, Seray B. Ibrahim
DOI: 10.1145/3234695.3241480
To improve functional communication and behavioral management, many children with disabilities receive behavioral and communication-related intervention from professionals such as behavior analysts and speech and language therapists. This paper presents user perspectives from three clinicians who have used and/or designed assistive technology with children with disabilities, and calls for researchers to recognize and leverage clinicians' knowledge in designing accessible technology for children with complex sensory and communication needs.

Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations
Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, Matt Huenerfauth
DOI: 10.1145/3234695.3236356
To enable more websites to provide content in the form of sign language, we investigate software to partially automate the synthesis of American Sign Language (ASL) animations from a human-authored message specification. We automatically select where prosodic pauses should be inserted (based on syntax and other features), the duration of these pauses, and the variations in the speed at which individual words are performed (e.g., slower at the ends of phrases). Based on an analysis of a corpus of multi-sentence ASL recordings with motion-capture data, we trained machine-learning models, which were evaluated in a cross-validation study. The best model outperformed a prior state-of-the-art ASL timing model. In a study in which native ASL signers evaluated animations generated either from our new model or from a simple baseline (uniform speed and no pauses), participants preferred the speed and pausing of the ASL animations from our model.
