This paper describes the key features of Lake Devo, a Web-based role-play environment, that allow blind students using a screen reader to fully participate in creating and presenting online role-play scenarios. Three key features are reviewed — the Character Editor, the Script Editor, and the Movie Player — along with how WAI-ARIA has been used to make each of them accessible.
{"title":"Lake Devo: accessible online role-play","authors":"Greg Gay, M. Glynn, Naza Djafarova","doi":"10.1145/2899475.2899511","DOIUrl":"https://doi.org/10.1145/2899475.2899511","url":null,"abstract":"The key features in the Lake Devo Web-based role-play environment that allow blind students, using a screen reader, to fully participate in creating and presenting online role-play scenarios are described. Three key features are reviewed including: the Character Editor, the Script Editor, and the Movie Player, discussing as well how WAI-ARIA has been used to make them accessible.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115530176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In postsecondary education, technology and online resources have become a pervasive component of learning, but they are not always accessible. For students with intellectual disabilities, completing technology-dependent tasks may pose unique challenges that are not always addressed by the disability support services offered at the university level. During our fieldwork, we have observed several barriers to online education tools in a postsecondary environment for students with intellectual disabilities. For example, a student with an intellectual disability submitting an assignment via email to an instructor may encounter difficulties recalling and navigating to the location of their attachment file. In this paper, we describe core skills and common interfaces that we have identified as problematic for this population through an emic ethnography. We offer emic (perceptions from within a given environment) experience accounts to highlight the obstacles we have observed in a) information retrieval, b) navigation and information architecture, c) file management, and d) password management. As researchers and educators involved in a postsecondary program for young adults with intellectual disability (ID), we have spent considerable time working with this population. For each scenario, we offer examples from our own experience of the techniques and technologies that did or did not help students accomplish these tasks. Based on these experiences, we provide recommendations for mitigating these barriers, including education and training for students and developers, and the use of existing interventions and tools. We also discuss future directions for this work. We believe that heightened awareness and communication between educators, designers, and students with disabilities will help address these problems and generate solutions which provide more accessible education experiences for learners with diverse needs.
{"title":"Accessibility barriers to online education for young adults with intellectual disabilities","authors":"Erin Buehler, William Easley, Amy Poole, A. Hurst","doi":"10.1145/2899475.2899481","DOIUrl":"https://doi.org/10.1145/2899475.2899481","url":null,"abstract":"In postsecondary education, technology and online resources have become a pervasive component of learning, but they are not always accessible. For students with intellectual disabilities, completing technology-dependent tasks may pose unique challenges that are not always addressed by the disability support services offered at the university level. During our fieldwork, we have observed several barriers to online education tools in a postsecondary environment for students with intellectual disabilities. For example, a student with an intellectual disability submitting an assignment via email to an instructor may encounter difficulties recalling and navigating to the location of their attachment file. In this paper, we describe core skills and common interfaces that we have identified as problematic for this population through an emic ethnography. We offer emic (perceptions from within a given environment) experience accounts to highlight the obstacles we have observed in a) information retrieval, b) navigation and information architecture c) file management, and d) password management. As researchers and educators involved in a postsecondary program for young adults with intellectual disability (ID), we have spent considerable time working with this population. For each scenario, we offer examples from our own experience of the techniques and technologies that did or did not help students accomplish these tasks. Based on these experiences, we provide recommendations for mitigating these barriers including education and training for students and developers and the use of existing interventions and tools. We also discuss future directions for this work. 
We believe that heightened awareness and communication between educators, designers, and students with disabilities will help address these problems and generate solutions which provide more accessible education experiences for learners with diverse needs.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"356 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123000283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of this research is to show that a playful approach combined with music can detect dyslexia in children. Early detection would spare children from struggling in school until poor grades finally prompt a diagnosis. Our envisioned web application could benefit the roughly 10% of the population with dyslexia by giving them an earlier chance to find their strengths and succeed.
{"title":"DysMusic: detecting dyslexia by web-based games with music elements: doctoral consortium","authors":"M. Rauschenberger","doi":"10.1145/2899475.2899503","DOIUrl":"https://doi.org/10.1145/2899475.2899503","url":null,"abstract":"The aim of this research is to show that a playful approach combined with music can detect children with dyslexia. Early detection will prevent children from suffering in school until they are detected due to bad grades. Our envisioned web application will contribute to 10% of the population by giving them a chance to succeed in life and find their skills to impress the world.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125666930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Y. Borodin, Yury Puzis, Andrii Sovyak, V. Ashok, Andrii Melnyk, I. Ramakrishnan
In this paper, we present Capti ESL Assistant, a novel universally accessible web application that facilitates acquisition of English by helping language learners develop their reading and listening skills simultaneously. It features built-in translation and the Word Challenge game, which enables users to learn the language in context. It is also accessible to users with print disabilities.
{"title":"Contextual language learning with Capti ESL Assistant","authors":"Y. Borodin, Yury Puzis, Andrii Sovyak, V. Ashok, Andrii Melnyk, I. Ramakrishnan","doi":"10.1145/2899475.2899508","DOIUrl":"https://doi.org/10.1145/2899475.2899508","url":null,"abstract":"In this paper, we present Capti ESL Assistant, a novel universally accessible web application that facilitates acquisition of English by helping language learners develop their reading and listening skills simultaneously. It features built-in translation and the Word Challenge game that enables users to learn the language in the context. It is also accessible to users with print disabilities.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131748197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sandra Sanchez-Gordon, Juan Estevez, S. Luján-Mora
In this paper, we present a set of twenty features for managing accessible images when generating educational content. With this set of features, we assessed a sample of eight e-Learning platforms and found that, regarding image accessibility, Moodle and Sakai are the most accessible e-Learning platforms, whereas edX and Udemy are the least accessible. We also present an HTML visual text editor for accessible images designed to be a base component across e-Learning platforms. As future work, we plan to propose additional sets of accessibility features for other formats, e.g., text, audio, and video.
{"title":"Editor for accessible images in e-Learning platforms","authors":"Sandra Sanchez-Gordon, Juan Estevez, S. Luján-Mora","doi":"10.1145/2899475.2899513","DOIUrl":"https://doi.org/10.1145/2899475.2899513","url":null,"abstract":"In this paper, we present a set of twenty features for managing accessible images when generating educational content. With this set of features, we assessed a sample of eight e-Learning platforms and found that, regarding image accessibility, Moodle and Sakai are the most accessible e-Learning platforms, whereas edX and Udemy are the least accessible. We also present an HTML visual text editor for accessible images designed to be a base component across e-Learning platforms. As future work, we plan to propose additional sets of accessibility features regarding other formats, e.g. text, audio, video.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124560173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe the development of an HTML5 video player created for web developers to implement. This player removes a barrier to media for people with disabilities. We outline the reasoning behind its design features.
{"title":"HTML5 accessible video player: how and why","authors":"C. Earl, E. Neal","doi":"10.1145/2899475.2899499","DOIUrl":"https://doi.org/10.1145/2899475.2899499","url":null,"abstract":"In this paper, we describe the development of an HTML5 video player, created for implementation by web developers. This player removes a barrier to media for people with disabilities. The reasoning behind the applied design features is outlined.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"28 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129404364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research towards my dissertation has involved a series of perceptual and accessibility-focused studies concerned with the use of tactile cues for spatial and situational awareness, displayed through head-mounted wearables. These studies were informed by an initial participatory design study of mobile technology multitasking and tactile interaction habits. This research has yielded a number of actionable conclusions regarding the development of tactile interfaces for the head, and endeavors to provide greater insight into the design of advanced tactile alerting for contextual and spatial understanding in assistive applications (e.g. for individuals who are blind or those encountering situational impairments), as well as guidance for developers regarding assessment of interaction between under-utilized sensory modalities and underlying perceptual and cognitive processes.
{"title":"Developing a wearable tactile prototype to support situational awareness","authors":"Flynn Wolf","doi":"10.1145/2899475.2899505","DOIUrl":"https://doi.org/10.1145/2899475.2899505","url":null,"abstract":"Research towards my dissertation has involved a series of perceptual and accessibility-focused studies concerned with the use of tactile cues for spatial and situational awareness, displayed through head-mounted wearables. These studies were informed by an initial participatory design study of mobile technology multitasking and tactile interaction habits. This research has yielded a number of actionable conclusions regarding the development of tactile interfaces for the head, and endeavors to provide greater insight into the design of advanced tactile alerting for contextual and spatial understanding in assistive applications (e.g. for individuals who are blind or those encountering situational impairments), as well as guidance for developers regarding assessment of interaction between under-utilized sensory modalities and underlying perceptual and cognitive processes.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129859063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yashesh Gaur, Walter S. Lasecki, Florian Metze, Jeffrey P. Bigham
Transcription makes speech accessible to deaf and hard of hearing people. This conversion of speech to text is still done manually by humans, despite high cost, because the quality of automated speech recognition (ASR) is still too low in real-world settings. Manual conversion can require more than 5 times the original audio time, which also introduces significant latency. Giving transcriptionists ASR output as a starting point seems like a reasonable approach to making humans more efficient and thereby reducing this cost, but the effectiveness of this approach is clearly related to the quality of the speech recognition output. At high error rates, fixing inaccurate speech recognition output may take longer than producing the transcription from scratch, and transcriptionists may not realize when transcription output is too inaccurate to be useful. In this paper, we empirically explore how the latency of transcriptions created by participants recruited on Amazon Mechanical Turk varies based on the accuracy of speech recognition output. We present results from two studies which indicate that starting with the ASR output is worse unless it is sufficiently accurate (Word Error Rate of under 30%).
{"title":"The effects of automatic speech recognition quality on human transcription latency","authors":"Yashesh Gaur, Walter S. Lasecki, Florian Metze, Jeffrey P. Bigham","doi":"10.1145/2899475.2899478","DOIUrl":"https://doi.org/10.1145/2899475.2899478","url":null,"abstract":"Transcription makes speech accessible to deaf and hard of hearing people. This conversion of speech to text is still done manually by humans, despite high cost, because the quality of automated speech recognition (ASR) is still too low in real-world settings. Manual conversion can require more than 5 times the original audio time, which also introduces significant latency. Giving transcriptionists ASR output as a starting point seems like a reasonable approach to making humans more efficient and thereby reducing this cost, but the effectiveness of this approach is clearly related to the quality of the speech recognition output. At high error rates, fixing inaccurate speech recognition output may take longer than producing the transcription from scratch, and transcriptionists may not realize when transcription output is too inaccurate to be useful. In this paper, we empirically explore how the latency of transcriptions created by participants recruited on Amazon Mechanical Turk vary based on the accuracy of speech recognition output. 
We present results from 2 studies which indicate that starting with the ASR output is worse unless it is sufficiently accurate (Word Error Rate of under 30%).","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127032536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
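The 30% Word Error Rate threshold can be made concrete. WER is the word-level edit distance between a reference transcript and the ASR hypothesis, divided by the reference length. A minimal Python sketch — an illustration, not the authors' code — computes WER and applies the paper's finding as a seeding heuristic:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

def should_seed_with_asr(reference_sample: str, asr_sample: str,
                         threshold: float = 0.30) -> bool:
    """Heuristic from the paper's result: only give transcriptionists the
    ASR draft when its estimated WER (on a sample with a known reference)
    is below the threshold; above it, typing from scratch tends to be faster."""
    return wer(reference_sample, asr_sample) < threshold
```

In practice the true reference is unknown at transcription time, so the WER estimate would come from a held-out calibration sample or an ASR confidence score; that estimation step is outside this sketch.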
Julia Müller, Alan Davies, S. Harper, C. Jay, C. Todd
Having lung cancer is associated with accessibility issues because people afflicted with lung cancer tend to be older and less familiar with technology, and have low education levels and low health literacy. Fear, embarrassment and stigmatization also play a role. This makes it difficult for people to access the information they need to understand and manage their illness, particularly in the time before the diagnosis. We can mitigate these disadvantages and bridge the accessibility gap by ensuring people at risk for lung cancer are informed about symptoms and when to seek medical advice. The Web is uniquely placed to fulfill this role. We therefore developed an online lung cancer symptom appraisal tool tailored to people with low education levels and health literacy, and based on psychological theory to target barriers like fear and embarrassment. At present we are conducting a feasibility study to assess whether it is possible to reach the high-risk population and encourage early help-seeking. So far, 97 users have participated, 97.9% of whom report symptoms and risk factors that indicate they should seek medical help. 34% report education levels below a school-leaving qualification. Our tool led to a significantly higher intention to seek medical help than the same information without theory-based components (p = 0.01). Our initial analyses suggest this is a suitable approach to widening health education to excluded groups.
{"title":"Widening access to online health education for lung cancer: a feasibility study","authors":"Julia Müller, Alan Davies, S. Harper, C. Jay, C. Todd","doi":"10.1145/2899475.2899495","DOIUrl":"https://doi.org/10.1145/2899475.2899495","url":null,"abstract":"Having lung cancer is associated with accessibility issues because people afflicted with lung cancer tend to be older and less familiar with technology, and have low education levels and low health literacy. Fear, embarrassment and stigmatization also play a role. This makes it difficult for people to access the information they need to understand and manage their illness, particularly in the time before the diagnosis. We can mitigate these disadvantages and bridge the accessibility gap by ensuring people at risk for lung cancer are informed about symptoms and when to seek medical advice. The Web is uniquely placed to fulfill this role. We therefore developed an online lung cancer symptom appraisal tool tailored towards people with low education levels and health literacy and based on psychological theory to target barriers like fear and embarrassment. At present we are conducting a feasibility study to assess whether it is possible to reach the high risk population and encourage early help-seeking. So far, 97 users have participated, 97.9% of which report symptoms and risk factors that indicate they should seek medical help. 34% report education levels below school leaving qualification. Our tool led to a significantly higher intention to seek medical help than the same information without theory-based components (p = 0.01). 
Our initial analyses suggest this is a suitable approach to widening health education to excluded groups.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122335413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Luz Rello, Kristin Williams, Abdullah Ali, N. C. White, Jeffrey P. Bigham
At least 10% of the global population has dyslexia. In the United States and Spain, dyslexia is associated with a large percentage of school dropout. Current methods to detect risk of dyslexia are language specific, expensive, or do not scale well because they require a professional or extensive equipment. A central challenge to detecting dyslexia is handling its differing manifestations across languages. To address this, we designed a browser-based game, Dytective, to detect risk of dyslexia across the English and Spanish languages. Dytective consists of linguistic tasks informed by analysis of common errors made by persons with dyslexia. To evaluate Dytective, we conducted a user study with 60 English- and Spanish-speaking children between 7 and 12 years old. We found children with and without dyslexia differed significantly in their performance on the game. Our results suggest that Dytective is able to differentiate school-age children with and without dyslexia among both English and Spanish speakers.
{"title":"Dytective: towards detecting dyslexia across languages using an online game","authors":"Luz Rello, Kristin Williams, Abdullah Ali, N. C. White, Jeffrey P. Bigham","doi":"10.1145/2899475.2899491","DOIUrl":"https://doi.org/10.1145/2899475.2899491","url":null,"abstract":"At least 10% of the global population has dyslexia. In the United States and Spain, dyslexia is associated with a large percentage of school drop out. Current methods to detect risk of dyslexia are language specific, expensive, or do not scale well because they require a professional or extensive equipment. A central challenge to detecting dyslexia is handling its differing manifestations across languages. To address this, we designed a browser-based game, Dytective, to detect risk of dyslexia across the English and Spanish languages. Dytective consists of linguistic tasks informed by analysis of common errors made by persons with dyslexia. To evaluate Dytective, we conducted a user study with 60 English and Spanish speaking children between 7 and 12 years old. We found children with and without dyslexia differed significantly in their performance on the game. Our results suggest that Dytective is able to differentiate school age children with and without dyslexia in both English and Spanish speakers.","PeriodicalId":337838,"journal":{"name":"Proceedings of the 13th Web for All Conference","volume":"114 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113994133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}