The vlogging phenomena: a deaf perspective
Ellen S. Hibbard, D. Fels
DOI: 10.1145/2049536.2049549
Highly textual websites present barriers to Deaf people, who primarily use American Sign Language (ASL) for communication. Deaf people have been posting ASL content in the form of vlogs to YouTube and to specialized websites such as Deafvideo.TV. This paper presents some of the first insights into the use of vlogging technology and techniques within the Deaf community. The findings suggest that vlogging practices differ between YouTube and Deafvideo.TV, reflecting the differences between mainstream and specialized sites. Vlogging technology appears to encourage signing styles that are not found, or are used differently, in face-to-face communication; for example, vloggers alter their signing space to convey different meanings on screen.
{"title":"The vlogging phenomena: a deaf perspective","authors":"Ellen S. Hibbard, D. Fels","doi":"10.1145/2049536.2049549","DOIUrl":"https://doi.org/10.1145/2049536.2049549","url":null,"abstract":"Highly textual websites present barriers to Deaf people, primarily using American Sign Language for communication. Deaf people have been posting ASL content in form of vlogs to YouTube and specialized websites such as Deafvideo.TV. This paper presents some of the first insights into the use of vlogging technology and techniques among the Deaf community. The findings suggest that there are differences between YouTube and Deafvideo.TV due to differences between mainstream and specialized sites. Vlogging technology seems to influence use of styles that are not found or are used differently in face-to-face communications. Examples include the alteration of vloggers' signing space to convey different meanings on screen.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131729592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving accessibility for deaf people: an editor for computer assisted translation through virtual avatars
Davide Barberis, Nicola Garazzino, P. Prinetto, G. Tiotto
DOI: 10.1145/2049536.2049593
This paper presents the ATLAS Editor for Assisted Translation (ALEAT), a novel tool for Computer Assisted Translation (CAT) from written Italian to Italian Sign Language (LIS) for Deaf people. The tool is a web application developed within the ATLAS project, which targets the automatic translation of written Italian into Italian Sign Language in the weather-forecast domain. ALEAT takes as input a text written according to Italian grammar, translates the sentence automatically, and presents the result to the user through a virtual character. Since the automatic translation is error-prone, ALEAT lets the user correct it. The translation is stored in a database using a novel formalism, the ATLAS Written Extended LIS (AEWLIS), which allows the translation to be played through the ATLAS visualization module and loaded back into ALEAT for subsequent modification and improvement.
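The abstract outlines a human-in-the-loop pipeline (automatic translation, user correction, AEWLIS storage, avatar playback) without implementation detail. The Python sketch below illustrates only that shape: SignGloss, AewlisRecord, and translate_to_lis are hypothetical names, and the placeholder "translation" bears no relation to the ATLAS system's actual linguistic processing.

```python
# Illustrative sketch of an ALEAT-style assisted-translation workflow.
# All class and function names here are invented stand-ins, not ATLAS APIs.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SignGloss:
    """One sign in the target LIS sequence, kept as an editable unit."""
    lemma: str               # gloss label, e.g. "PIOGGIA" (rain)
    corrected: bool = False  # set when a human editor overrides machine output


@dataclass
class AewlisRecord:
    """A stored translation, playable by an avatar and reloadable for editing."""
    source_text: str
    glosses: List[SignGloss] = field(default_factory=list)


def translate_to_lis(italian_text: str) -> AewlisRecord:
    """Stand-in for the error-prone automatic translation step."""
    # A real system would parse the Italian sentence and map it to LIS
    # grammar; here we simply produce one placeholder gloss per word.
    glosses = [SignGloss(lemma=word.upper()) for word in italian_text.split()]
    return AewlisRecord(source_text=italian_text, glosses=glosses)


def apply_user_correction(record: AewlisRecord, index: int, lemma: str) -> None:
    """Human-in-the-loop fix, mirroring ALEAT's editor step."""
    record.glosses[index] = SignGloss(lemma=lemma, corrected=True)


record = translate_to_lis("domani pioggia sul nord Italia")
apply_user_correction(record, 1, "PIOVERE")  # editor replaces a wrong gloss
print([g.lemma for g in record.glosses])
```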
{"title":"Improving accessibility for deaf people: an editor for computer assisted translation through virtual avatars.","authors":"Davide Barberis, Nicola Garazzino, P. Prinetto, G. Tiotto","doi":"10.1145/2049536.2049593","DOIUrl":"https://doi.org/10.1145/2049536.2049593","url":null,"abstract":"This paper presents the ATLAS Editor for Assisted Translation (ALEAT), a novel tool for the Computer Assisted Translation (CAT) from Italian written language to Italian Sign Language (LIS) of Deaf People. The tool is a web application that has been developed within the ATLAS project, that targets the automatic translation from Italian written language to Italian Sign Language in the weather forecasts domain. ALEAT takes a text as input, written according to the Italian Language grammar, performs the automatic translation of the sentence and gives the result of the translation to the user by visualizing it through a virtual character. Since the automatic translation is error-prone, ALEAT allows to correct it with the intervention of the user. The translation is stored in a database resorting to a novel formalism: the ATLAS Written Extended LIS (AEWLIS). AEWLIS allows to play the translation through the ATLAS visualization module and to load it within ALEAT for successive modifications and improvement.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133914699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing accessible TV applications
José Coelho, Carlos M. Duarte, P. Biswas, P. Langdon
DOI: 10.1145/2049536.2049561
The development of TV applications today excludes users with certain impairments from interacting with and accessing the same content as other users. Developers are also reluctant to build new or different versions of applications targeting different user characteristics. In this paper we describe a novel adaptive-accessibility approach to developing accessible TV applications without requiring too much additional effort from developers. By integrating multimodal interaction, adaptation techniques, and the use of simulators in the design process, we show how to adapt user interfaces to the individual needs and limitations of elderly users. To do this, we identify the most relevant impairment configurations among users in practical user trials and relate them to user-specific characteristics. We provide guidelines for more accessible, user-centered TV application development.
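As a rough illustration of the adaptation idea, the sketch below maps a user's ability profile to concrete UI settings. The profile fields, thresholds, and rules are invented for illustration; they are not the impairment configurations derived in the paper.

```python
# Minimal sketch of profile-driven UI adaptation. The ability scores and
# adaptation rules are assumptions made for illustration only.

from dataclasses import dataclass


@dataclass
class UserProfile:
    visual_acuity: float    # 0.0 (blind) .. 1.0 (unimpaired)
    hearing: float          # 0.0 (deaf)  .. 1.0 (unimpaired)
    motor_precision: float  # 0.0 (severe tremor) .. 1.0 (precise pointing)


def adapt_tv_ui(p: UserProfile) -> dict:
    """Map one user's abilities to concrete UI settings."""
    settings = {
        "font_scale": 1.0,
        "captions": False,
        "audio_descriptions": False,
        "button_size": "normal",
    }
    if p.visual_acuity < 0.5:
        settings["font_scale"] = 2.0
        settings["audio_descriptions"] = True
    if p.hearing < 0.5:
        settings["captions"] = True
    if p.motor_precision < 0.5:
        settings["button_size"] = "large"  # fewer, bigger targets
    return settings


print(adapt_tv_ui(UserProfile(visual_acuity=0.3, hearing=0.9, motor_precision=0.4)))
```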
{"title":"Developing accessible TV applications","authors":"José Coelho, Carlos M. Duarte, P. Biswas, P. Langdon","doi":"10.1145/2049536.2049561","DOIUrl":"https://doi.org/10.1145/2049536.2049561","url":null,"abstract":"The development of TV applications nowadays excludes users with certain impairments from interacting with and accessing the same type of contents as other users do. Developers are also not interested in developing new or different versions of applications targeting different user characteristics. In this paper we describe a novel adaptive accessibility approach on how to develop accessible TV applications, without requiring too much additional effort from the developers. Integrating multimodal interaction, adaptation techniques and the use of simulators in the design process, we show how to adapt User Interfaces to the individual needs and limitations of elderly users. For this, we rely on the identification of the most relevant impairment configurations among users in practical user-trials, and we draw a relation with user specific characteristics. We provide guidelines for more accessible and centered TV application development.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"04 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129906622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How voice augmentation supports elderly web users
Daisuke Sato, Masatomo Kobayashi, Hironobu Takagi, C. Asakawa, J. Tanaka
DOI: 10.1145/2049536.2049565
Online Web applications have become widespread and have made our daily lives more convenient. However, older adults often find such applications inaccessible because of age-related changes to their physical and cognitive abilities. Two of the reasons that older adults may shy away from the Web are fear of the unknown and fear of the consequences of incorrect actions. To reduce the cognitive load on older adults by providing contextual support, we are extending a voice-based augmentation technique originally developed for blind users. We conducted an experiment to evaluate how voice augmentation can support elderly users of Web applications. Ten older adults participated in our study, and their subjective evaluations showed that the system gave them confidence in completing Web forms. We believe that voice augmentation may help address concerns arising from users' low confidence levels.
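To make "contextual support" concrete, here is a minimal sketch of the underlying pattern, assuming one focus event per form field: when a field gains focus, a short explanation of what it expects, and of the consequences of submitting, is spoken aloud. The field names and help texts are invented, and print stands in for a real text-to-speech engine.

```python
# Sketch of voice augmentation for form filling. FIELD_HELP and the field
# names are illustrative assumptions, not the paper's actual content.

from typing import Callable

FIELD_HELP = {
    "email": "Enter the e-mail address you used when registering. "
             "Nothing is sent until you press the submit button.",
    "card":  "Enter your 16-digit card number. You can review the order "
             "before any payment is made.",
}


def on_focus(field_name: str, speak: Callable[[str], None]) -> None:
    """Voice augmentation hook: called whenever a form field gains focus."""
    help_text = FIELD_HELP.get(field_name, "No extra help for this field.")
    speak(help_text)  # a real system would hand this to a TTS engine


on_focus("card", speak=print)  # print stands in for speech output
```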
{"title":"How voice augmentation supports elderly web users","authors":"Daisuke Sato, Masatomo Kobayashi, Hironobu Takagi, C. Asakawa, J. Tanaka","doi":"10.1145/2049536.2049565","DOIUrl":"https://doi.org/10.1145/2049536.2049565","url":null,"abstract":"Online Web applications have become widespread and have made our daily life more convenient. However, older adults often find such applications inaccessible because of age-related changes to their physical and cognitive abilities. Two of the reasons that older adults may shy away from the Web are fears of the unknown and of the consequences of incorrect actions. We are extending a voice-based augmentation technique originally developed for blind users. We want to reduce the cognitive load on older adults by providing contextual support. An experiment was conducted to evaluate how voice augmentation can support elderly users in using Web applications. Ten older adults participated in our study and their subjective evaluations showed how the system gave them confidence in completing Web forms. We believe that voice augmentation may help address the users' concerns arising from their low confidence levels.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"559 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122486696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empowering individuals with do-it-yourself assistive technology
A. Hurst, J. Tobias
DOI: 10.1145/2049536.2049541
Assistive Technologies empower individuals to accomplish tasks they might not be able to do otherwise. Unfortunately, a large percentage of purchased Assistive Technology devices (35% or more) end up unused or abandoned [7,10], leaving many people with Assistive Technology that is inappropriate for their needs. Low acceptance rates of Assistive Technology have many causes, but common factors include 1) failure to consider user opinion in selection, 2) ease of obtaining devices, 3) poor device performance, and 4) changes in user needs and priorities [7]. We are working to help more people gain access to the Assistive Technology they need by empowering non-engineers to "Do-It-Yourself" (DIY): to create, modify, or build their own devices. This paper illustrates that it is possible to custom-build Assistive Technology, and argues that empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals. We synthesize our findings into design recommendations to help promote future DIY-AT success.
{"title":"Empowering individuals with do-it-yourself assistive technology","authors":"A. Hurst, J. Tobias","doi":"10.1145/2049536.2049541","DOIUrl":"https://doi.org/10.1145/2049536.2049541","url":null,"abstract":"Assistive Technologies empower individuals to accomplish tasks they might not be able to do otherwise. Unfortunately, a large percentage of Assistive Technology devices that are purchased (35% or more) end up unused or abandoned [7,10], leaving many people with Assistive Technology that is inappropriate for their needs. Low acceptance rates of Assistive Technology occur for many reasons, but common factors include 1) lack of considering user opinion in selection, 2) ease in obtaining devices, 3) poor device performance, and 4) changes in user needs and priorities [7]. We are working to help more people gain access to the Assistive Technology they need by empowering non-engineers to \"Do-It-Yourself\" (DIY) and create, modify, or build. This paper illustrates that it is possible to custom-build Assistive Technology, and argues why empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before, or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals. We synthesize our findings into design recommendations to help promote future DIY-AT success.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121929089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio haptic videogaming for navigation skills in learners who are blind
Jaime Sánchez, M. Espinoza
DOI: 10.1145/2049536.2049580
The purpose of this study was to determine whether an audio- and haptic-based videogame affects the development of Orientation and Mobility (O&M) skills in school-age blind learners. The videogame Audio Haptic Maze (AHM) was designed and developed, and its usability and cognitive impact were evaluated to determine its effect on the development of O&M skills. The results show that the videogame's interfaces are usable and appropriately designed, and that the haptic interface is as effective as the audio interface for O&M purposes.
{"title":"Audio haptic videogaming for navigation skills in learners who are blind","authors":"Jaime Sánchez, M. Espinoza","doi":"10.1145/2049536.2049580","DOIUrl":"https://doi.org/10.1145/2049536.2049580","url":null,"abstract":"The purpose of this study was to determine whether the use of audio and a haptic-based videogame has an impact on the development of Orientation and Mobility (O&M) skills in school-age blind learners. The video game Audio Haptic Maze (AHM) was designed, developed and its usability and cognitive impact was evaluated to determine the impact on the development of O&M skills. The results show that the interfaces used in the videogame are usable and appropiately designed, and that the haptic interface is as effective as the audio interface for O&M purposes.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117167883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
StopFinder: improving the experience of blind public transit riders with crowdsourcing
Sanjana Prasain
DOI: 10.1145/2049536.2049629
I developed a mobile system for crowdsourcing landmarks around bus stops for blind transit riders. The main focus of my research is to develop a method for providing reliable and accurate information about landmarks around bus stops to blind transit riders. In addition, my research focuses on understanding how access to such information affects their use of public transportation.
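The abstract does not say how landmark reports are validated, but a common crowdsourcing pattern is to surface a landmark only after several independent contributors agree. Here is a minimal sketch of that pattern, with an assumed agreement threshold and invented data shapes:

```python
# Sketch of trust-by-agreement aggregation for crowdsourced landmarks.
# MIN_REPORTS and the (stop_id, landmark) shape are assumptions.

from collections import Counter

MIN_REPORTS = 3  # require this much agreement before trusting a landmark


def reliable_landmarks(reports):
    """reports: (stop_id, landmark) pairs from independent contributors."""
    counts = Counter(reports)
    trusted = {}
    for (stop_id, landmark), n in counts.items():
        if n >= MIN_REPORTS:
            trusted.setdefault(stop_id, []).append(landmark)
    return trusted


reports = [
    ("stop_42", "bench by the shelter"),
    ("stop_42", "bench by the shelter"),
    ("stop_42", "bench by the shelter"),
    ("stop_42", "trash can"),        # a single report: not surfaced yet
    ("stop_07", "newspaper box"),
]
print(reliable_landmarks(reports))   # {'stop_42': ['bench by the shelter']}
```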
{"title":"StopFinder: improving the experience of blind public transit riders with crowdsourcing","authors":"Sanjana Prasain","doi":"10.1145/2049536.2049629","DOIUrl":"https://doi.org/10.1145/2049536.2049629","url":null,"abstract":"I developed a system for mobile devices for crowdsourcing landmarks around bus stops for blind transit riders. The main focus of my research is to develop a method to provide reliable and accurate information about landmarks around bus stops to blind transit riders. In addition to that, my research focuses on understanding how access to such information affects their use of public transportation.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115542496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Humsher: a predictive keyboard operated by humming
Ondrej Polácek, Z. Míkovec, Adam J. Sporka, P. Slavík
DOI: 10.1145/2049536.2049552
This paper presents Humsher, a novel text entry method operated by non-verbal vocal input, specifically the sound of humming. The method uses an adaptive language model for text prediction. Four user interfaces are presented and compared. Three use a dynamic layout in which n-grams of characters are presented for the user to choose from according to their probability in the given context. The fourth uses a static layout, in which the characters are displayed alphabetically and a modified binary search algorithm supports efficient selection of a character. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also performed to validate the method's potential for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14-22 characters per minute after seven sessions.
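The abstract does not spell out the modified binary search, so the sketch below shows one plausible reading: split the alphabetically ordered candidates where the language model's probability mass balances, rather than at the midpoint, so that likely characters are reached with fewer hums. The probabilities and the yes/no callback are stand-ins for the adaptive n-gram model and the humming input.

```python
# One plausible "modified binary search" over a static alphabetical layout:
# the split point balances probability mass, not character count.

import string


def pick_char(probs, answer_yes):
    """Narrow an alphabetical candidate list to one character via yes/no hums.

    probs:      per-character probabilities for the current context
                (a stand-in for the paper's adaptive language model).
    answer_yes: callback answering "is your character among these?" for the
                left part of the split (e.g. one hum = yes, two hums = no).
    """
    candidates = sorted(probs)
    while len(candidates) > 1:
        total = sum(probs[c] for c in candidates)
        # Cut where the cumulative probability reaches half the mass, so
        # likely characters need fewer confirmations than a midpoint split.
        acc, split = 0.0, len(candidates) - 1
        for i, c in enumerate(candidates[:-1], start=1):
            acc += probs[c]
            if acc >= total / 2:
                split = i
                break
        left = candidates[:split]
        candidates = left if answer_yes(left) else candidates[split:]
    return candidates[0]


# Uniform prior except a strong preference for 'e'; 'e' is found in two steps.
probs = {c: 1.0 for c in string.ascii_lowercase}
probs["e"] = 20.0
print(pick_char(probs, lambda left: "e" in left))
```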
{"title":"Humsher: a predictive keyboard operated by humming","authors":"Ondrej Polácek, Z. Míkovec, Adam J. Sporka, P. Slavík","doi":"10.1145/2049536.2049552","DOIUrl":"https://doi.org/10.1145/2049536.2049552","url":null,"abstract":"This paper presents Humsher -- a novel text entry method operated by the non-verbal vocal input, specifically the sound of humming. The method utilizes an adaptive language model for text prediction. Four different user interfaces are presented and compared. Three of them use dynamic layout in which n-grams of characters are presented to the user to choose from according to their probability in given context. The last interface utilizes static layout, in which the characters are displayed alphabetically and a modified binary search algorithm is used for an efficient selection of a character. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also performed in order to validate the potential of the method for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14 -- 22 characters per minute after seven sessions.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127444059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring iconographic interface in emergency for deaf
T. Pereira, Benjamim Fonseca, H. Paredes, Miriam Cabo
DOI: 10.1145/2049536.2049589
In this demo, we present a mobile phone application that allows communication between deaf people and emergency medical services through an iconographic touch interface. The application can be especially useful for deaf people, but also for people without disabilities who face sudden situations in which speech is hard to articulate.
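One way such an interface can work, sketched under assumptions (the demo's actual icon set and message format are not described here): each icon tap contributes a phrase, and the sequence is rendered as a text message a dispatcher can read.

```python
# Sketch of the icon-to-message idea with an invented icon vocabulary.

ICONS = {
    "person_fallen": "a person has fallen",
    "chest_pain":    "someone has chest pain",
    "fire":          "there is a fire",
    "home":          "at my home address",
    "street":        "on the street",
}


def compose_emergency_message(tapped_icons):
    """Turn a sequence of icon taps into a readable dispatch message."""
    parts = [ICONS[icon] for icon in tapped_icons if icon in ICONS]
    if not parts:
        return "Emergency (no details given)"
    return "Emergency: " + ", ".join(parts)


print(compose_emergency_message(["chest_pain", "home"]))
# -> Emergency: someone has chest pain, at my home address
```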
{"title":"Exploring iconographic interface in emergency for deaf","authors":"T. Pereira, Benjamim Fonseca, H. Paredes, Miriam Cabo","doi":"10.1145/2049536.2049589","DOIUrl":"https://doi.org/10.1145/2049536.2049589","url":null,"abstract":"In this demo, we present an application for mobile phones, which can allow communication between deaf and emergency medical services using an iconographic touch interface. This application can be useful especially for deaf but also for persons without disabilities that face sudden situations where speech is hard to articulate.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126900033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In-vehicle assistive technology (IVAT) for drivers who have survived a traumatic brain injury
Julia DeBlasio Olsheski, B. Walker, Jeff McCloud
DOI: 10.1145/2049536.2049595
IVAT (in-vehicle assistive technology) is an in-dash interface born of a collaborative effort between the Shepherd Center assistive technology team, the Georgia Tech Sonification Laboratory, and Centrafuse. The aim of this technology is to increase driver safety by taking individual cognitive abilities and limitations into account. While the potential applications of IVAT are widespread, the initial population of interest for the current research is survivors of traumatic brain injury (TBI). TBI can cause a variety of impairments that limit driving ability. IVAT aims to enable individuals to overcome these limitations and regain some independence by driving after injury.
{"title":"In-vehicle assistive technology (IVAT) for drivers who have survived a traumatic brain injury","authors":"Julia DeBlasio Olsheski, B. Walker, Jeff McCloud","doi":"10.1145/2049536.2049595","DOIUrl":"https://doi.org/10.1145/2049536.2049595","url":null,"abstract":"IVAT (in-vehicle assistive technology) is an in-dash interface borne out from a collaborative effort between the Shepherd Center assistive technology team, the Georgia Tech Sonification Laboratory, and Centrafuse. The aim of this technology is to increase driver safety by taking individual cognitive abilities and limitations into account. While the potential applications of IVAT are widespread, the initial population of interest for the current research is survivors of a traumatic brain injury (TBI). TBI can cause a variety of impairments that limit driving ability. IVAT is aimed at enabling the individual to overcome these limitations in order to regain some independence by driving after injury.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127470899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}