Nithin Santhanam, Shari Trewin, C. Swart, P. Santhanam
This study focuses on the use of web accessibility software by people with cerebral palsy performing three typical user tasks. We evaluate the customization options in the IBM accessibilityWorks add-on to the Mozilla Firefox browser, as used by ten users. While specific features provide significant benefit, we find that users tend to pick unnecessary options, resulting in a potentially negative user experience.
{"title":"Self-selection of accessibility options","authors":"Nithin Santhanam, Shari Trewin, C. Swart, P. Santhanam","doi":"10.1145/2049536.2049605","DOIUrl":"https://doi.org/10.1145/2049536.2049605","url":null,"abstract":"This study focuses on the use of web accessibility software by people with cerebral palsy performing three typical user tasks. We evaluate the customization options in the IBM accessibility Works add-on to the Mozilla Firefox browser, as used by ten users. While specific features provide significant benefit, we find that users tend to pick unnecessary options, resulting in a potentially negative user experience.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123073454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Highly textual websites present barriers to Deaf people who primarily use American Sign Language (ASL) for communication. Deaf people have been posting ASL content in the form of vlogs to YouTube and to specialized websites such as Deafvideo.TV. This paper presents some of the first insights into the use of vlogging technology and techniques within the Deaf community. The findings suggest that there are differences between YouTube and Deafvideo.TV that stem from the differences between mainstream and specialized sites. Vlogging technology appears to encourage signing styles that are not found in face-to-face communication, or that are used differently there. Examples include vloggers altering their signing space to convey different meanings on screen.
{"title":"The vlogging phenomena: a deaf perspective","authors":"Ellen S. Hibbard, D. Fels","doi":"10.1145/2049536.2049549","DOIUrl":"https://doi.org/10.1145/2049536.2049549","url":null,"abstract":"Highly textual websites present barriers to Deaf people, primarily using American Sign Language for communication. Deaf people have been posting ASL content in form of vlogs to YouTube and specialized websites such as Deafvideo.TV. This paper presents some of the first insights into the use of vlogging technology and techniques among the Deaf community. The findings suggest that there are differences between YouTube and Deafvideo.TV due to differences between mainstream and specialized sites. Vlogging technology seems to influence use of styles that are not found or are used differently in face-to-face communications. Examples include the alteration of vloggers' signing space to convey different meanings on screen.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131729592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Davide Barberis, Nicola Garazzino, P. Prinetto, G. Tiotto
This paper presents the ATLAS Editor for Assisted Translation (ALEAT), a novel tool for Computer Assisted Translation (CAT) from written Italian to Italian Sign Language (LIS) for Deaf people. The tool is a web application developed within the ATLAS project, which targets automatic translation from written Italian to Italian Sign Language in the weather-forecast domain. ALEAT takes as input a text written according to Italian grammar, performs the automatic translation of the sentence, and presents the result to the user by visualizing it through a virtual character. Since the automatic translation is error-prone, ALEAT allows the user to intervene and correct it. The translation is stored in a database using a novel formalism, the ATLAS Written Extended LIS (AEWLIS), which allows the translation to be played back through the ATLAS visualization module and loaded into ALEAT for later modification and improvement.
{"title":"Improving accessibility for deaf people: an editor for computer assisted translation through virtual avatars.","authors":"Davide Barberis, Nicola Garazzino, P. Prinetto, G. Tiotto","doi":"10.1145/2049536.2049593","DOIUrl":"https://doi.org/10.1145/2049536.2049593","url":null,"abstract":"This paper presents the ATLAS Editor for Assisted Translation (ALEAT), a novel tool for the Computer Assisted Translation (CAT) from Italian written language to Italian Sign Language (LIS) of Deaf People. The tool is a web application that has been developed within the ATLAS project, that targets the automatic translation from Italian written language to Italian Sign Language in the weather forecasts domain. ALEAT takes a text as input, written according to the Italian Language grammar, performs the automatic translation of the sentence and gives the result of the translation to the user by visualizing it through a virtual character. Since the automatic translation is error-prone, ALEAT allows to correct it with the intervention of the user. The translation is stored in a database resorting to a novel formalism: the ATLAS Written Extended LIS (AEWLIS). AEWLIS allows to play the translation through the ATLAS visualization module and to load it within ALEAT for successive modifications and improvement.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133914699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microsoft's Kinect 3D motion sensor is a low-cost 3D camera that provides color and depth information for indoor environments. In this demonstration, this entertainment-oriented camera, paired with an iPad's tangible interface, is put to work for the benefit of visually impaired users. A computer-vision framework for real-time object localization and audio description is introduced. First, objects are extracted from the scene and recognized using feature descriptors and machine learning. Second, the recognized objects are labeled with instrument sounds, and their positions in 3D space are conveyed by virtual spatialized sound sources. As a result, the scene can be heard and explored by finger-triggering the sounds on the iPad, onto which a top view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here brings the promise of efficient assistance and could be adapted as an electronic travel aid for visually impaired people in the near future.
{"title":"Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired","authors":"J. D. Gomez, Sinan Mohammed, G. Bologna, T. Pun","doi":"10.1145/2049536.2049613","DOIUrl":"https://doi.org/10.1145/2049536.2049613","url":null,"abstract":"Microsoft's Kinect 3-D motion sensor is a low cost 3D camera that provides color and depth information of indoor environments. In this demonstration, the functionality of this fun-only camera accompanied by an iPad's tangible interface is targeted to the benefit of the visually impaired. A computer-vision-based framework for real time objects localization and for their audio description is introduced. Firstly, objects are extracted from the scene and recognized using feature descriptors and machine-learning. Secondly, the recognized objects are labeled by instruments sounds, whereas their position in 3D space is described by virtual space sources of sound. As a result, the scene can be heard and explored while finger-triggering the sounds within the iPad, on which a top-view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here brings the promise of efficient assistance and could be adapted as an electronic travel aid for the visually-impaired in the near future.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125132518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assistive Technologies empower individuals to accomplish tasks they might not be able to do otherwise. Unfortunately, a large percentage of Assistive Technology devices that are purchased (35% or more) end up unused or abandoned [7,10], leaving many people with Assistive Technology that is inappropriate for their needs. Low acceptance rates of Assistive Technology occur for many reasons, but common factors include 1) failure to consider user opinion in selection, 2) ease of obtaining devices, 3) poor device performance, and 4) changes in user needs and priorities [7]. We are working to help more people gain access to the Assistive Technology they need by empowering non-engineers to "Do-It-Yourself" (DIY) and create, modify, or build their own assistive technology. This paper illustrates that it is possible to custom-build Assistive Technology, and argues why empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals. We synthesize our findings into design recommendations to help promote future DIY-AT success.
{"title":"Empowering individuals with do-it-yourself assistive technology","authors":"A. Hurst, J. Tobias","doi":"10.1145/2049536.2049541","DOIUrl":"https://doi.org/10.1145/2049536.2049541","url":null,"abstract":"Assistive Technologies empower individuals to accomplish tasks they might not be able to do otherwise. Unfortunately, a large percentage of Assistive Technology devices that are purchased (35% or more) end up unused or abandoned [7,10], leaving many people with Assistive Technology that is inappropriate for their needs. Low acceptance rates of Assistive Technology occur for many reasons, but common factors include 1) lack of considering user opinion in selection, 2) ease in obtaining devices, 3) poor device performance, and 4) changes in user needs and priorities [7]. We are working to help more people gain access to the Assistive Technology they need by empowering non-engineers to \"Do-It-Yourself\" (DIY) and create, modify, or build. This paper illustrates that it is possible to custom-build Assistive Technology, and argues why empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before, or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals. We synthesize our findings into design recommendations to help promote future DIY-AT success.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121929089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ondrej Polácek, Z. Míkovec, Adam J. Sporka, P. Slavík
This paper presents Humsher, a novel text entry method operated by non-verbal vocal input, specifically the sound of humming. The method uses an adaptive language model for text prediction. Four different user interfaces are presented and compared. Three of them use a dynamic layout in which n-grams of characters are presented for the user to choose from according to their probability in the given context. The fourth interface uses a static layout, in which the characters are displayed alphabetically and a modified binary search algorithm is used to select a character efficiently. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also performed to validate the potential of the method for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14 to 22 characters per minute after seven sessions.
{"title":"Humsher: a predictive keyboard operated by humming","authors":"Ondrej Polácek, Z. Míkovec, Adam J. Sporka, P. Slavík","doi":"10.1145/2049536.2049552","DOIUrl":"https://doi.org/10.1145/2049536.2049552","url":null,"abstract":"This paper presents Humsher -- a novel text entry method operated by the non-verbal vocal input, specifically the sound of humming. The method utilizes an adaptive language model for text prediction. Four different user interfaces are presented and compared. Three of them use dynamic layout in which n-grams of characters are presented to the user to choose from according to their probability in given context. The last interface utilizes static layout, in which the characters are displayed alphabetically and a modified binary search algorithm is used for an efficient selection of a character. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also performed in order to validate the potential of the method for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14 -- 22 characters per minute after seven sessions.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127444059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IVAT (in-vehicle assistive technology) is an in-dash interface born of a collaborative effort between the Shepherd Center assistive technology team, the Georgia Tech Sonification Laboratory, and Centrafuse. The aim of this technology is to increase driver safety by taking individual cognitive abilities and limitations into account. While the potential applications of IVAT are widespread, the initial population of interest for the current research is survivors of traumatic brain injury (TBI). TBI can cause a variety of impairments that limit driving ability. IVAT aims to enable individuals to overcome these limitations and regain some independence by driving after injury.
{"title":"In-vehicle assistive technology (IVAT) for drivers who have survived a traumatic brain injury","authors":"Julia DeBlasio Olsheski, B. Walker, Jeff McCloud","doi":"10.1145/2049536.2049595","DOIUrl":"https://doi.org/10.1145/2049536.2049595","url":null,"abstract":"IVAT (in-vehicle assistive technology) is an in-dash interface borne out from a collaborative effort between the Shepherd Center assistive technology team, the Georgia Tech Sonification Laboratory, and Centrafuse. The aim of this technology is to increase driver safety by taking individual cognitive abilities and limitations into account. While the potential applications of IVAT are widespread, the initial population of interest for the current research is survivors of a traumatic brain injury (TBI). TBI can cause a variety of impairments that limit driving ability. IVAT is aimed at enabling the individual to overcome these limitations in order to regain some independence by driving after injury.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127470899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Pereira, Benjamim Fonseca, H. Paredes, Miriam Cabo
In this demo, we present a mobile phone application that allows communication between deaf people and emergency medical services using an iconographic touch interface. The application can be especially useful for deaf people, but also for people without disabilities who face sudden situations in which speech is hard to articulate.
{"title":"Exploring iconographic interface in emergency for deaf","authors":"T. Pereira, Benjamim Fonseca, H. Paredes, Miriam Cabo","doi":"10.1145/2049536.2049589","DOIUrl":"https://doi.org/10.1145/2049536.2049589","url":null,"abstract":"In this demo, we present an application for mobile phones, which can allow communication between deaf and emergency medical services using an iconographic touch interface. This application can be useful especially for deaf but also for persons without disabilities that face sudden situations where speech is hard to articulate.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126900033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many new technologies have been developed to assist people who are visually impaired in learning about their environment, but there is little understanding of their motivations for using these tools. Our tool, VizWiz, allows users to take a picture with their mobile phone, ask a question about the picture's contents, and receive an answer in near real time. This study investigates patterns in the questions that visually impaired users ask about their surroundings, and presents the benefits and limitations of responses from both human and computerized sources.
{"title":"Analyzing visual questions from visually impaired users","authors":"Erin L. Brady","doi":"10.1145/2049536.2049622","DOIUrl":"https://doi.org/10.1145/2049536.2049622","url":null,"abstract":"Many new technologies have been developed to assist people who are visually impaired in learning about their environment, but there is little understanding of their motivations for using these tools. Our tool VizWiz allows users to take a picture using their mobile phone, ask a question about the picture's contents, and receive an answer in nearly realtime. This study investigates patterns in the questions that visually impaired users ask about their surroundings, and presents the benefits and limitations of responses from both human and computerized sources.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116374321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I developed a mobile system that crowdsources landmarks around bus stops for blind transit riders. The main focus of my research is to develop a method for providing reliable and accurate information about landmarks around bus stops to blind transit riders. In addition, my research examines how access to such information affects their use of public transportation.
{"title":"StopFinder: improving the experience of blind public transit riders with crowdsourcing","authors":"Sanjana Prasain","doi":"10.1145/2049536.2049629","DOIUrl":"https://doi.org/10.1145/2049536.2049629","url":null,"abstract":"I developed a system for mobile devices for crowdsourcing landmarks around bus stops for blind transit riders. The main focus of my research is to develop a method to provide reliable and accurate information about landmarks around bus stops to blind transit riders. In addition to that, my research focuses on understanding how access to such information affects their use of public transportation.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115542496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}