Using device models for analyzing user interaction problems
Matthias Schulz, Stefan Schmidt, Klaus-Peter Engelbrecht, S. Möller. doi:10.1145/2049536.2049618
This paper presents work in progress aimed at analyzing the origins of the interaction problems that certain users experience when interacting with new technology. Our analysis is based on device models, which categorize classes of devices via a predefined set of features. We provide examples showing that usability problems are partially caused by an erroneous transfer of device features to new or unknown devices.
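To make the device-model idea concrete, here is a minimal Python sketch of how such a model might be represented and queried. The feature names and the conflict rule are illustrative assumptions, not the paper's actual feature set.

```python
# Minimal sketch (illustrative assumptions, not the paper's feature set):
# devices are described by a fixed set of features, and an interaction
# problem is predicted wherever a habit learned on a familiar device
# transfers wrongly to a new one.
familiar_phone = {"confirm_key": "hardware key", "menu": "hierarchical", "touch": False}
new_phone      = {"confirm_key": "on-screen",    "menu": "flat",         "touch": True}

def transfer_conflicts(known: dict, new: dict) -> list:
    """Features whose value on the known device differs on the new one."""
    return [feature for feature in known if known[feature] != new.get(feature)]

print(transfer_conflicts(familiar_phone, new_phone))
# ['confirm_key', 'menu', 'touch'] -- candidate sources of interaction problems
```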
{"title":"Using device models for analyzing user interaction problems","authors":"Matthias Schulz, Stefan Schmidt, Klaus-Peter Engelbrecht, S. Möller","doi":"10.1145/2049536.2049618","DOIUrl":"https://doi.org/10.1145/2049536.2049618","url":null,"abstract":"This paper presents work in progress which aims at analyzing the origins of interaction problems which certain users have when interacting with new technology. Our analysis is based on device models which categorize certain classes of devices via a pre-defined set of features. We provide examples which show that usability problems are partially caused by an erroneous transfer of device features to new/unknown devices.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124500616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired
J. D. Gomez, Sinan Mohammed, G. Bologna, T. Pun. doi:10.1145/2049536.2049613
Microsoft's Kinect 3D motion sensor is a low-cost 3D camera that provides color and depth information of indoor environments. In this demonstration, the functionality of this gaming-oriented camera, combined with an iPad's tangible interface, is turned to the benefit of the visually impaired. A computer-vision-based framework for real-time object localization and audio description is introduced. First, objects are extracted from the scene and recognized using feature descriptors and machine learning. Second, the recognized objects are labeled with instrument sounds, and their positions in 3D space are conveyed by virtual spatialized sound sources. As a result, the scene can be heard and explored by finger-triggering the sounds on the iPad, onto which a top view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here promises efficient assistance and could be adapted as an electronic travel aid for the visually impaired in the near future.
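As an illustration of the sonification step described above, the following Python sketch maps a recognized object's 3D position to constant-power stereo panning and inverse-square distance attenuation. The names (DetectedObject, pan_gains) and the rendering choices are hypothetical; the abstract does not specify the actual audio pipeline.

```python
import math
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str        # e.g. "cup", from the recognition stage
    instrument: str   # instrument sound assigned to this object class
    x: float          # lateral offset in meters (camera frame, + is right)
    z: float          # forward distance in meters

def pan_gains(obj: DetectedObject) -> tuple:
    """Left/right gains for a virtual source at the object's position."""
    azimuth = math.atan2(obj.x, obj.z)         # -pi/2 (left) .. pi/2 (right)
    t = (azimuth + math.pi / 2) / math.pi      # normalize to 0..1
    left, right = math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)
    attenuation = 1.0 / max(obj.z, 0.5) ** 2   # farther objects sound quieter
    return left * attenuation, right * attenuation

cup = DetectedObject("cup", "piano", x=-0.4, z=1.2)
print(pan_gains(cup))   # louder in the left channel: the cup is left of center
```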
{"title":"Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired","authors":"J. D. Gomez, Sinan Mohammed, G. Bologna, T. Pun","doi":"10.1145/2049536.2049613","DOIUrl":"https://doi.org/10.1145/2049536.2049613","url":null,"abstract":"Microsoft's Kinect 3-D motion sensor is a low cost 3D camera that provides color and depth information of indoor environments. In this demonstration, the functionality of this fun-only camera accompanied by an iPad's tangible interface is targeted to the benefit of the visually impaired. A computer-vision-based framework for real time objects localization and for their audio description is introduced. Firstly, objects are extracted from the scene and recognized using feature descriptors and machine-learning. Secondly, the recognized objects are labeled by instruments sounds, whereas their position in 3D space is described by virtual space sources of sound. As a result, the scene can be heard and explored while finger-triggering the sounds within the iPad, on which a top-view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here brings the promise of efficient assistance and could be adapted as an electronic travel aid for the visually-impaired in the near future.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125132518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting spatial awareness and independent wayfinding for pedestrians with visual impairments
Rayoung Yang, Sangmi Park, Sonali R. Mishra, Zhenan Hong, Clint Newsom, Hyeon Joo, Erik C. Hofer, Mark W. Newman. doi:10.1145/2049536.2049544
Much of the information designed to help people navigate the built environment is conveyed through visual channels and is therefore not accessible to people with visual impairments. Due to this limitation, travelers with visual impairments often have difficulty navigating and discovering locations in unfamiliar environments, which reduces their sense of independence when traveling on foot. In this paper, we examine how mobile location-based computing systems can be used to increase the feeling of independence in travelers with visual impairments. A set of formative interviews with people with visual impairments showed that increasing one's general spatial awareness is the key to greater independence. This insight guided the design of Talking Points 3 (TP3), a mobile location-aware system for people with visual impairments that seeks to increase the legibility of the environment for its users in order to facilitate navigating to desired locations, exploration, serendipitous discovery, and improvisation. We conducted studies with eight legally blind participants in three campus buildings to explore how, and to what extent, TP3 promotes spatial awareness for its users. The results show that TP3 not only helped users find destinations in unfamiliar environments, but also allowed them to discover new points of interest, improvise solutions to problems encountered, develop personalized navigation strategies, and, in general, enjoy a greater sense of independence.
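The abstract does not detail TP3's internals. As a hypothetical sketch of the kind of behavior such a location-aware system provides, the following announces points of interest within a given radius of the user's position; the POI list and radius are invented for illustration.

```python
import math

# Hypothetical indoor points of interest: (name, x, y) in meters.
points_of_interest = [
    ("Elevator", 10.0, 4.0),
    ("Room 1220", 12.0, 9.0),
    ("Vending machines", 30.0, 2.0),
]

def announce_nearby(x: float, y: float, radius: float = 8.0) -> list:
    """Return spoken announcements for POIs within `radius` meters of (x, y)."""
    messages = []
    for name, px, py in points_of_interest:
        d = math.hypot(px - x, py - y)
        if d <= radius:
            messages.append(f"{name}: {d:.1f} meters away")
    return messages

for message in announce_nearby(11.0, 5.0):
    print(message)   # "Elevator: 1.4 meters away", "Room 1220: 4.1 meters away"
```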
{"title":"Supporting spatial awareness and independent wayfinding for pedestrians with visual impairments","authors":"Rayoung Yang, Sangmi Park, Sonali R. Mishra, Zhenan Hong, Clint Newsom, Hyeon Joo, Erik C. Hofer, Mark W. Newman","doi":"10.1145/2049536.2049544","DOIUrl":"https://doi.org/10.1145/2049536.2049544","url":null,"abstract":"Much of the information designed to help people navigate the built environment is conveyed through visual channels, which means it is not accessible to people with visual impairments. Due to this limitation, travelers with visual impairments often have difficulty navigating and discovering locations in unfamiliar environments, which reduces their sense of independence with respect to traveling by foot. In this paper, we examine how mobile location-based computing systems can be used to increase the feeling of independence in travelers with visual impairments. A set of formative interviews with people with visual impairments showed that increasing one's general spatial awareness is the key to greater independence. This insight guided the design of Talking Points 3 (TP3), a mobile location-aware system for people with visual impairments that seeks to increase the legibility of the environment for its users in order to facilitate navigating to desired locations, exploration, serendipitous discovery, and improvisation. We conducted studies with eight legally blind participants in three campus buildings in order to explore how and to what extent TP3 helps promote spatial awareness for its users. The results shed light on how TP3 helped users find destinations in unfamiliar environments, but also allowed them to discover new points of interest, improvise solutions to problems encountered, develop personalized strategies for navigating, and, in general, enjoy a greater sense of independence.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133110322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the intelligibility of fast synthesized speech for individuals with early-onset blindness
Amanda Stent, A. Syrdal, Taniya Mishra. doi:10.1145/2049536.2049574
People with visual disabilities increasingly use text-to-speech synthesis as a primary output modality for interaction with computers. Surprisingly, there have been no systematic comparisons of the performance of different text-to-speech systems for this user population. In this paper we report the results of a pilot experiment on the intelligibility of fast synthesized speech for individuals with early-onset blindness. Using an open-response recall task, we collected data on four synthesis systems representing two major approaches to text-to-speech synthesis: formant-based synthesis and concatenative unit selection synthesis. We found a significant effect of speaking rate on intelligibility of synthesized speech, and a trend towards significance for synthesizer type. In post-hoc analyses, we found that participant-related factors, including age and familiarity with a synthesizer and voice, also affect intelligibility of fast synthesized speech.
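The study used an open-response recall task; one simple way such responses are often scored (an illustration only, as the study's exact scoring procedure may differ) is per-word recall of the listener's transcript against the prompt:

```python
# Illustrative scoring for an open-response recall task: the fraction of
# prompt words the listener reproduced. Real studies often use stricter
# alignment-based scoring (e.g. word error rate).
def word_recall(prompt: str, transcript: str) -> float:
    prompt_words = prompt.lower().split()
    heard = set(transcript.lower().split())
    return sum(w in heard for w in prompt_words) / len(prompt_words)

print(word_recall("the train departs from platform nine",
                  "train departs platform nine"))   # 0.666...
```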
{"title":"On the intelligibility of fast synthesized speech for individuals with early-onset blindness","authors":"Amanda Stent, A. Syrdal, Taniya Mishra","doi":"10.1145/2049536.2049574","DOIUrl":"https://doi.org/10.1145/2049536.2049574","url":null,"abstract":"People with visual disabilities increasingly use text-to-speech synthesis as a primary output modality for interaction with computers. Surprisingly, there have been no systematic comparisons of the performance of different text-to-speech systems for this user population. In this paper we report the results of a pilot experiment on the intelligibility of fast synthesized speech for individuals with early-onset blindness. Using an open-response recall task, we collected data on four synthesis systems representing two major approaches to text-to-speech synthesis: formant-based synthesis and concatenative unit selection synthesis. We found a significant effect of speaking rate on intelligibility of synthesized speech, and a trend towards significance for synthesizer type. In post-hoc analyses, we found that participant-related factors, including age and familiarity with a synthesizer and voice, also affect intelligibility of fast synthesized speech.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130186747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing visual questions from visually impaired users
Erin L. Brady. doi:10.1145/2049536.2049622
Many new technologies have been developed to assist people who are visually impaired in learning about their environment, but little is understood about their motivations for using these tools. Our tool VizWiz allows users to take a picture with their mobile phone, ask a question about the picture's contents, and receive an answer in near-realtime. This study investigates patterns in the questions that visually impaired users ask about their surroundings, and presents the benefits and limitations of responses from both human and computerized sources.
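As a sketch of the interaction loop described above (not the actual VizWiz implementation or API), the following fans a photo and question out to both automated and human answer sources and returns whichever answers first:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

# Stand-in answer sources; in a real deployment these would call a
# recognition service and a crowd of human workers, respectively.
def computer_vision_answer(photo: bytes, question: str) -> str:
    time.sleep(0.1)
    return "I see a can with a red label."

def human_answer(photo: bytes, question: str) -> str:
    time.sleep(2.0)    # humans are slower but handle open-ended questions
    return "It's a can of tomato soup."

def ask(photo: bytes, question: str) -> str:
    """Query all sources in parallel; return the first answer to arrive."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(source, photo, question)
                   for source in (computer_vision_answer, human_answer)]
        return next(as_completed(futures)).result()

print(ask(b"<jpeg bytes>", "What is this can?"))
```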
{"title":"Analyzing visual questions from visually impaired users","authors":"Erin L. Brady","doi":"10.1145/2049536.2049622","DOIUrl":"https://doi.org/10.1145/2049536.2049622","url":null,"abstract":"Many new technologies have been developed to assist people who are visually impaired in learning about their environment, but there is little understanding of their motivations for using these tools. Our tool VizWiz allows users to take a picture using their mobile phone, ask a question about the picture's contents, and receive an answer in nearly realtime. This study investigates patterns in the questions that visually impaired users ask about their surroundings, and presents the benefits and limitations of responses from both human and computerized sources.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116374321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Use of serious games for motivational balance rehabilitation of cerebral palsy patients
Biel Moyà Alcover, Antoni Jaume-i-Capó, J. Varona, Pau Martínez-Bueso, Alejandro Mesejo-Chiong. doi:10.1145/2049536.2049615
Research shows that serious games help to motivate users in rehabilitation, and therapy is more effective when users are motivated. In this work we experiment with serious games for cerebral palsy patients, who rarely show capacity increases with therapy, which demotivates them. For this reason, we implemented balance-rehabilitation video games for this group of patients. The video games were developed using the prototyping paradigm, respecting the requirements specified by physiotherapists and including desirable features for rehabilitation serious games reported in the literature. A group of patients who had abandoned therapy the previous year due to loss of motivation tested the video games over a period of six months. While using the video games, no patient abandoned therapy, showing the appropriateness of games for this kind of patient.
{"title":"Use of serious games for motivational balance rehabilitation of cerebral palsy patients","authors":"Biel Moyà Alcover, Antoni Jaume-i-Capó, J. Varona, Pau Martínez-Bueso, Alejandro Mesejo-Chiong","doi":"10.1145/2049536.2049615","DOIUrl":"https://doi.org/10.1145/2049536.2049615","url":null,"abstract":"Research studies show that serious games help to motivate users in rehabilitation and therapy is better when users are motivated. In this work we experiment with serious games for cerebral palsy patients, who rarely show capacity increases with therapy which causes them demotivation. For this reason, we have implemented balance rehabilitation video games for this group of patients. The video games were developed using the prototype development paradigm, respecting the requirements indicated by physiotherapists and including desirable features for rehabilitation serious games presented in the literature. A set of patients who abandoned therapy last year due to loss of motivation, has tested the video game for a period of 6 months. Whilst using the video game no patients have abandoned therapy, showing the appropriateness of games for this kind of patients.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125069606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting deaf children's reading skills: the many challenges of text simplification
C. Vettori, O. Mich. doi:10.1145/2049536.2049608
Deaf children have great difficulty with reading comprehension. In this contribution, we illustrate how we collected, simplified, and presented a set of stories in order to make them suitable for young Italian deaf readers, from both a linguistic and a formal point of view. The aim is to stimulate their pleasure in reading. The experimental data suggest that the approach is effective and that enriching the stories with static and/or animated drawings significantly improves text readability. However, the data also clearly show that textual simplification alone is not enough to meet the needs of the target group, and that the story structure itself and its presentation have to be carefully planned.
{"title":"Supporting deaf children's reading skills: the many challenges of text simplification","authors":"C. Vettori, O. Mich","doi":"10.1145/2049536.2049608","DOIUrl":"https://doi.org/10.1145/2049536.2049608","url":null,"abstract":"Deaf children have great difficulties in reading comprehension. In our contribution, we illustrate how we have collected, simplified and presented some stories in order to render them suitable for young Italian deaf readers both from a linguistic and a formal point of view. The aim is to stimulate their pleasure of reading. The experimental data suggest that the approach is effective and that enriching the stories with static and/or animated drawings significantly improves text readability. However, they also clearly point out that textual simplification alone is not enough to meet the needs of the target group and that the story structure itself and its presentation have to be carefully planned.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130737416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving public transit accessibility for blind riders: a train station navigation assistant
Markus Guentert. doi:10.1145/2049536.2049626
Blind people often depend on public transit for mobility. In interviews, I learned that changing trains and orienting oneself inside stations is a significant obstacle to spontaneous travel. Since GPS navigation typically cannot be used indoors, this paper focuses on building a tool that assists blind people in navigating inside train stations, designed for commodity hardware such as the Apple iPhone.
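The abstract does not describe the navigation method. One plausible illustration, assumed here rather than taken from the paper, models a station as a graph of landmarks and guides the user along the path with the fewest landmarks:

```python
from collections import deque

# Hypothetical landmark graph for one station.
station_graph = {
    "Entrance": ["Ticket hall"],
    "Ticket hall": ["Entrance", "Stairs to platform 1", "Elevator"],
    "Stairs to platform 1": ["Ticket hall", "Platform 1"],
    "Elevator": ["Ticket hall", "Platform 1"],
    "Platform 1": ["Stairs to platform 1", "Elevator"],
}

def route(start: str, goal: str) -> list:
    """Breadth-first search: the fewest landmarks between start and goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in station_graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(" -> ".join(route("Entrance", "Platform 1")))
# Entrance -> Ticket hall -> Stairs to platform 1 -> Platform 1
```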
{"title":"Improving public transit accessibility for blind riders: a train station navigation assistant","authors":"Markus Guentert","doi":"10.1145/2049536.2049626","DOIUrl":"https://doi.org/10.1145/2049536.2049626","url":null,"abstract":"Blind people often depend on public transit for mobility. In interviews I learned that changing trains and orientation inside stations is a significant hindering reason for not being spontaneous. Since GPS-navigation typically cannot be used indoors, this paper focuses on building a tool for blind people to assist them in navigating inside train stations, designed for commodity hardware like the Apple iPhone.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114070571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guidelines for an accessible web automation interface
Yury Puzis, Eugene Borodin, Faisal Ahmed, V. Melnyk, I. Ramakrishnan. doi:10.1145/2049536.2049591
In recent years, the Web has become an ever more sophisticated and irreplaceable tool in our daily lives. While the visual Web has been advancing at a rapid pace, assistive technology has not been able to keep up, increasingly putting visually impaired users at a disadvantage. Web automation has the potential to bridge the accessibility divide between the ways blind and sighted people access the Web; specifically, it can enable blind people to quickly accomplish web browsing tasks that were previously slow, hard, or even impossible to complete. In this paper, we propose guidelines for the design of intuitive and accessible web automation that has the potential to increase the accessibility and usability of web pages, reduce interaction time, and improve the browsing experience. Our findings and a preliminary user study demonstrate the feasibility of, and emphasize the pressing need for, truly accessible web automation technologies.
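To illustrate what an accessible web automation interface might expose (the concrete guidelines are in the paper; this representation is an assumption), a recorded browsing task can be stored as named, announceable steps that a screen-reader user can audit before replaying:

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str        # "fill", "click", or "read"
    target: str        # accessible name of the element to act on
    value: str = ""    # text to enter, for "fill" steps

# A hypothetical recorded task: checking train times.
check_train_times = [
    Step("fill", "From station", "Berlin Hbf"),
    Step("fill", "To station", "Munich Hbf"),
    Step("click", "Search"),
    Step("read", "Results list"),
]

def describe(macro: list) -> None:
    """Announce each step so the user can audit or interrupt the automation."""
    for i, step in enumerate(macro, 1):
        detail = f" with '{step.value}'" if step.value else ""
        print(f"Step {i}: {step.action} '{step.target}'{detail}")

describe(check_train_times)
```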
{"title":"Guidelines for an accessible web automation interface","authors":"Yury Puzis, Eugene Borodin, Faisal Ahmed, V. Melnyk, I. Ramakrishnan","doi":"10.1145/2049536.2049591","DOIUrl":"https://doi.org/10.1145/2049536.2049591","url":null,"abstract":"In recent years, the Web has become an ever more sophisticated and irreplaceable tool in our daily lives. While the visual Web has been advancing at a rapid pace, assistive technology has not been able to keep up, increasingly putting visually impaired users at a disadvantage. Web automation has the potential to bridge the accessibility divide between the ways blind and sighted people access the Web; specifically, it can enable blind people to accomplish quickly web browsing tasks that were previously slow, hard, or even impossible to complete. In this paper, we propose guidelines for the design of intuitive and accessible web automation that has the potential to increase accessibility and usability of web pages, reduce interaction time, and improve user browsing experience. Our findings and a preliminary user study demonstrate the feasibility of and emphasize the pressing need for truly accessible web automation technologies.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128605997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensing human movement of mobility and visually impaired people
Yusuke Fukushima, Hiromasa Uematsu, Ryotarou Mitsuhashi, Hidetaka Suzuki, I. Yairi. doi:10.1145/2049536.2049606
This paper studies the movement of mobility-impaired and visually impaired people using mobile sensing devices, as a first step toward creating an accessible information base. Nine mobility-impaired persons took part in a wheelchair-movement experiment, and the visualized sensing results, mapped onto Google Maps, were compared with their subjective impressions. In addition, one blind person took part in a walking experiment accompanied by a walking assistant. The sensing results show that a single accelerometer was sufficient to detect walking, descending, and waiting behaviors.
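As a simplified sketch of how a single accelerometer stream can separate these behaviors (the abstract does not give the method; the thresholds below are illustrative), waiting shows low signal variance, walking shows high variance, and descending adds a sustained downward shift on the vertical axis:

```python
import statistics

def classify_window(vertical_accel: list,
                    still_var: float = 0.02,
                    descend_mean: float = -0.15) -> str:
    """Classify one window of vertical acceleration (gravity removed, m/s^2)."""
    variance = statistics.pvariance(vertical_accel)
    mean = statistics.fmean(vertical_accel)
    if variance < still_var:
        return "waiting"
    if mean < descend_mean:
        return "descending"
    return "walking"

print(classify_window([0.01, -0.02, 0.0, 0.015]))   # waiting
print(classify_window([0.9, -0.8, 1.1, -1.0]))      # walking
print(classify_window([-0.5, -1.2, 0.3, -1.0]))     # descending
```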
{"title":"Sensing human movement of mobility and visually impaired people","authors":"Yusuke Fukushima, Hiromasa Uematsu, Ryotarou Mitsuhashi, Hidetaka Suzuki, I. Yairi","doi":"10.1145/2049536.2049606","DOIUrl":"https://doi.org/10.1145/2049536.2049606","url":null,"abstract":"This paper studies human movement of both mobility and visually impaired people using mobile sensing devices as the first step toward creating an accessible information base. Nine mobility impaired persons conduct an experiment of wheelchair moving, and the visualized sensing results mapped on Googlemap is compared with their subjective feelings. Also, one blind person conducts an experiment of walking with a walking assistant. The sensing results show that a single accelerometer enabled to detect walking, descending and waiting behaviors.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121404158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}