Understanding the Power of Control in Autonomous Vehicles for People with Vision Impairment
Robin N. Brewer, Vaishnav Kameswaran
DOI: 10.1145/3234695.3236347

Autonomy and control are important themes in design for people with disabilities. With the rise of research on autonomous vehicle design, we investigate how people with vision impairments perceive differences in control between semi- and fully autonomous vehicles. We conducted focus groups with 15 people with vision impairments. Each focus group included a design component asking participants to design voice-based and tactile solutions to problems identified by the group. We contribute a new perspective on independence in the context of control. We discuss the importance of driving for blind and low vision people, describe differences in perceptions of autonomous vehicles based on their level of autonomy, and examine the use of assistive technology in vehicle operation and information gathering. Our findings guide the design of accessible autonomous transportation systems as well as existing navigation and orientation systems for people with vision impairments.

An Interactive Multimodal Guide to Improve Art Accessibility for Blind People
Luis Cavazos Quero, Jorge David Iranzo Bartolomé, Seonggu Lee, En Han, Sunhee Kim, Jun-Dong Cho
DOI: 10.1145/3234695.3241033

The development of 3D printing technology has improved how visually impaired people engage with two-dimensional visual artworks. However, it remains difficult for them to explore, experience, and clearly understand such works. We introduce an interactive multimodal guide in which a 3D-printed 2.5D representation of a painting can be explored by touch. Touching specific features of the representation triggers localized verbal, audio, wind, and light/heat feedback events that convey spatial and semantic information. In this work we present a working prototype developed through three sessions using a participatory design approach.

Exploring Paths to a More Accessible Digital Future
Judy Brewer
DOI: 10.1145/3234695.3243502

Advances and proliferation of digital technologies have greatly expanded access to the information society for people with disabilities. Yet when considering which devices, applications, or cutting-edge immersive virtual environments to explore, people with disabilities must still take into account whether our needs will be fully supported. Despite considerable progress over the years, any time we consider educational programs, employment opportunities, online banking, electronic health care portals, artistic endeavors, and entertainment options, we still must worry whether we will encounter barriers, and whether we will need to find extra time to address user interface and interoperability problems before addressing the tasks we had originally planned.

An Accessible CAD Workflow Using Programming of 3D Models and Preview Rendering in A 2.5D Shape Display
A. Siu, Joshua A. Miele, Sean Follmer
DOI: 10.1145/3234695.3240996

Affordable rapid 3D printing technologies have become a key enabler of the maker movement by giving individuals the ability to create finished physical products. However, existing computer-aided design (CAD) tools for authoring and editing 3D models are mostly visually reliant and limit access for people who are blind or visually impaired. We propose an accessible CAD workflow in which 3D models are generated through OpenSCAD, a script-based 3D modeling tool, and rendered at interactive speeds on an actuated 2.5D shape display. We report preliminary findings from a case study with one blind user. Based on our observations, we frame design imperatives for interactions that might be important in future accessible CAD systems with tactile output.

A Feasibility Study of Using Google Street View and Computer Vision to Track the Evolution of Urban Accessibility
Ladan Najafizadeh, Jon E. Froehlich
DOI: 10.1145/3234695.3240999

Previous work has explored scalable methods to collect data on the accessibility of the built environment by combining manual labeling, computer vision, and online map imagery. In this poster paper, we explore how to extend these methods to track the evolution of urban accessibility over time. Using Google Street View's "time machine" feature, we introduce a three-stage classification framework: (i) manually labeling accessibility problems in one time period; (ii) classifying the labeled image patch into one of five accessibility categories; (iii) localizing the patch in all previous snapshots. Our preliminary results analyzing 1633 Street View images across 376 locations demonstrate feasibility.

Improving the Academic Inclusion of a Student with Special Needs at University Bordeaux
John J. Kelway, Anke M. Brock, P. Guitton, Aurélie Millet, Yasushi Nakata
DOI: 10.1145/3234695.3241482

Recently, there has been a sharp increase in the number of students with disabilities (SWDs) enrolled in universities. Unfortunately, SWDs still struggle to attain the same level of education as non-disabled students. This paper presents a collaborative approach between members of the student support service, researchers, and a special-needs student to improve his access to and participation in university education. We performed a person-technology match and analyzed different existing technologies. We then designed and printed a keyguard, a keyboard stand, and a mobile armrest, which allowed him to almost double his text entry speed on a computer. We hope that our experience will inspire other universities to better address the needs of students with disabilities.

Jellys
Mikel Ostiz-Blanco, Marie Lallier, Sergi Grau, Luz Rello, Jeffrey P. Bigham, Manuel Carreiras
DOI: 10.1145/3234695.3241028

This demo describes an ongoing research project that aims to develop a video game for training two independent cognitive components involved in reading development: visual attention and auditory rhythm. The video game includes two types of gaming activities for each component. First, a proof of concept was carried out with 10 children with dyslexia. This proof of concept served as the foundation for developing a prototype, which was then assessed: human-computer interaction, usability, and engagement were measured in a user study with 22 children with dyslexia and 22 without. No significant interaction differences between the groups were found. The usability and engagement evaluations were positive and will be used to improve the video game. Its efficacy will be tested in a longitudinal training study with developing readers. A video of Jellys user testing is available at https://youtu.be/T9oO9bZFdmM.

Why Is Gesture Typing Promising for Older Adults?: Comparing Gesture and Tap Typing Behavior of Older with Young Adults
Yu-Hao Lin, Suwen Zhu, Yu-Jung Ko, Wenzhe Cui, Xiaojun Bi
DOI: 10.1145/3234695.3236350

Gesture typing has been a widely adopted text entry method on touchscreen devices. We conducted a study to understand whether older adults can gesture type, how they type, what the strengths and weaknesses of gesture typing are, and how to improve it further. By logging stroke-level interaction data and leveraging existing modeling tools, we compared the gesture and tap typing behavior of older adults with that of young adults. Our major finding is promising and encouraging: gesture typing outperformed typical tap typing for older adults and was very easy for them to learn. For the 14 older adults who had no prior gesture typing experience, gesture typing input speed was 15.28% higher than tap typing speed. One of the main reasons was that older adults adopted a word-level input strategy in gesture typing, whereas they often used a letter-level correction strategy in tap typing. Compared with young adults, older adults exhibited little degradation in gesture accuracy. Our study also led to implications for further improving gesture typing for older adults.

Involving People with Cognitive and Communication Impairments in Mobile Health App Design
J. Arnott, Matthew R. Malone, Gareth Lloyd, Bernadette Brophy-Arnott, Susan D. Munro, Robyn McNaughton
DOI: 10.1145/3234695.3240998

People with cognitive and communication impairments face multiple challenges when asked to be involved in the design of technology intended for them. This population is under-represented in healthcare research and experiences health inequalities relative to the general population. The work discussed here concerns how to adapt research processes to suit people with these difficulties when developing smartphone apps that give access to health promotion information, an area in which health inequalities arise. Strategies are identified to help participants understand the proposed area of work, give consent to participation, and be involved in activities such as evaluation. A combination of adaptations is proposed to engage people who would otherwise be excluded. It is clear that strategies used to make research participation accessible can help people with cognitive and communication impairments to influence and inform the development of technology for their use.

Redesigning and Deploying the Universal Sound Detector: Notifying Deaf and Hard-of-hearing Users of Audio Signals
J. Stanislow, Gary W. Behm
DOI: 10.1145/3234695.3240993

This poster presents design updates and deployment results from the Universal Sound Detector (USD) project. The USD is a redesign of the Programmable Sound Detector [3], a device created to notify deaf and hard-of-hearing (DHH) people of auditory signals from technologies in their environment. Unlike other hardware and software solutions that respond indiscriminately to any sound, the USD can be customized to recognize only a specific sound. The poster describes the USD's function and implementation, reports on four deployments in different environments, and compares past and present versions of the device's design before discussing limitations and future work.