Sonification of Pathways for People with Visual Impairments
D. Ahmetovic, F. Avanzini, A. Baratè, C. Bernareggi, G. Galimberti, L. A. Ludovico, S. Mascetti, G. Presti
Indoor navigation is an important service, currently investigated in both industry and academia. While the main focus of research is the computation of the user's position, the additional challenge of conveying guidance instructions arises when the target user is blind or visually impaired (BVI). This contribution presents our ongoing research on adopting sonification techniques to guide a BVI person. In particular, we introduce three sonification techniques to guide the user during rotations. Preliminary experiments, conducted with 7 BVI people, show that some of the proposed sonification techniques outperform a benchmark solution adopted in previous contributions.
{"title":"Sonification of Pathways for People with Visual Impairments","authors":"D. Ahmetovic, F. Avanzini, A. Baratè, C. Bernareggi, Gabriele Galimberti, L. A. Ludovico, S. Mascetti, G. Presti","doi":"10.1145/3234695.3241005","DOIUrl":"https://doi.org/10.1145/3234695.3241005","url":null,"abstract":"Indoor navigation is an important service, currently investigated both in industry and academia. While the main focus of research is the computation of users' position, the additional challenge of conveying guidance instructions arises when the target user is blind or visually impaired (BVI). This contribution presents our ongoing research aimed at adopting sonification techniques to guide a BVI person. In particular we introduce three sonification techniques to guide the user during rotations. Preliminary results, conducted with 7 BVI people, show that some of the proposed sonification technique outperform a benchmark solution adopted in previous contributions.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134410805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Movement Characteristics and Effects of GUI Design on How Older Adults Swipe in Mid-Air
A. Cabreira, F. Hwang
We conducted a study with 25 older adults to investigate how older users perform swipe-based interactions in mid-air and how menu sizes may affect swipe characteristics. Our findings suggest that currently implemented motion-based interaction parameters may not be well aligned with the expectations and physical abilities of the older population. In addition, we find that GUI design can shape how older users produce a swipe gesture in mid-air, and that appropriate GUI design can lead to higher success rates for users with little familiarity with this novel input method.
DOI: https://doi.org/10.1145/3234695.3241014

{"title":"Session details: Session 6: Advancing Communication","authors":"Abi Roper","doi":"10.1145/3284380","DOIUrl":"https://doi.org/10.1145/3284380","url":null,"abstract":"","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124777609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Present and Future of Museum Accessibility for People with Visual Impairments
Saki Asakawa, J. Guerreiro, D. Ahmetovic, Kris M. Kitani, C. Asakawa
People with visual impairments (PVI) have shown interest in visiting museums and enjoying visual art. Based on this knowledge, some museums provide tactile reproductions of artworks, offer specialized tours for PVI, or enable them to schedule accessible visits. However, the ability of PVI to visit museums still depends on the assistance they receive from family and friends or from museum personnel. In this paper, we survey 19 PVI to understand their opinions and expectations about visiting museums independently, as well as the requirements of user interfaces to support such visits. Moreover, we extend existing knowledge about the previous experiences, motivations, and accessibility issues of PVI in museums.
{"title":"The Present and Future of Museum Accessibility for People with Visual Impairments","authors":"Saki Asakawa, J. Guerreiro, D. Ahmetovic, Kris M. Kitani, C. Asakawa","doi":"10.1145/3234695.3240997","DOIUrl":"https://doi.org/10.1145/3234695.3240997","url":null,"abstract":"People with visual impairments (PVI) have shown interest in visiting museums and enjoying visual art. Based on this knowledge, some museums provide tactile reproductions of artworks, specialized tours for PVI, or enable them to schedule accessible visits. However, the ability of PVI to visit museums is still dependent on the assistance they get from their family and friends or from the museum personnel. In this paper, we surveyed 19 PVI to understand their opinions and expectations about visiting museums independently, as well as the requirements of user interfaces to support it. Moreover, we increase the knowledge about the previous experiences, motivations and accessibility issues of PVI in museums.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124307107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BrowseWithMe: An Online Clothes Shopping Assistant for People with Visual Impairments
Abigale Stangl, Esha Kothari, S. Jain, Tom Yeh, K. Grauman, D. Gurari
Our interviews with people who have visual impairments show that clothes shopping is an important activity in their lives. Unfortunately, clothes shopping websites remain largely inaccessible. We propose design recommendations that address the online accessibility issues reported by visually impaired study participants, together with an implementation, BrowseWithMe, that embodies them. BrowseWithMe employs artificial intelligence to automatically convert a product web page into a structured representation that enables a user to interactively ask the system what they want to learn about a product (e.g., What is the price? Can I see a magnified image of the pants?). This lets people actively solicit the specific information they are seeking rather than passively listen to unparsed information. Experiments demonstrate that BrowseWithMe can make online clothes shopping more accessible and produce accurate image descriptions.
{"title":"BrowseWithMe: An Online Clothes Shopping Assistant for People with Visual Impairments","authors":"Abigale Stangl, Esha Kothari, S. Jain, Tom Yeh, K. Grauman, D. Gurari","doi":"10.1145/3234695.3236337","DOIUrl":"https://doi.org/10.1145/3234695.3236337","url":null,"abstract":"Our interviews with people who have visual impairments show clothes shopping is an important activity in their lives. Unfortunately, clothes shopping web sites remain largely inaccessible. We propose design recommendations to address online accessibility issues reported by visually impaired study participants and an implementation, which we call BrowseWithMe, to address these issues. BrowseWithMe employs artificial intelligence to automatically convert a product web page into a structured representation that enables a user to interactively ask the BrowseWithMe system what the user wants to learn about a product (e.g., What is the price? Can I see a magnified image of the pants?). This enables people to be active solicitors of the specific information they are seeking rather than passive listeners of unparsed information. Experiments demonstrate BrowseWithMe can make online clothes shopping more accessible and produce accurate image descriptions.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117268464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 2: Supporting Speech","authors":"Robin N. Brewer","doi":"10.1145/3284376","DOIUrl":"https://doi.org/10.1145/3284376","url":null,"abstract":"","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129356336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing a Context Aware AAC Solution
Conor McKillop
Augmentative and Alternative Communication (AAC) software can improve the quality of life of individuals with speech, language, and communication needs. However, the communication rate achievable with current devices remains significantly lower than that of unimpaired speech. The embedded technologies available in mobile devices present an opportunity to improve the ease and efficiency of communication through context-aware computing. This paper explores the iterative design process of a context-aware keyboard prototype and discusses the results of its evaluation, which suggest the prototype could markedly improve the communication rate of AAC users.
DOI: https://doi.org/10.1145/3234695.3240990

A Demo of Talkit++: Interacting with 3D Printed Models Using an iOS Device
Lei Shi, Zhuohao Zhang, Shiri Azenkot
Tactile models are important learning materials for visually impaired students. With the adoption of 3D printing technologies, visually impaired students and teachers will have more access to 3D printed tactile models. We designed Talkit++, an iOS application that plays audio and visual content as a user touches parts of a 3D print. With Talkit++, a visually impaired student can explore a printed model tactilely and use finger gestures and speech commands to get more information about certain elements in the model. Talkit++ detects the model and finger gestures using computer vision algorithms, simple accessories like paper stickers and printable trackers, and the built-in RGB camera on an iOS device. Based on the model's position and the user's input, Talkit++ speaks textual information, plays audio recordings, and displays visual animations.
DOI: https://doi.org/10.1145/3234695.3241004

Volunteer-Based Online Studies With Older Adults and People with Disabilities
Qisheng Li, Krzysztof Z. Gajos, Katharina Reinecke
There are few large-scale empirical studies with people with disabilities or older adults, mainly because recruiting participants with specific characteristics is even harder than recruiting young and/or non-disabled populations. Analyzing four online experiments on LabintheWild with a total of 355,656 participants, we show that volunteer-based online experiments that provide personalized feedback attract large numbers of participants with diverse disabilities and ages, and allow robust studies with these populations that replicate and extend the findings of prior laboratory studies. To find out what motivates people with disabilities to take part, we additionally analyzed participants' feedback and forum entries that discuss LabintheWild experiments. The results show that participants use the studies to diagnose themselves, compare their abilities to others, quantify potential impairments, self-experiment, and share their own stories -- findings that we use to inform design guidelines for online experiment platforms that adequately support and engage people with disabilities.
DOI: https://doi.org/10.1145/3234695.3236360

Exploring Aural and Haptic Feedback for Visually Impaired People on a Track: A Wizard of Oz Study
Kyle Rector, Rachel Bartlett, S. Mullan
Access to a variety of exercises is important for maintaining a healthy lifestyle, including physical activity in public spaces. A 400-meter jogging track is not accessible to people with visual impairments because it provides only visual cues for staying in one's lane. As a first step toward making exercise spaces accessible, we conducted an ecologically valid Wizard of Oz study comparing the accuracy and user experience of four feedback conditions -- human guide, verbal, wrist vibration, and head beat -- while participants walked around the track. The technology conditions did not affect accuracy, but the order of preference was human guide, verbal, wrist vibration, and then head beat. Participants had a difficult time perceiving vibrations when using their cane or guide dog, and lower-frequency sounds made it difficult to focus on their existing navigation strategies.
DOI: https://doi.org/10.1145/3234695.3236345
