Independence is essential for everyone and crucial for people with disabilities. Being able to perform the activities of daily living as autonomously as possible is an important step towards real inclusion and an independent life. Several technology-enhanced services and tools have been created to address special-needs users, but are they really used and appreciated by them? Sensors and radio frequency devices are increasingly exploited to develop solutions such as the smart home, aimed at improving the quality of life for all, including people with visual impairment. This paper collects blind users' expectations and habits regarding home automation technology through an online survey and face-to-face interviews. Specifically, 42 visually impaired people answered an accessible online questionnaire to provide more insight into their needs and preferences. Next, semi-structured short interviews conducted with a set of eight totally blind participants enabled the collection of relevant user requirements, in order to better understand the obstacles experienced and to design usable home automation and remote control systems. Results showed that the main requests concern increased autonomy in everyday tasks and greater usability and flexibility when using remote home automation controls. Based on the collected feedback, a set of general suggestions for designers and developers of home automation and remote control systems is proposed in order to enhance accessibility and usability for blind users.
{"title":"Home Automation for an Independent Living: Investigating the Needs of Visually Impaired People","authors":"B. Leporini, M. Buzzi","doi":"10.1145/3192714.3192823","DOIUrl":"https://doi.org/10.1145/3192714.3192823","url":null,"abstract":"Independence is essential for everyone and crucial for people with disabilities. Being able to perform the activities of daily living as autonomously as possible is an important step towards real inclusion and an independent life. Several technology-enhanced services and tools have been created to address special-needs users, but are they really used and appreciated by them? Sensors and radio frequency devices are increasingly exploited to develop solutions such as the smart home, aimed at improving the quality of life for all, including people with visual impairment. This paper collects blind users' expectations and habits regarding home automation technology through an online survey and face-to-face interviews. Specifically, 42 visually impaired people answered an accessible online questionnaire to provide more insight into their needs and preferences. Next, semi-structured short interviews conducted with a set of eight totally blind participants enabled the collection of relevant user requirements in order to better understand the obstacles experienced, and to design usable home automation and remote control systems. Results showed that the main requests regard increasing autonomy in everyday tasks and having more usability and flexibility when using remote home automation control. Thanks to the collected feedback, a set of general suggestions for designers and developers of home automation and remote control systems has been proposed in order to enhance accessibility and usability for the blind user.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121094390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taliesin L. Smith, Jesse Greenberg, S. Reid, Emily B. Moore
Interactive simulations are used in classrooms around the world to support student learning. Creating accessible interactive simulations is a complex challenge that pushes the boundaries of current accessibility approaches and standards. In this work, we present a new approach to addressing accessibility needs within complex interactives. Within a custom scene graph that uses a model-view-controller architectural pattern, we employ a parallel document object model (PDOM) to make interactive simulations (PhET Interactive Simulations) accessible to students through alternative input devices and through descriptions accessed with screen reader software. In this paper, we describe our accessibility goals, challenges, and approach to creating robust accessible interactive simulations, and provide examples from an accessible simulation we have developed, along with possibilities for future extensions.
{"title":"Parallel DOM Architecture for Accessible Interactive Simulations","authors":"Taliesin L. Smith, Jesse Greenberg, S. Reid, Emily B. Moore","doi":"10.1145/3192714.3192817","DOIUrl":"https://doi.org/10.1145/3192714.3192817","url":null,"abstract":"Interactive simulations are used in classrooms around the world to support student learning. Creating accessible interactive simulations is a complex challenge that pushes the boundaries of current accessibility approaches and standards. In this work, we present a new approach to addressing accessibility needs within complex interactives. Within a custom scene graph that utilizes a model-view-controller architectural pattern, we utilize a parallel document object model (PDOM) to create interactive simulations (PhET Interactive Simulations) accessible to students through alternative input devices and descriptions accessed with screen reader software. In this paper, we describe our accessibility goals, challenges, and approach to creating robust accessible interactive simulations, and provide examples from an accessible simulation we have developed and possibilities for future extensions.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133914282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shuyi Song, Jiajun Bu, Chengchao Shen, Andreas Artmeier, Zhi Yu, Qin Zhou
Web accessibility metrics can measure the accessibility levels of websites. Although many metrics with different motivations have been proposed, current metrics are limited in their applicability when considering user experience. This study proposes the Reliability Aware Web Accessibility Experience Metric (RA-WAEM), a novel Web accessibility metric that considers both the user experience of people with disabilities and their reliability in objectively assessing the severity of accessibility barriers. We present an optimization algorithm based on Expectation Maximization to derive the parameters of RA-WAEM efficiently. Moreover, we conduct an extensive accessibility study on 46 websites with 323,098 Web pages and collect the user experience of 122 people. An evaluation on this dataset shows that RA-WAEM outperforms state-of-the-art accessibility metrics in reflecting user experience.
{"title":"Reliability Aware Web Accessibility Experience Metric","authors":"Shuyi Song, Jiajun Bu, Chengchao Shen, Andreas Artmeier, Zhi Yu, Qin Zhou","doi":"10.1145/3192714.3192836","DOIUrl":"https://doi.org/10.1145/3192714.3192836","url":null,"abstract":"Web accessibility metrics can measure the accessibility levels of websites. Although many metrics with different motivations have been proposed, current metrics are limited in their applicability when considering user experience. This study proposes Reliability Aware Web Accessibility Experience Metric (RA-WAEM), a novel Web accessibility metric which considers the user experience of people with disabilities and their reliability in objectively assessing the severity of accessibility barriers. We present an optimization algorithm based on Expectation Maximization to derive the parameters of RA-WAEM efficiently. Moreover, we conduct an extensive accessibility study on 46 websites with 323,098 Web pages and collect the user experience of 122 people. An evaluation on this dataset shows that RA-WAEM outperforms state of the art accessibility metrics in reflecting the user experience.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130394970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Silvia García-Méndez, Milagros Fernández Gavilanes, E. Costa-Montenegro, Jonathan Juncal-Martínez, F. González-Castaño
We present our work on building the Spanish version of SimpleNLG by adapting it and creating new code to satisfy the linguistic requirements of Spanish. Beyond this adaptation, we have also built an enhanced library that needs only the main words as input and is able to carry out the generation process on its own. The adaptation of the library uses aLexiS, a complete and reliable lexicon with morphology that we created. The enhanced version, in turn, uses Elsa, created from the pictogram domain, which also contains the syntactic and semantic information needed to conduct the generation process automatically. Both the adaptation and its enhanced version can be usefully integrated into a range of applications, including web applications, bringing them natural language generation functionality. We provide a use case of the system focused on Augmentative and Alternative Communication and online video content services.
{"title":"Automatic Natural Language Generation Applied to Alternative and Augmentative Communication for Online Video Content Services using SimpleNLG for Spanish","authors":"Silvia García-Méndez, Milagros Fernández Gavilanes, E. Costa-Montenegro, Jonathan Juncal-Martínez, F. González-Castaño","doi":"10.1145/3192714.3192837","DOIUrl":"https://doi.org/10.1145/3192714.3192837","url":null,"abstract":"We present our work to build the Spanish version of SimpleNLG by adapting it and creating new code to satisfy the Spanish linguistic requirements. Not only have we developed this version but also we have achieved a library that only needs the main words as input and it is able to conduct the generation process on its own. The adaptation of the library uses aLexiS, a complete and reliable lexicon with morphology that we created. On the other hand, our enhanced version uses Elsa created from the pictogram domain, which also contains syntactic and semantic information needed to conduct the generation process automatically. Both the adaptation and its enhanced version may be useful integrated in several applications as well as web applications, bringing them natural language generation functionalities. We provide a use case of the system focused on Augmentative and Alternative Communication and online video content services.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126296630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of this research is to develop and implement Arabic accessibility resources for developers, web content managers and designers. The Arabic guidelines will not only assist Arab developers and designers in gaining a deep understanding of accessibility features, but also in applying these criteria to their Arabic websites in order to make them accessible to everyone, including people with disabilities. The Arabic web accessibility guidelines will be designed to be available to all developers and designers in the Middle East, including Kuwait.
{"title":"Arabic web accessibility guidelines: Understanding and use by web developers in Kuwait","authors":"Muhammad Saleem","doi":"10.1145/3192714.3196315","DOIUrl":"https://doi.org/10.1145/3192714.3196315","url":null,"abstract":"The aim of this research is to develop and implement Arabic accessibility resources for developers, web content managers and designers. The Arabic guidelines will not only assist Arabian developers and designers for a deep understanding of accessibility features, but also to apply these criteria on their Arabic websites in order to make them accessible to everyone including people with disabilities. The Arabic web accessibility guidelines will be designed to be reachable to all developers and designers in the Middle East including Kuwait.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124495586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andréa Britto Mattos, Dario Augusto Borges Oliveira
Previous work demonstrated that people who rely on lip-reading often prefer a frontal view of their interlocutor, but sometimes a profile view may display certain lip gestures more noticeably. This work presents an assistive tool that receives an unconstrained video of a speaker, captured from an arbitrary view, and not only locates the mouth region but also displays augmented versions of the lips in the frontal and profile views. This is done using deep Generative Adversarial Networks (GANs) trained on pairs of images. In the training set, each pair contains a mouth picture taken at a random angle and the corresponding picture (i.e., of the same mouth shape, person, and lighting condition) taken at a fixed view. In the test phase, the networks are able to receive an unseen mouth image taken at an arbitrary angle and map it to the fixed views -- frontal and profile. Because building a large-scale pairwise dataset is time consuming, we use realistic synthetic 3D models for training, and videos of real subjects as input for testing. Our approach is speaker-independent and language-independent, and our results demonstrate that the GAN can produce visually compelling results that may assist people with hearing impairment.
{"title":"Multi-view Mouth Renderization for Assisting Lip-reading","authors":"Andréa Britto Mattos, Dario Augusto Borges Oliveira","doi":"10.1145/3192714.3192824","DOIUrl":"https://doi.org/10.1145/3192714.3192824","url":null,"abstract":"Previous work demonstrated that people who rely on lip-reading often prefer a frontal view of their interlocutor, but sometimes a profile view may display certain lip gestures more noticeably. This work refers to an assistive tool that receives an unconstrained video of a speaker, captured at an arbitrary view, and not only locates the mouth region but also displays augmented versions of the lips in the frontal and profile views. This is made using deep Generative Adversarial Networks (GANs) trained on several pairs of images. In the training set, each pair contains a mouth picture taken at a random angle and the corresponding picture (i.e., relative to the same mouth shape, person, and lighting condition) taken at a fixed view. In the test phase, the networks are able to receive an unseen mouth image taken at an arbitrary angle and map it to the fixed views -- frontal and profile. Because building a large-scale pairwise dataset is time consuming, we use realistic synthetic 3D models for training, and videos of real subjects as input for testing. Our approach is speaker-independent, language-independent, and our results demonstrate that the GAN can produce visually compelling results that may assist people with hearing impairment.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"23 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114346325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mikaylah Gross, Joe Dara, Christopher Meyer, D. Bolchini
When people who are blind or visually impaired navigate the mobile web, they have to hold a phone in their hands at all times. Such continuous, two-handed interaction on a small screen hampers the user's ability to keep hands free to control aiding devices (e.g., cane) or touch objects nearby, especially on the go. In this paper, we introduce screenless access: a browsing approach that enables users to interact touch-free with aural navigation architectures using one-handed, in-air gestures recognized by an off-the-shelf armband. In a study with ten participants who are blind or visually impaired, we observed proficient navigation performance, users' conceptual fit with a screen-free paradigm, and low levels of cognitive load. Our findings model the errors users made due to limits of the design and system proposed, uncover navigation styles that participants used, and illustrate unprompted adaptations of gestures that were enacted effectively to appropriate the technology. User feedback revealed insights into the potential and limitations of screenless navigation to support convenience in traveling, work contexts and privacy-preserving scenarios, as well as concerns about gestures that may become socially conspicuous.
{"title":"Exploring Aural Navigation by Screenless Access","authors":"Mikaylah Gross, Joe Dara, Christopher Meyer, D. Bolchini","doi":"10.1145/3192714.3192815","DOIUrl":"https://doi.org/10.1145/3192714.3192815","url":null,"abstract":"When people who are blind or visually impaired navigate the mobile web, they have to hold a phone in their hands at all times. Such continuous, two-handed interaction on a small screen hampers the user's ability to keep hands free to control aiding devices (e.g., cane) or touch objects nearby, especially on-the-go. In this paper, we introduce screenless access: a browsing approach that enables users to interact touch-free with aural navigation architectures using one-handed, in-air gestures recognized by an off-the-shelf armband. In a study with ten participants who are blind or visually impaired, we observed proficient navigation performance, users conceptual fit with a screen-free paradigm, and low levels of cognitive load. Our findings model the errors users made due to limits of the design and system proposed, uncover navigation styles that participants used, and illustrate unprompted adaptations of gestures that were enacted effectively to appropriate the technology. User feedback revealed insights into the potential and limitations of screenless navigation to support convenience in traveling, work contexts and privacy-preserving scenarios, as well as concerns about gestures that may become socially conspicuous.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130065570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of this article is to focus on user experience with DysHelper, a dyslexia-assistive web extension. We conducted this research with university students over 18 years old. We describe the design of the extension and then focus on the various stages of the practical user experience, which consisted of individual user testing and the reading of two types of texts, followed by a discussion with the users. The results indicate that the extension is generally welcomed. Although DysHelper has its limits, the user experience research shows that it has significant potential to positively affect reading problems and can be easily used, also accommodating needs that may change over time.
{"title":"DysHelper: The Dyslexia Assistive User Experience","authors":"Tereza Parilová, Romana Remsíková","doi":"10.1145/3192714.3196320","DOIUrl":"https://doi.org/10.1145/3192714.3196320","url":null,"abstract":"The aim of this article is to focus on user experience with DysHelper, the dyslexia assistive web extension. We conducted this research with university students over 18 years old. We describe the design of the extension and then focus on describing the various stages of the practical user experience, which consisted of individual user testing, the reading two types of texts, followed by discussion with users. The results indicate that the extension is generally welcomed. Although DysHelper has its limits, user experience research shows that it has a significant potential to affect reading problems positively and can be easily used, also in consideration of needs that may change over time.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"481 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116690420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WYSIWYG mathematical editors have existed for several decades. Recent editors have mostly been web-based. These editors often provide buttons or palettes containing hundreds of symbols used in mathematics. People who use screen readers and switch devices are restricted to semi-linear access to the buttons and must wade through a large number of buttons to find the right symbol to insert if the symbol is not present on the keyboard. This paper presents data gleaned from textbooks showing that, if the subject area is known, the number of buttons needed for special symbols is small, so usability can be greatly improved.
{"title":"Improving Usability of Math Editors","authors":"N. Soiffer","doi":"10.1145/3192714.3192835","DOIUrl":"https://doi.org/10.1145/3192714.3192835","url":null,"abstract":"WYSIWYG mathematical editors have existed for several decades. Recent editors have mostly been web-based. These editors often provide buttons or palettes containing hundreds of symbols used in mathematics. People who use screen readers and switch devices are restricted to semi-linear access of the buttons and must wade through a large number of buttons to find the right symbol to insert if the symbol is not present on the keyboard. This paper presents data gleaned from textbooks that shows that if the subject area is known, the number of buttons needed for special symbols is small so usability can be greatly improved.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122159571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Guerreiro, Eshed Ohn-Bar, D. Ahmetovic, Kris M. Kitani, C. Asakawa
Recent techniques for indoor localization are now able to support practical, accurate turn-by-turn navigation for people with visual impairments (PVI). Understanding user behavior as it relates to situational contexts can be used to improve the interface's ability to adapt to problematic scenarios, and consequently reduce navigation errors. This work performs a fine-grained analysis of user behavior during indoor assisted navigation, outlining different scenarios where user behavior (either with a white cane or a guide dog) is likely to cause navigation errors. The scenarios include certain instructions (e.g., slight turns, approaching turns), cases of error recovery, and the surrounding environment (e.g., open spaces and landmarks). We discuss the findings and lessons learned from a real-world user study to guide future directions for the development of assistive navigation interfaces that consider users' behavior and coping mechanisms.
{"title":"How Context and User Behavior Affect Indoor Navigation Assistance for Blind People","authors":"J. Guerreiro, Eshed Ohn-Bar, D. Ahmetovic, Kris M. Kitani, C. Asakawa","doi":"10.1145/3192714.3192829","DOIUrl":"https://doi.org/10.1145/3192714.3192829","url":null,"abstract":"Recent techniques for indoor localization are now able to support practical, accurate turn-by-turn navigation for people with visual impairments (PVI). Understanding user behavior as it relates to situational contexts can be used to improve the ability of the interface to adapt to problematic scenarios, and consequently reduce navigation errors. This work performs a fine-grained analysis of user behavior during indoor assisted navigation, outlining different scenarios where user behavior (either with a white-cane or a guide-dog) is likely to cause navigation errors. The scenarios include certain instructions (e.g., slight turns, approaching turns), cases of error recovery, and the surrounding environment (e.g., open spaces and landmarks). We discuss the findings and lessons learned from a real-world user study to guide future directions for the development of assistive navigation interfaces that consider the users' behavior and coping mechanisms.","PeriodicalId":330095,"journal":{"name":"Proceedings of the Internet of Accessible Things","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122633919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}