
Proceedings of the Internet of Accessible Things: Latest Publications

Home Automation for an Independent Living: Investigating the Needs of Visually Impaired People
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3192823
B. Leporini, M. Buzzi
Independence is essential for everyone and crucial for people with disabilities. Being able to perform the activities of daily living as autonomously as possible is an important step towards real inclusion and an independent life. Several technology-enhanced services and tools have been created to address special-needs users, but are they really used and appreciated by them? Sensors and radio frequency devices are increasingly exploited to develop solutions such as the smart home, aimed at improving the quality of life for all, including people with visual impairment. This paper collects blind users' expectations and habits regarding home automation technology through an online survey and face-to-face interviews. Specifically, 42 visually impaired people answered an accessible online questionnaire to provide more insight into their needs and preferences. Next, semi-structured short interviews conducted with a set of eight totally blind participants enabled the collection of relevant user requirements in order to better understand the obstacles experienced, and to design usable home automation and remote control systems. Results showed that the main requests regard increasing autonomy in everyday tasks and having more usability and flexibility when using remote home automation control. Thanks to the collected feedback, a set of general suggestions for designers and developers of home automation and remote control systems has been proposed in order to enhance accessibility and usability for the blind user.
Citations: 24
Parallel DOM Architecture for Accessible Interactive Simulations
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3192817
Taliesin L. Smith, Jesse Greenberg, S. Reid, Emily B. Moore
Interactive simulations are used in classrooms around the world to support student learning. Creating accessible interactive simulations is a complex challenge that pushes the boundaries of current accessibility approaches and standards. In this work, we present a new approach to addressing accessibility needs within complex interactives. Within a custom scene graph that utilizes a model-view-controller architectural pattern, we utilize a parallel document object model (PDOM) to create interactive simulations (PhET Interactive Simulations) accessible to students through alternative input devices and descriptions accessed with screen reader software. In this paper, we describe our accessibility goals, challenges, and approach to creating robust accessible interactive simulations, and provide examples from an accessible simulation we have developed and possibilities for future extensions.
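The parallel-DOM idea can be illustrated with a minimal sketch: every visual scene-graph node keeps an accessible HTML peer, so screen readers traverse a semantic tree while sighted users see the rendered canvas. The class and element names below are illustrative assumptions, not PhET's actual API.

```python
# Minimal sketch of a "parallel DOM" (PDOM): each scene-graph node carries the
# HTML role and label that its accessible peer exposes to screen readers.
# Names here are illustrative, not PhET's actual implementation.

class SceneNode:
    def __init__(self, name, role="div", label=None):
        self.name = name
        self.role = role          # HTML element used in the parallel DOM
        self.label = label        # text exposed to screen reader users
        self.children = []

    def add_child(self, node):
        self.children.append(node)
        return node

    def to_pdom(self, depth=0):
        """Render this subtree as the parallel-DOM markup a screen reader sees."""
        pad = "  " * depth
        inner = "".join("\n" + c.to_pdom(depth + 1) for c in self.children)
        close = ("\n" + pad) if self.children else ""
        return f"{pad}<{self.role}>{self.label or ''}{inner}{close}</{self.role}>"

# A tiny simulation scene and its accessibility tree:
sim = SceneNode("sim", role="div")
sim.add_child(SceneNode("title", role="h1", label="Balloons and Static Electricity"))
controls = sim.add_child(SceneNode("controls", role="ul"))
controls.add_child(SceneNode("reset", role="li", label="Reset Balloon"))
print(sim.to_pdom())
```

Because the accessible tree is ordinary HTML, alternative input devices and screen readers interact with it through standard browser mechanisms while the visual scene graph renders independently.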
Citations: 12
Reliability Aware Web Accessibility Experience Metric
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3192836
Shuyi Song, Jiajun Bu, Chengchao Shen, Andreas Artmeier, Zhi Yu, Qin Zhou
Web accessibility metrics can measure the accessibility levels of websites. Although many metrics with different motivations have been proposed, current metrics are limited in their applicability when considering user experience. This study proposes Reliability Aware Web Accessibility Experience Metric (RA-WAEM), a novel Web accessibility metric which considers the user experience of people with disabilities and their reliability in objectively assessing the severity of accessibility barriers. We present an optimization algorithm based on Expectation Maximization to derive the parameters of RA-WAEM efficiently. Moreover, we conduct an extensive accessibility study on 46 websites with 323,098 Web pages and collect the user experience of 122 people. An evaluation on this dataset shows that RA-WAEM outperforms state of the art accessibility metrics in reflecting the user experience.
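The reliability-aware idea can be sketched with the classic one-coin simplification of rater-reliability EM: alternate between inferring the consensus severity of each barrier (weighting each user by estimated reliability) and re-estimating each user's reliability from their agreement with that consensus. This is a hedged illustration; RA-WAEM's actual model and parameters differ.

```python
# Hedged sketch of an EM loop for rater reliability (one-coin Dawid-Skene
# style), not RA-WAEM's actual formulation.
# votes[i][j] = 1 if user j rated barrier i as severe, else 0.

def em_reliability(votes, iters=50):
    n_items, n_users = len(votes), len(votes[0])
    reliability = [0.8] * n_users                 # initial guess for every user
    truth = [0.0] * n_items
    for _ in range(iters):
        # E-step: consensus severity of each barrier, weighted by reliability
        for i in range(n_items):
            score = sum(
                reliability[j] if votes[i][j] == 1 else 1 - reliability[j]
                for j in range(n_users)
            ) / n_users
            truth[i] = 1.0 if score > 0.5 else 0.0
        # M-step: reliability = fraction of a user's votes matching consensus
        for j in range(n_users):
            agree = sum(1 for i in range(n_items) if votes[i][j] == truth[i])
            reliability[j] = agree / n_items
    return truth, reliability

votes = [
    [1, 1, 0],   # two of three users call barrier 0 severe
    [1, 1, 0],
    [0, 0, 1],   # user 2 consistently disagrees with the majority
    [1, 1, 0],
]
truth, rel = em_reliability(votes)
print(truth, rel)
```

On this toy input the loop converges to trusting the two consistent users and down-weighting the outlier, which is the intuition behind weighting reported barrier severity by rater reliability.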
Citations: 10
Automatic Natural Language Generation Applied to Alternative and Augmentative Communication for Online Video Content Services using SimpleNLG for Spanish
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3192837
Silvia García-Méndez, Milagros Fernández Gavilanes, E. Costa-Montenegro, Jonathan Juncal-Martínez, F. González-Castaño
We present our work to build the Spanish version of SimpleNLG by adapting it and creating new code to satisfy Spanish linguistic requirements. Beyond this adaptation, we have produced a library that needs only the main words as input and can conduct the generation process on its own. The adaptation of the library uses aLexiS, a complete and reliable morphological lexicon that we created. Our enhanced version uses Elsa, created from the pictogram domain, which also contains the syntactic and semantic information needed to conduct the generation process automatically. Both the adaptation and its enhanced version can be integrated into a range of applications, including web applications, bringing them natural language generation functionality. We provide a use case of the system focused on Augmentative and Alternative Communication and online video content services.
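The "main words in, full phrase out" behavior rests on a morphological lexicon handling agreement automatically, the role aLexiS plays in the paper. The toy realizer below is an illustrative assumption, not the SimpleNLG-ES API: given a noun and an adjective in citation form, it picks the article and inflects the adjective for Spanish gender and number.

```python
# Illustrative sketch (not the SimpleNLG-ES API): realize a Spanish noun phrase
# from content words, with gender/number agreement driven by a tiny lexicon.

LEXICON = {
    "casa":   {"gender": "f"},                    # noun entries carry gender
    "perro":  {"gender": "m"},
    "blanco": {"f": "blanca", "m": "blanco"},     # adjectives carry gender forms
}

def realize_np(noun, adjective, plural=False):
    """Build article + noun + adjective with agreement handled automatically."""
    gender = LEXICON[noun]["gender"]
    article = ({"m": "los", "f": "las"} if plural else {"m": "el", "f": "la"})[gender]
    adj = LEXICON[adjective][gender]
    if plural:                                    # simple regular pluralization
        noun, adj = noun + "s", adj + "s"
    return f"{article} {noun} {adj}"

print(realize_np("casa", "blanco"))                 # la casa blanca
print(realize_np("perro", "blanco", plural=True))   # los perros blancos
```

A real lexicon must also cover irregular plurals, apocope, and verb morphology, which is why the paper emphasizes the completeness and reliability of aLexiS.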
Citations: 9
Arabic web accessibility guidelines: Understanding and use by web developers in Kuwait
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3196315
Muhammad Saleem
The aim of this research is to develop and implement Arabic accessibility resources for developers, web content managers and designers. The Arabic guidelines will not only assist Arab developers and designers in gaining a deep understanding of accessibility features, but also help them apply these criteria to their Arabic websites in order to make them accessible to everyone, including people with disabilities. The Arabic web accessibility guidelines will be designed to be reachable by all developers and designers in the Middle East, including Kuwait.
Citations: 3
Multi-view Mouth Renderization for Assisting Lip-reading
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3192824
Andréa Britto Mattos, Dario Augusto Borges Oliveira
Previous work demonstrated that people who rely on lip-reading often prefer a frontal view of their interlocutor, but sometimes a profile view may display certain lip gestures more noticeably. This work presents an assistive tool that receives an unconstrained video of a speaker, captured at an arbitrary view, and not only locates the mouth region but also displays augmented versions of the lips in the frontal and profile views. This is achieved using deep Generative Adversarial Networks (GANs) trained on several pairs of images. In the training set, each pair contains a mouth picture taken at a random angle and the corresponding picture (i.e., relative to the same mouth shape, person, and lighting condition) taken at a fixed view. In the test phase, the networks are able to receive an unseen mouth image taken at an arbitrary angle and map it to the fixed views -- frontal and profile. Because building a large-scale pairwise dataset is time consuming, we use realistic synthetic 3D models for training, and videos of real subjects as input for testing. Our approach is speaker-independent, language-independent, and our results demonstrate that the GAN can produce visually compelling results that may assist people with hearing impairment.
Citations: 8
Exploring Aural Navigation by Screenless Access
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3192815
Mikaylah Gross, Joe Dara, Christopher Meyer, D. Bolchini
When people who are blind or visually impaired navigate the mobile web, they have to hold a phone in their hands at all times. Such continuous, two-handed interaction on a small screen hampers the user's ability to keep hands free to control aiding devices (e.g., cane) or touch objects nearby, especially on-the-go. In this paper, we introduce screenless access: a browsing approach that enables users to interact touch-free with aural navigation architectures using one-handed, in-air gestures recognized by an off-the-shelf armband. In a study with ten participants who are blind or visually impaired, we observed proficient navigation performance, users' conceptual fit with a screen-free paradigm, and low levels of cognitive load. Our findings model the errors users made due to limits of the design and system proposed, uncover navigation styles that participants used, and illustrate unprompted adaptations of gestures that were enacted effectively to appropriate the technology. User feedback revealed insights into the potential and limitations of screenless navigation to support convenience in traveling, work contexts and privacy-preserving scenarios, as well as concerns about gestures that may become socially conspicuous.
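The core interaction loop of screenless access, in-air gestures driving an aural (spoken) cursor over a page's structure, can be sketched as a small state machine. The gesture names and command set below are illustrative assumptions, not the study's actual vocabulary.

```python
# Minimal sketch of mapping armband gestures to aural-navigation commands.
# Gesture names and the command set are illustrative, not the study's design.

GESTURES = {
    "wave_right": +1,   # move to the next page landmark and announce it
    "wave_left":  -1,   # move to the previous landmark and announce it
}

class AuralNavigator:
    """Walks a list of page landmarks and 'speaks' the current one."""

    def __init__(self, items):
        self.items = items
        self.index = 0
        self.spoken = []            # stands in for text-to-speech output

    def handle(self, gesture):
        step = GESTURES.get(gesture, 0)              # ignore unknown gestures
        self.index = max(0, min(len(self.items) - 1, self.index + step))
        if step:
            self.spoken.append(self.items[self.index])

nav = AuralNavigator(["Home", "News", "Contact"])
nav.handle("wave_right")   # announces "News"
nav.handle("wave_right")   # announces "Contact"
nav.handle("wave_left")    # announces "News"
print(nav.spoken)
```

Because all feedback is spoken and all input is one-handed and in-air, the phone can stay in a pocket, which is exactly the hands-free property the paper motivates.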
Citations: 8
DysHelper: The Dyslexia Assistive User Experience
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3196320
Tereza Parilová, Romana Remsíková
The aim of this article is to focus on user experience with DysHelper, the dyslexia assistive web extension. We conducted this research with university students over 18 years old. We describe the design of the extension and then focus on describing the various stages of the practical user experience, which consisted of individual user testing, reading two types of texts, and discussion with users. The results indicate that the extension is generally welcomed. Although DysHelper has its limits, user experience research shows that it has a significant potential to affect reading problems positively and can be easily used, also in consideration of needs that may change over time.
Citations: 0
Improving Usability of Math Editors
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3192835
N. Soiffer
WYSIWYG mathematical editors have existed for several decades. Recent editors have mostly been web-based. These editors often provide buttons or palettes containing hundreds of symbols used in mathematics. People who use screen readers and switch devices are restricted to semi-linear access of the buttons and must wade through a large number of buttons to find the right symbol to insert if the symbol is not present on the keyboard. This paper presents data gleaned from textbooks that shows that if the subject area is known, the number of buttons needed for special symbols is small so usability can be greatly improved.
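The paper's observation, that knowing the subject area shrinks the needed palette dramatically, amounts to filtering the full symbol set by a per-subject vocabulary. The subject-to-symbol sets below are illustrative assumptions, not the paper's textbook-derived data.

```python
# Sketch of subject-aware palette filtering for a math editor. The symbol sets
# are illustrative; the paper derives the real ones from textbook frequency data.

SUBJECT_SYMBOLS = {
    "algebra":  ["+", "-", "×", "÷", "=", "√", "^", "(", ")"],
    "geometry": ["°", "∠", "△", "⊥", "∥", "≅", "~", "π"],
    "calculus": ["∫", "∂", "∑", "lim", "→", "∞", "d/dx"],
}

def palette_for(subject, full_palette):
    """Keep only the symbols relevant to the chosen subject, in palette order."""
    wanted = set(SUBJECT_SYMBOLS.get(subject, []))
    return [s for s in full_palette if s in wanted]

full = ["+", "∫", "∠", "=", "∞", "√", "△", "∑"]
print(palette_for("algebra", full))    # ['+', '=', '√']
print(palette_for("calculus", full))   # ['∫', '∞', '∑']
```

A screen-reader or switch user now tabs through a handful of buttons instead of hundreds, which is the usability gain the paper quantifies.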
Citations: 1
How Context and User Behavior Affect Indoor Navigation Assistance for Blind People
Pub Date : 2018-04-23 DOI: 10.1145/3192714.3192829
J. Guerreiro, Eshed Ohn-Bar, D. Ahmetovic, Kris M. Kitani, C. Asakawa
Recent techniques for indoor localization are now able to support practical, accurate turn-by-turn navigation for people with visual impairments (PVI). Understanding user behavior as it relates to situational contexts can be used to improve the ability of the interface to adapt to problematic scenarios, and consequently reduce navigation errors. This work performs a fine-grained analysis of user behavior during indoor assisted navigation, outlining different scenarios where user behavior (either with a white-cane or a guide-dog) is likely to cause navigation errors. The scenarios include certain instructions (e.g., slight turns, approaching turns), cases of error recovery, and the surrounding environment (e.g., open spaces and landmarks). We discuss the findings and lessons learned from a real-world user study to guide future directions for the development of assistive navigation interfaces that consider the users' behavior and coping mechanisms.
Citations: 36