
ASSETS. Annual ACM Conference on Assistive Technologies: Latest Publications

IncluSet: A Data Surfacing Repository for Accessibility Datasets.
Pub Date : 2020-01-01 DOI: 10.1145/3373625.3418026
Hernisa Kacorri, Utkarsh Dwivedi, Sravya Amancherla, Mayanka K Jha, Riya Chanduka

Datasets and data sharing play an important role for innovation, benchmarking, mitigating bias, and understanding the complexity of real world AI-infused applications. However, there is a scarcity of available data generated by people with disabilities with the potential for training or evaluating machine learning models. This is partially due to smaller populations, disparate characteristics, lack of expertise for data annotation, as well as privacy concerns. Even when data are collected and are publicly available, it is often difficult to locate them. We present a novel data surfacing repository, called IncluSet, that allows researchers and the disability community to discover and link accessibility datasets. The repository is pre-populated with information about 139 existing datasets: 65 made publicly available, 25 available upon request, and 49 not shared by the authors but described in their manuscripts. More importantly, IncluSet is designed to expose existing and new dataset contributions so they may be discoverable through Google Dataset Search.
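Google Dataset Search discovers datasets by crawling pages that embed schema.org `Dataset` structured data, typically as JSON-LD. As an illustration of the kind of record a repository like IncluSet would expose (the field values and helper name here are hypothetical, not taken from the paper), a minimal sketch:

```python
import json

def dataset_jsonld(name, description, url, creators, license_url):
    """Build a schema.org Dataset record as JSON-LD. Google Dataset
    Search indexes pages that embed this markup in a script tag of
    type application/ld+json."""
    return {
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "name": name,
        "description": description,
        "url": url,
        "creator": [{"@type": "Person", "name": c} for c in creators],
        "license": license_url,
    }

# Hypothetical example entry, not an actual IncluSet record.
record = dataset_jsonld(
    name="Example Accessibility Dataset",
    description="Gesture samples contributed by blind participants.",
    url="https://example.org/dataset",
    creators=["A. Researcher"],
    license_url="https://creativecommons.org/licenses/by/4.0/",
)
print(json.dumps(record, indent=2))
```

Embedding such a record in a dataset's landing page is what makes it surface in Google Dataset Search results.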

Citations: 0
Revisiting Blind Photography in the Context of Teachable Object Recognizers.
Pub Date : 2019-10-01 DOI: 10.1145/3308561.3353799
Kyungjun Lee, Jonggi Hong, Simone Pimento, Ebrima Jarjue, Hernisa Kacorri

For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on user's photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object center location based on its proximity to the user's hand. To evaluate our approach, we conducted a user study in the lab, where participants with visual impairments (N = 9) used our feedback to train and test their object recognizer in vanilla and cluttered environments. We found that very few photos did not include the object (2% in the vanilla and 8% in the cluttered) and the recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they know it can be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces.
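The feedback step described above can be sketched as a mapping from an estimated object center to a coarse directional cue. This is not the paper's deep-learning model, only an assumed post-processing stage; the function name and the `tol` dead-zone parameter are hypothetical:

```python
def direction_cue(cx, cy, width, height, tol=0.15):
    """Map an estimated object center (cx, cy) in pixel coordinates to
    a coarse audio/haptic cue. tol is the centered dead zone as a
    fraction of each frame dimension (assumed parameter)."""
    dx = cx / width - 0.5    # -0.5 .. 0.5; negative means left of center
    dy = cy / height - 0.5   # negative means above center
    horiz = "left" if dx < -tol else "right" if dx > tol else ""
    vert = "up" if dy < -tol else "down" if dy > tol else ""
    if not (horiz or vert):
        return "centered"
    return "move " + " ".join(w for w in (vert, horiz) if w)
```

For a 640x480 frame, an object centered at (320, 240) yields "centered", while one near the left edge yields "move left".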

Citations: 0
Using Modules to Teach Accessibility in a User-Centered Design Course.
Pub Date : 2019-10-01 DOI: 10.1145/3308561.3354632
Amanda Lazar, Jonathan Lazar, Alisha Pradhan

Courses in user-centered design, where students learn about centering design on the needs of individuals, are a natural point at which accessibility content can be injected into the curriculum. We describe the approach we have taken with sections in the undergraduate User-Centered Design Course at the University of Maryland, College Park. We initially introduced disability and accessibility in four modules: 1) websites and design portfolios, 2) introduction to understanding user needs, 3) prototyping, and 4) UX evaluation. We present a description of this content that was taught as an extended version in one Fall 2018 section and as an abbreviated version in all sections in Spring 2019. Survey results indicate that students' understanding of accessibility and assistive technology increased with the introduction of these modules.

Citations: 0
Understanding Mental Ill-health as Psychosocial Disability: Implications for Assistive Technology.
Pub Date : 2019-10-01 DOI: 10.1145/3308561.3353785
Kathryn E Ringland, Jennifer Nicholas, Rachel Kornfield, Emily G Lattie, David C Mohr, Madhu Reddy

Psychosocial disability involves actual or perceived impairment due to a diversity of mental, emotional, or cognitive experiences. While assistive technology for psychosocial disabilities has been understudied in communities such as ASSETS, advances in computing have opened up a number of new avenues for assisting those with psychosocial disabilities beyond the clinic. However, these tools continue to emerge primarily within the framework of "treatment," emphasizing resolution or improvement of mental health symptoms. This work considers what it means to adopt a social model lens from disability studies and incorporate the expertise of assistive technology researchers in relation to mental health. Our investigation draws on interviews conducted with 18 individuals who have complex health needs that include mental health symptoms. This work highlights the potential role for assistive technology in supporting psychosocial disability outside of a clinical or medical framework.

Citations: 0
Evaluating Author and User Experience for an Audio-Haptic System for Annotation of Physical Models.
Pub Date : 2017-10-01 DOI: 10.1145/3132525.3134811
James M Coughlan, Joshua Miele

We describe three usability studies involving a prototype system for creation and haptic exploration of labeled locations on 3D objects. The system uses a computer, webcam, and fiducial markers to associate a physical 3D object in the camera's view with a predefined digital map of labeled locations ("hotspots"), and to do real-time finger tracking, allowing a blind or visually impaired user to explore the object and hear individual labels spoken as each hotspot is touched. This paper describes: (a) a formative study with blind users exploring pre-annotated objects to assess system usability and accuracy; (b) a focus group of blind participants who used the system and, through structured and unstructured discussion, provided feedback on its practicality, possible applications, and real-world potential; and (c) a formative study in which a sighted adult used the system to add labels to on-screen images of objects, demonstrating the practicality of remote annotation of 3D models. These studies and related literature suggest potential for future iterations of the system to benefit blind and visually impaired users in educational, professional, and recreational contexts.
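The hotspot lookup at the heart of such a system reduces to testing the tracked fingertip position against labeled regions. A minimal sketch, assuming hotspots are stored as circles in the marker-defined 2D coordinate frame (the paper's actual representation is not specified; the map contents are hypothetical):

```python
import math

# Hypothetical hotspot map for one object: label -> (x, y, radius)
# in the fiducial-marker coordinate frame.
HOTSPOTS = {
    "handle": (10.0, 5.0, 1.5),
    "spout": (2.0, 8.0, 1.0),
}

def label_at(finger_xy, hotspots=HOTSPOTS):
    """Return the label of the hotspot containing the tracked fingertip,
    or None. The app would speak the label aloud when the fingertip
    enters a hotspot (a None-to-label transition)."""
    fx, fy = finger_xy
    for label, (hx, hy, r) in hotspots.items():
        if math.hypot(fx - hx, fy - hy) <= r:
            return label
    return None
```

Remote annotation, as in study (c), then amounts to a sighted author adding entries to this map by clicking on an on-screen image of the object.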

Citations: 0
JustPoint: Identifying Colors with a Natural User Interface.
Pub Date : 2017-10-01 DOI: 10.1145/3132525.3134802
Sergio Mascetti, Silvia D'Acquisto, Andrea Gerino, Mattia Ducci, Cristian Bernareggi, James M Coughlan

People with severe visual impairments usually have no way of identifying the colors of objects in their environment. While existing smartphone apps can recognize colors and speak them aloud, they require the user to center the object of interest in the camera's field of view, which is challenging for many users. We developed a smartphone app to address this problem that reads aloud the color of the object pointed to by the user's fingertip, without confusion from background colors. We evaluated the app with nine people who are blind, demonstrating the app's effectiveness and suggesting directions for improvements in the future.
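Once the fingertip is located, naming the color under it can be done by nearest-neighbor matching against a palette of reference colors. The published app's palette and distance metric are not specified; this sketch assumes a small RGB palette and Euclidean distance:

```python
# Hypothetical reference palette: name -> (R, G, B).
REFERENCE_COLORS = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (200, 30, 30),
    "green": (30, 160, 60),
    "blue": (40, 60, 200),
    "yellow": (230, 220, 50),
}

def name_color(rgb):
    """Return the palette name closest to rgb by squared Euclidean
    distance in RGB space."""
    return min(
        REFERENCE_COLORS,
        key=lambda n: sum((a - b) ** 2
                          for a, b in zip(rgb, REFERENCE_COLORS[n])),
    )
```

In practice, perceptually uniform spaces such as CIELAB give more intuitive matches than raw RGB, which is one direction an improved version could take.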

Citations: 0
Speed-Dial: A Surrogate Mouse for Non-Visual Web Browsing.
Pub Date : 2017-10-01 DOI: 10.1145/3132525.3132531
Syed Masum Billah, Vikas Ashok, Donald E Porter, I V Ramakrishnan

Sighted people can browse the Web almost exclusively using a mouse. This is because web browsing mostly entails pointing and clicking on some element in the web page, and these two operations can be done almost instantaneously with a computer mouse. Unfortunately, people with vision impairments cannot use a mouse as it only provides visual feedback through a cursor. Instead, they are forced to go through a slow and tedious process of building a mental map of the web page, relying primarily on a screen reader's keyboard shortcuts and its serial audio readout of the textual content of the page, including metadata. This can often cause content and cognitive overload. This paper describes our Speed-Dial system which uses an off-the-shelf physical Dial as a surrogate for the mouse for non-visual web browsing. Speed-Dial interfaces the physical Dial with the semantic model of a web page, and provides an intuitive and rapid access to the entities and their content in the model, thereby bringing blind people's browsing experience closer to how sighted people perceive and interact with the Web. A user study with blind participants suggests that with Speed-Dial they can quickly move around the web page to select content of interest, akin to pointing and clicking with a mouse.

Citations: 18
A Platform Agnostic Remote Desktop System for Screen Reading.
Pub Date : 2016-10-01 DOI: 10.1145/2982142.2982151
Syed Masum Billah, Vikas Ashok, Donald E Porter, I V Ramakrishnan

Remote desktop technology, the enabler of access to applications hosted on remote hosts, relies primarily on scraping the pixels on the remote screen and redrawing them as a simple bitmap on the client's local screen. Such a technology will simply not work with screen readers since the latter are innately tied to reading text. Since screen readers are locked-in to a specific OS platform, extant solutions that enable remote access with screen readers such as NVDARemote and JAWS Tandem require homogeneity of OS platforms at both the client and remote sites. This demo will present Sinter, a system that eliminates this requirement. With Sinter, a blind Mac user, for example, can now access a remote Windows application with VoiceOver, a scenario heretofore not possible.

Citations: 0
Towards a Sign-Based Indoor Navigation System for People with Visual Impairments.
Pub Date : 2016-10-01 DOI: 10.1145/2982142.2982202
Alejandro Rituerto, Giovanni Fusco, James M Coughlan

Navigation is a challenging task for many travelers with visual impairments. While a variety of GPS-enabled tools can provide wayfinding assistance in outdoor settings, GPS provides no useful localization information indoors. A variety of indoor navigation tools are being developed, but most of them require potentially costly physical infrastructure to be installed and maintained, or else the creation of detailed visual models of the environment. We report development of a new smartphone-based navigation aid, which combines inertial sensing, computer vision and floor plan information to estimate the user's location with no additional physical infrastructure and requiring only the locations of signs relative to the floor plan. A formative study was conducted with three blind volunteer participants demonstrating the feasibility of the approach and highlighting the areas needing improvement.

Citations: 0
Using Computer Vision to Access Appliance Displays.
Pub Date : 2014-01-01 DOI: 10.1145/2661334.2661404
Giovanni Fusco, Ender Tekin, Richard E Ladner, James M Coughlan

People who are blind or visually impaired face difficulties accessing a growing array of everyday appliances, needed to perform a variety of daily activities, because they are equipped with electronic displays. We are developing a "Display Reader" smartphone app, which uses computer vision to help a user acquire a usable image of a display, to address this problem. The current prototype analyzes video from the smartphone's camera, providing real-time feedback to guide the user until a satisfactory image is acquired, based on automatic estimates of image blur and glare. Formative studies were conducted with several blind and visually impaired participants, whose feedback is guiding the development of the user interface. The prototype software has been released as a Free and Open Source (FOSS) project.
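The abstract does not specify how blur is estimated; a common focus measure that could serve this role is the variance of a discrete Laplacian, sketched here as an assumption rather than the app's actual estimator:

```python
import numpy as np

def sharpness(gray):
    """Variance of a discrete Laplacian over a 2D grayscale array --
    a standard focus measure (higher means sharper). This is only an
    illustrative stand-in for the app's unspecified blur estimate."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

# A frame with strong edges scores higher than a featureless frame,
# which is the signal needed to tell the user to steady the camera.
edges = np.zeros((32, 32))
edges[:, 16:] = 1.0
flat = np.full((32, 32), 0.5)
```

Real-time guidance would then compare each frame's score against a threshold before accepting the image for OCR.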

{"title":"Using Computer Vision to Access Appliance Displays.","authors":"Giovanni Fusco,&nbsp;Ender Tekin,&nbsp;Richard E Ladner,&nbsp;James M Coughlan","doi":"10.1145/2661334.2661404","DOIUrl":"https://doi.org/10.1145/2661334.2661404","url":null,"abstract":"<p><p>People who are blind or visually impaired face difficulties accessing a growing array of everyday appliances, needed to perform a variety of daily activities, because they are equipped with electronic displays. We are developing a \"Display Reader\" smartphone app, which uses computer vision to help a user acquire a usable image of a display, to address this problem. The current prototype analyzes video from the smartphone's camera, providing real-time feedback to guide the user until a satisfactory image is acquired, based on automatic estimates of image blur and glare. Formative studies were conducted with several blind and visually impaired participants, whose feedback is guiding the development of the user interface. The prototype software has been released as a Free and Open Source (FOSS) project.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":"2014 ","pages":"281-282"},"PeriodicalIF":0.0,"publicationDate":"2014-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/2661334.2661404","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32925817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
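The abstract above mentions real-time capture feedback driven by automatic estimates of image blur. A common blur measure, shown here as an illustrative stand-in (the paper's actual estimator may differ), is the variance of the image's Laplacian response, which is low for defocused or motion-blurred frames and high for frames with sharp edges such as legible display text.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of the 4-neighbour discrete Laplacian.

    A low score suggests a blurry frame (prompt the user to hold
    steady); a high score suggests the display is resolvable. Any
    threshold between the two is an application-specific assumption.
    """
    g = gray.astype(np.float64)
    # Laplacian on interior pixels only, via array slicing.
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()

# A flat patch (no detail, blur-like) scores zero; a sharp
# checkerboard pattern scores very high.
flat = np.full((8, 8), 128.0)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 255.0
print(laplacian_variance(flat), laplacian_variance(checker))
```

In a live capture loop, this score would be computed per frame on a grayscale preview image, with feedback issued whenever it falls below the chosen threshold.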
Journal: ASSETS. Annual ACM Conference on Assistive Technologies