
Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference (latest publications)

Systematically Evaluating Digital Map Tools Based on the WCAG.
Brandon Biggs, James M Coughlan, Bruce N Walker

This study examines the accessibility of digital map tools in relation to the Web Content Accessibility Guidelines (WCAG) 2.1, highlighting critical issues for disabled users. Despite the widespread use of digital maps across various professions and daily activities, their accessibility remains insufficient. The research involved a partial Accessibility Conformance Report (ACR) comparison of the top 14 digital map tools, focusing on 15 WCAG criteria particularly relevant to geographic maps. The study expanded the definitions of three criteria - 1.1.1 Non-Text Content, 1.4.11 Non-text Contrast, and 2.1.1 Keyboard Accessibility - to better apply them to map contexts. Findings revealed significant accessibility shortcomings: only one tool (Audiom) achieved full compliance, and the others lacked adequate text alternatives, proper contrast, and keyboard operability. The discussion emphasizes the urgency for map developers to enhance accessibility, especially in light of upcoming legal requirements such as the ADA Title II regulations. Making maps accessible not only aids users with disabilities but also offers business benefits by expanding the user base and fostering innovation. The study provides a systematic evaluation framework and clear guidelines to encourage greater digital map accessibility within an academic context.
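The ACR-style comparison described above can be sketched as a simple scoring table: each tool is rated per success criterion and counts as fully conformant only if every evaluated criterion is supported. The tool names, criterion subset, and ratings below are illustrative placeholders, not the study's actual data.

```python
# Hypothetical sketch of an ACR-style conformance comparison.
# Ratings follow the usual ACR vocabulary: "supports",
# "partially supports", "does not support".

CRITERIA = ["1.1.1 Non-text Content", "1.4.11 Non-text Contrast", "2.1.1 Keyboard"]

scores = {
    "ToolA": {"1.1.1 Non-text Content": "supports",
              "1.4.11 Non-text Contrast": "supports",
              "2.1.1 Keyboard": "supports"},
    "ToolB": {"1.1.1 Non-text Content": "does not support",
              "1.4.11 Non-text Contrast": "partially supports",
              "2.1.1 Keyboard": "supports"},
}

def fully_conformant(tool_scores, criteria=CRITERIA):
    """A tool conforms only if it supports every evaluated criterion."""
    return all(tool_scores.get(c) == "supports" for c in criteria)

conformant = [tool for tool, s in scores.items() if fully_conformant(s)]
print(conformant)  # ['ToolA']
```

In the study's actual evaluation this table would have 14 tools and 15 criteria; the aggregation logic stays the same.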

Journal on technology and persons with disabilities, vol. 13, pp. 145-168. Published 2025-05-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12094671/pdf/. Citations: 0
Novel Stimuli to Benchmark and Train Echolocation Skills.
Haydée G García-Lázaro, Santani Teng

Echolocation is a remarkable skill used by some blind people to navigate their surroundings by interpreting echoes from self-made sounds such as mouth clicks. Despite its potential to significantly improve blind travelers' navigational independence and quality of life (Thaler; Norman, Dodsworth, et al.), echolocation remains largely underutilized. This is partly due to limited understanding of its benefits and mechanisms, as well as its steep learning curve and the lack of optimal sensory cues for training. This study describes a carefully designed set of sounds that manipulate specific temporal cues for improved spatial perception, making echolocation more accessible to beginners and potentially speeding up the learning process. These stimuli and findings could be used to develop targeted training programs to accelerate beginners' learning, raise awareness, and promote the teaching of echolocation more broadly. Furthermore, incorporating these stimuli into echolocation-based assistive devices, virtual platforms, and environments could broaden the reach and impact of echolocation on the lives of blind and visually impaired people.
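One way to see what "manipulating a temporal cue" means is to synthesize a click followed by a delayed, attenuated copy, where the echo delay encodes the round-trip time to a simulated reflecting surface. This is a minimal sketch of that idea, not the study's actual stimulus design; the sample rate, click length, and attenuation are illustrative.

```python
# Minimal echo-stimulus sketch: direct click + one delayed, attenuated echo.
import numpy as np

SR = 44100            # sample rate (Hz)
SPEED_OF_SOUND = 343.0  # m/s, approximate at room temperature

def click_with_echo(distance_m, click_ms=5, attenuation=0.5, sr=SR):
    """Return a mono signal: a noise click plus an echo delayed by the
    round-trip travel time to a surface at distance_m."""
    delay_s = 2 * distance_m / SPEED_OF_SOUND        # out and back
    delay_samples = int(round(delay_s * sr))
    click = np.random.default_rng(0).uniform(-1, 1, int(sr * click_ms / 1000))
    out = np.zeros(delay_samples + len(click))
    out[:len(click)] += click                        # direct sound
    out[delay_samples:] += attenuation * click       # echo
    return out

sig = click_with_echo(2.0)  # a 2 m target yields an ~11.7 ms echo delay
print(len(sig))
```

Varying `distance_m` (and hence the delay) is exactly the kind of graded temporal cue a training stimulus set can sweep through.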

Journal on technology and persons with disabilities, vol. 13, pp. 367-384. Published 2025-01-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12188991/pdf/. Citations: 0
Use of Braille in the Workplace by People Who Are Blind.
Michele C McDonnall, Rachael Sessler-Trinkowsky, Anne Steverson

Interest in the benefits of braille for people who are blind is high among professionals in the blindness field, but we know little about how braille is used in the workplace. The broad purpose of this study was to learn how employed people who are blind use braille on the job. Specific topics investigated included: the work tasks for which refreshable braille technology (RBT) is used, the personal and job characteristics of RBT users compared to non-users, and factors associated with RBT use among workers with at least moderate braille skills. This study utilized data from 304 participants in a longitudinal research project investigating assistive technology use in the workplace by people who are blind. Two-thirds of our participants used braille on the job, and more than half utilized RBT. Workers who used RBT did not necessarily use it for all computer-related tasks they performed. RBT use was generally not significantly related to job characteristics, except for working for a blindness organization. RBT use was not significantly related to general personal characteristics, but it differed significantly based on disability-related characteristics. Only older age and higher braille skills were significantly associated with RBT use on the job in a multivariate logistic regression model.
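The final sentence refers to a multivariate logistic regression with a binary outcome (RBT use). A toy sketch of that model shape, fit by plain gradient descent on made-up data, is below; the data and resulting coefficients say nothing about the study's actual effect sizes.

```python
# Illustrative logistic regression: predict RBT use (0/1) from two
# predictors (scaled age, braille skill). Data are invented for the sketch.
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic gradient descent; returns [intercept, w1, w2, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))   # predicted probability
            err = p - yi                 # gradient of log-loss wrt z
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# toy rows: [age (scaled 0-1), braille skill (scaled 0-1)]
X = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.7, 0.8], [0.9, 0.7], [0.1, 0.3]]
y = [0, 0, 1, 1, 1, 0]
w = fit_logistic(X, y)

def predict(xi):
    z = w[0] + sum(a * b for a, b in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

print(predict([0.85, 0.9]))  # older worker with high braille skill
```

In practice a study like this would use a statistics package and report odds ratios with confidence intervals; the sketch only shows the model form.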

Journal on technology and persons with disabilities, vol. 12, pp. 58-75. Published 2024-01-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11404553/pdf/. Citations: 0
You Described, We Archived: A Rich Audio Description Dataset.
Charity Pitcher-Cooper, Manali Seth, Benjamin Kao, James M Coughlan, Ilmi Yoon

The You Described, We Archived dataset (YuWA) is a collaboration between San Francisco State University and The Smith-Kettlewell Eye Research Institute. It includes audio description (AD) data collected worldwide from 2013 to 2022 through YouDescribe, an accessibility tool for adding audio descriptions to YouTube videos. YouDescribe, a web-based audio description tool with a companion iOS viewing app, has a community of 12,000+ average annual visitors and approximately 3,000 volunteer describers, and has created over 5,500 audio-described YouTube videos. Blind and visually impaired (BVI) viewers request videos, which are saved to a wish list; volunteer audio describers then select a video, write a script, record audio clips, and edit clip placement to create an audio description. The AD tracks are stored separately, posted for public view at https://youdescribe.org/, and played together with the YouTube video. The YuWA audio description data, paired with describer and viewer metadata and the collection timeline, has a large number of research applications, including artificial intelligence, machine learning, sociolinguistics, audio description, video understanding, video retrieval, and video-language grounding tasks.
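Since the AD tracks are stored separately from the video, each clip needs at minimum a timestamp anchoring it to the video timeline. A hypothetical sketch of such a record is below; the field names are illustrative, not YouDescribe's actual schema. The `extended` flag models descriptions that pause the video while they play, which lengthens total playback time.

```python
# Hypothetical AD-clip record and a helper computing how much extra
# playback time extended (video-pausing) descriptions add.
from dataclasses import dataclass

@dataclass
class ADClip:
    start: float      # seconds into the video where the clip is anchored
    duration: float   # seconds of recorded description audio
    extended: bool    # pause the video while this clip plays?

def total_pause_time(clips):
    """Extended clips add their full duration to overall playback time."""
    return sum(c.duration for c in clips if c.extended)

clips = [
    ADClip(start=3.0, duration=2.5, extended=False),
    ADClip(start=10.0, duration=8.0, extended=True),
    ADClip(start=30.0, duration=4.0, extended=True),
]
print(total_pause_time(clips))  # 12.0
```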

Journal on technology and persons with disabilities, vol. 11, pp. 192-208. Published 2023-05-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10956524/pdf/. Citations: 0
ASL Consent in the Digital Informed Consent Process.
Ben S Kosa, Ai Minakawa, Patrick Boudreault, Christian Vogler, Poorna Kushalnagar, Raja Kushalnagar

An estimated 500,000 people in the U.S. are deaf and use ASL. Compared to the general population, deaf people are at greater risk of having chronic health problems and experience significant health disparities and inequities (Sanfacon, Leffers, Miller, Stabbe, DeWindt, Wagner, & Kushalnagar, 2020; Kushalnagar, Reesman, Holcomb, & Ryan, 2019; Kushalnagar & Miller, 2019). Many of these disparities are explained by barriers in the environment, such as the unavailability of materials in ASL and a lack of healthcare professionals who know how to provide deaf patient-centered care. Intersecting social determinants of health (e.g., intrinsic - low education; and extrinsic - barriers to healthcare services) create a mutually constituted vulnerability to health disparities when a person is deaf (Kushalnagar & Miller, 2019; Lesch, Brucher, Chapple, R., & Chapple, K., 2019; Smith & Chin, 2012). Moreover, the longstanding history of inequitable access to language and education, and a lack of printed information and materials, leave people who are deaf and use ASL unaware of opportunities to participate in cutting-edge research and clinical trials. An unintended consequence is that principal investigators (PIs) neglect to include people who are deaf and use ASL in their subject sample pools, and this marginalized population continues to face disparities in health outcomes as well as in clinical research participation. One barrier is the unavailability of informed consent materials that are accessible in ASL. The current research study, conducted by our team at the Center for Deaf Health Equity at Gallaudet University, attempts to address the language barrier in the consent process through a careful reconsideration of its traditional English format and the development of an American Sign Language (ASL) informed consent app.
The team successfully leveraged existing machine learning methods to develop a way to navigate and sign an informed consent process using ASL. We call this new method of navigation and signature "ASL consent." We found that deaf people, most of whom were college educated, were more likely to agree that the process for obtaining ASL consent through an accessible app is comparable to traditional English consent.

Journal on technology and persons with disabilities, vol. 11, pp. 288-306. Published 2023-05-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747571/pdf/. Citations: 0
VR Training to Facilitate Blind Photography for Navigation.
Jonggi Hong, James M Coughlan

Smartphone-based navigation apps allow blind and visually impaired (BVI) people to take images or videos to complete various tasks such as determining the user's location, recognizing objects, and detecting obstacles. The quality of the images and videos significantly affects the performance of these systems, but manipulating a camera to capture clear images with proper framing is a challenging task for BVI users. This research explores the interactions between a camera and BVI users in assistive navigation systems through interviews with BVI participants. We identified the form factors, applications, and challenges in using camera-based navigation systems and designed an interactive training app to improve BVI users' skills in using a camera for navigation. In this paper, we describe a novel virtual environment for the training app and report the preliminary results of a user study with BVI participants.
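A camera-aiming trainer of this kind ultimately has to turn a detected target's position in the frame into corrective feedback. The sketch below is a minimal, hypothetical version of that logic (it is not the app's actual algorithm): check whether the target's bounding box fills enough of the frame and is roughly centered, and if not, say which way to move.

```python
# Hypothetical framing-feedback rule for a camera-aiming trainer.
# box = (x, y, w, h) in pixels, image origin at top-left.

def framing_feedback(box, frame_w, frame_h, min_fill=0.1, center_tol=0.2):
    """Return a short spoken-style hint for aiming the camera."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    if w * h / (frame_w * frame_h) < min_fill:
        return "move closer"                     # target too small in frame
    if abs(cx - frame_w / 2) > center_tol * frame_w:
        # target sits left/right of center: pan the camera toward it
        return "pan right" if cx > frame_w / 2 else "pan left"
    if abs(cy - frame_h / 2) > center_tol * frame_h:
        # image y grows downward: a low target means tilt the camera down
        return "tilt down" if cy > frame_h / 2 else "tilt up"
    return "good framing"

print(framing_feedback((240, 160, 200, 200), 640, 480))  # good framing
```

In a real app these hints would be spoken or rendered as non-visual cues, and thresholds would be tuned per task.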

Journal on technology and persons with disabilities, vol. 11, pp. 245-259. Published 2023-01-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10962001/pdf/. Citations: 0
Getting in Touch With Tactile Map Automated Production: Evaluating impact and areas for improvement.
Brandon Biggs, Charity Pitcher-Cooper, James M Coughlan

This study evaluated the impact the Tactile Maps Automated Production (TMAP) system has had on its blind and visually impaired (BVI) and Orientation and Mobility (O&M) users and obtained suggestions for improvement. A semi-structured interview was performed with six BVI and seven O&M TMAP users who had printed or ordered two or more TMAPs in the last year. The number of maps downloaded from the online TMAP generation platform was also reviewed for each participant. The most significant finding is that access to TMAPs increased BVI participants' map usage from fewer than one map a year to at least two maps from the order system; those with easy access to an embosser generated on average 18.33 TMAPs from the online system and reported embossing 42 maps on average at home or work. O&M specialists appreciated the quick, high-quality, scaled maps they could create and send home with their students, and they frequently used TMAPs with their braille-reading students. To improve TMAPs, users requested the following features: interactivity, greater customizability of TMAPs, viewing of transit stops, lower cost of ordered TMAPs, and nonvisual viewing of the digital TMAP on the online platform.

Journal on technology and persons with disabilities, vol. 10, pp. 135-153. Published 2022-03-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10065749/pdf/nihms-1835895.pdf. Citations: 0
Real-Time Sign Detection for Accessible Indoor Navigation.
Seyed Ali Cheraghi, Giovanni Fusco, James M Coughlan

Indoor navigation is a major challenge for people with visual impairments, who often lack access to visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. We describe a new approach to recognizing and analyzing informational signs, such as Exit and restroom signs, in a building. This approach will be incorporated in iNavigate, a smartphone app we are developing, that provides accessible indoor navigation assistance. The app combines a digital map of the environment with computer vision and inertial sensing to estimate the user's location on the map in real time. Our new approach can recognize and analyze any sign from a small number of training images, and multiple types of signs can be processed simultaneously in each video frame. Moreover, in addition to estimating the distance to each detected sign, we can also estimate the approximate sign orientation (indicating if the sign is viewed head-on or obliquely), which improves the localization performance in challenging conditions. We evaluate the performance of our approach on four sign types distributed among multiple floors of an office building.
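The distance and orientation estimates mentioned above follow from basic projective geometry, not from any calibration reported here. As a back-of-envelope sketch: under a pinhole camera model a sign of known physical width appears with a pixel width inversely proportional to distance, and an obliquely viewed sign is foreshortened, so its apparent aspect ratio hints at the viewing angle. The focal length and sign dimensions below are illustrative values.

```python
# Pinhole-geometry sketch for sign distance and approximate orientation.
import math

def distance_to_sign(real_width_m, pixel_width, focal_px):
    """distance = f * W / w for a fronto-parallel sign of width W."""
    return focal_px * real_width_m / pixel_width

def oblique_angle(expected_aspect, observed_aspect):
    """Foreshortening shrinks apparent width, so roughly
    cos(theta) ~ observed_aspect / expected_aspect."""
    ratio = min(observed_aspect / expected_aspect, 1.0)
    return math.degrees(math.acos(ratio))

# A 30 cm wide sign spanning 60 px with a 1000 px focal length:
print(distance_to_sign(0.30, 60, 1000))   # 5.0 (metres)
# A sign whose width/height ratio should be 1.5 but appears as 0.75:
print(round(oblique_angle(1.5, 0.75)))    # 60 (degrees off head-on)
```

Real systems refine these estimates with the full homography of the detected sign, but the proportionality above is the core relationship.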

Seyed Ali Cheraghi, Giovanni Fusco, James M Coughlan. "Real-Time Sign Detection for Accessible Indoor Navigation." Journal on Technology and Persons with Disabilities: Annual International Technology and Persons with Disabilities Conference, vol. 9, 2021, pp. 125-139. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8331194/pdf/nihms-1725000.pdf
Cited by: 0
Towards Accessible Audio Labeling of 3D Objects.
James M Coughlan, Huiying Shen, Brandon Biggs

We describe a new approach to audio labeling of 3D objects such as appliances, 3D models and maps that enables a visually impaired person to label objects with audio. Our approach to audio labeling is called CamIO, a smartphone app that issues audio labels when the user points to a hotspot (a location of interest on an object) with a handheld stylus viewed by the smartphone camera. The CamIO app allows a user to create a new hotspot location by pointing at the location with a second stylus and recording a personalized audio label for the hotspot. In contrast with other audio labeling approaches that require the object of interest to be constructed of special materials, 3D printed, or equipped with special sensors, CamIO works with virtually any rigid object and requires only a smartphone, a paper barcode pattern mounted on the object of interest, and two inexpensive styluses. Moreover, our approach allows a visually impaired user to create audio labels independently. We describe a co-design performed with six blind participants exploring how they label objects in their daily lives, and a study with the participants demonstrating the feasibility of CamIO for providing accessible audio labeling.
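Once the stylus tip's position has been estimated in the object's coordinate frame, the remaining step is to match it against stored hotspots and play the corresponding label. The following is a minimal sketch of that lookup step only, with hypothetical names, coordinates, and tolerance (the abstract does not specify CamIO's internals):

```python
import math

def nearest_hotspot(tip_xyz, hotspots, tolerance_m=0.02):
    """Return the label of the stored hotspot closest to the estimated
    stylus-tip position, or None if no hotspot lies within the
    tolerance radius (2 cm here, an illustrative value)."""
    best, best_d = None, tolerance_m
    for label, pos in hotspots.items():
        d = math.dist(tip_xyz, pos)
        if d <= best_d:
            best, best_d = label, d
    return best

# Hypothetical hotspots recorded on an appliance, in meters.
hotspots = {
    "power button": (0.10, 0.05, 0.00),
    "start cycle":  (0.14, 0.05, 0.00),
}
```

Pointing near a stored location (e.g. `nearest_hotspot((0.101, 0.051, 0.0), hotspots)`) returns its label, which the app would then speak aloud; pointing elsewhere returns `None`, so creating a new hotspot could be offered instead.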

James M Coughlan, Huiying Shen, Brandon Biggs. "Towards Accessible Audio Labeling of 3D Objects." Journal on Technology and Persons with Disabilities: Annual International Technology and Persons with Disabilities Conference, vol. 8, 2020, pp. 210-222. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7425180/pdf/nihms-1611173.pdf
Cited by: 0
S-K Smartphone Barcode Reader for the Blind.
Ender Tekin, David Vásquez, James M Coughlan

We describe a new smartphone app called BLaDE (Barcode Localization and Decoding Engine), designed to enable a blind or visually impaired user to find and read product barcodes. Developed at The Smith-Kettlewell Eye Research Institute, the BLaDE Android app has been released as open source software, which can be used for free or modified for commercial or non-commercial use. Unlike popular commercial smartphone apps, BLaDE provides real-time audio feedback to help visually impaired users locate a barcode, which is a prerequisite to being able to read it. We describe experiments performed with five blind/visually impaired volunteer participants demonstrating that BLaDE is usable and that the audio feedback is key to its usability.
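The key idea in the abstract is that locating the barcode must precede decoding it, so the app guides the user with feedback tied to where the code appears in the camera frame. As a sketch of that guidance loop only (not BLaDE's actual feedback design; names and thresholds are hypothetical), one could map the detected barcode's frame position to a spoken-style cue:

```python
def guidance_cue(center_x, center_y, frame_w, frame_h, margin=0.15):
    """Map a detected barcode's position in the camera frame to a
    coarse cue, reporting where the code sits relative to center
    until it is roughly centered and can be decoded."""
    dx = center_x / frame_w - 0.5   # negative = left of center
    dy = center_y / frame_h - 0.5   # negative = above center (image y grows downward)
    if abs(dx) <= margin and abs(dy) <= margin:
        return "hold steady"
    if abs(dx) >= abs(dy):
        return "code is to the left" if dx < 0 else "code is to the right"
    return "code is above" if dy < 0 else "code is below"
```

A real implementation would issue this feedback continuously per video frame, typically as non-speech audio (e.g. pitch or stereo panning) rather than words, since speech is too slow for a closed-loop aiming task.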

Ender Tekin, David Vásquez, James M Coughlan. "S-K Smartphone Barcode Reader for the Blind." Journal on Technology and Persons with Disabilities: Annual International Technology and Persons with Disabilities Conference, vol. 28, 2013, pp. 230-239. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4288446/pdf/nihms626930.pdf
Cited by: 0