Latest publications from the Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference

Use of Braille in the Workplace by People Who Are Blind.
Michele C McDonnall, Rachael Sessler-Trinkowsky, Anne Steverson

Interest in the benefits of braille for people who are blind is high among professionals in the blindness field, but we know little about how braille is used in the workplace. The broad purpose of this study was to learn how employed people who are blind use braille on the job. Specific topics investigated included: the work tasks for which refreshable braille technology (RBT) is used, the personal and job characteristics of RBT users compared to non-users, and factors associated with RBT use among workers with at least moderate braille skills. This study utilized data from 304 participants in a longitudinal research project investigating assistive technology use in the workplace by people who are blind. Two-thirds of our participants used braille on the job, and more than half utilized RBT. Workers who used RBT did not necessarily use it for all computer-related tasks they performed. RBT use was generally not significantly related to job characteristics, except for working for a blindness organization. RBT use was not significantly related to general personal characteristics, but it did differ significantly by disability-related characteristics. In a multivariate logistic regression model, only older age and higher braille skills were significantly associated with RBT use on the job.
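
The final finding rests on a multivariate logistic regression. As a hedged illustration of that analysis step, the sketch below fits such a model with statsmodels; the file name and columns (age, braille_skill, uses_rbt) are hypothetical stand-ins, not the study's actual variables.

```python
# Minimal sketch of a multivariate logistic regression of the kind the
# abstract describes. Data file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rbt_survey.csv")                 # hypothetical data file

X = sm.add_constant(df[["age", "braille_skill"]])  # predictors + intercept
y = df["uses_rbt"]                                 # 1 = uses RBT on the job

model = sm.Logit(y, X).fit()
print(model.summary())       # coefficients and p-values per predictor
print(np.exp(model.params))  # odds ratios, e.g. per year of age
```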

{"title":"Use of Braille in the Workplace by People Who Are Blind.","authors":"Michele C McDonnall, Rachael Sessler-Trinkowsky, Anne Steverson","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Interest in the benefits of braille for people who are blind is high among professionals in the blindness field, but we know little about how braille is used in the workplace. The broad purpose of this study was to learn how employed people who are blind use braille on the job. Specific topics investigated included: work tasks refreshable braille technology (RBT) is used for, personal and job characteristics of RBT users compared to non-users, and factors associated with RBT use among workers with at least moderate braille skills. This study utilized data from 304 participants in a longitudinal research project investigating assistive technology use in the workplace by people who are blind. Two-thirds of our participants used braille on the job, and more than half utilized RBT. Workers who used RBT did not necessarily use it for all computer-related tasks they performed. RBT use was generally not significantly related to job characteristics, except for working for a blindness organization. RBT use was not significantly related to general personal characteristics but it was significantly different based on disability-related characteristics. Only older age and higher braille skills were significantly associated with RBT use on the job in a multivariate logistic regression model.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11404553/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
You Described, We Archived: A Rich Audio Description Dataset.
Charity Pitcher-Cooper, Manali Seth, Benjamin Kao, James M Coughlan, Ilmi Yoon

The You Described, We Archived dataset (YuWA) is a collaboration between San Francisco State University and The Smith-Kettlewell Eye Research Institute. It includes audio description (AD) data collected worldwide from 2013 to 2022 through YouDescribe, an accessibility tool for adding audio descriptions to YouTube videos. YouDescribe, a web-based audio description tool with a companion iOS viewing app, has a community of 12,000+ average annual visitors, including approximately 3,000 volunteer describers, and has created over 5,500 audio-described YouTube videos. Blind and visually impaired (BVI) viewers request videos, which are saved to a wish list; volunteer audio describers then select a video, write a script, record audio clips, and edit clip placement to create an audio description. The AD tracks are stored separately, posted for public view at https://youdescribe.org/, and played together with the YouTube video. The YuWA audio description data, paired with describer and viewer metadata and the collection timeline, supports a large number of research applications, including artificial intelligence, machine learning, sociolinguistics, audio description, video understanding, video retrieval, and video-language grounding tasks.
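
The abstract does not publish YuWA's schema, but the workflow it describes (timestamped clips layered over a YouTube video) suggests a simple record structure. The sketch below is a hypothetical illustration of that idea; the field names, the inline/extended distinction, and the example IDs are assumptions, not the dataset's actual format.

```python
# Hypothetical sketch of one audio description track: a YouTube video ID
# plus timestamped clips. Not the actual YuWA schema.
from dataclasses import dataclass, field

@dataclass
class AudioClip:
    start_time: float          # seconds into the video where the clip plays
    audio_url: str             # recorded description clip
    is_extended: bool = False  # assumed: extended clips pause the video

@dataclass
class DescriptionTrack:
    youtube_id: str
    describer_id: str
    clips: list[AudioClip] = field(default_factory=list)

    def clips_between(self, t0: float, t1: float) -> list[AudioClip]:
        """Clips scheduled in [t0, t1), e.g. for playback synchronization."""
        return [c for c in self.clips if t0 <= c.start_time < t1]

track = DescriptionTrack("dQw4w9WgXcQ", "volunteer-42")
track.clips.append(AudioClip(12.5, "https://example.org/clip1.mp3"))
print(track.clips_between(10, 20))
```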

{"title":"You Described, We Archived: A Rich Audio Description Dataset.","authors":"Charity Pitcher-Cooper, Manali Seth, Benjamin Kao, James M Coughlan, Ilmi Yoon","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The You Described, We Archived dataset (YuWA) is a collaboration between San Francisco State University and The Smith-Kettlewell Eye Research Institute. It includes audio description (AD) data collected worldwide 2013-2022 through YouDescribe, an accessibility tool for adding audio descriptions to YouTube videos. YouDescribe, a web-based audio description tool along with an iOS viewing app, has a community of 12,000+ average annual visitors, with approximately 3,000 volunteer describers, and has created over 5,500 audio described YouTube videos. Blind and visually impaired (BVI) viewers request videos, which then are saved to a wish list and volunteer audio describers select a video, write a script, record audio clips, and edit clip placement to create an audio description. The AD tracks are stored separately, posted for public view at https://youdescribe.org/ and played together with the YouTube video. The YuWA audio description data paired with the describer and viewer metadata, and collection timeline has a large number of research applications including artificial intelligence, machine learning, sociolinguistics, audio description, video understanding, video retrieval and video-language grounding tasks.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10956524/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140186480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VR Training to Facilitate Blind Photography for Navigation.
Jonggi Hong, James M Coughlan

Smartphone-based navigation apps allow blind and visually impaired (BVI) people to take images or videos to complete various tasks such as determining a user's location, recognizing objects, and detecting obstacles. The quality of the images and videos significantly affects the performance of these systems, but manipulating a camera to capture clear, properly framed images is a challenging task for BVI users. This research explores the interactions between a camera and BVI users in assistive navigation systems through interviews with BVI participants. We identified the form factors, applications, and challenges in using camera-based navigation systems and designed an interactive training app to improve BVI users' skills in using a camera for navigation. In this paper, we describe the training app's novel virtual environment and report the preliminary results of a user study with BVI participants.
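
The abstract does not detail the feedback the training app gives, but the core skill it targets, aiming and framing a camera, can be illustrated with a simple heuristic. The sketch below is a hedged example of that kind of guidance; the thresholds and hint wording are invented for illustration, not taken from the paper.

```python
# Hedged sketch of spoken framing feedback: compare a detected target's
# bounding box to the frame center and suggest a camera movement.
def framing_feedback(box, frame_w, frame_h, tol=0.1):
    """box = (x, y, w, h) of the target in pixels; returns a spoken hint."""
    x, y, w, h = box
    dx = (x + w / 2 - frame_w / 2) / frame_w  # horizontal offset, -0.5..0.5
    dy = (y + h / 2 - frame_h / 2) / frame_h  # vertical offset
    hints = []
    if dx > tol:
        hints.append("pan right")   # target sits right of center
    elif dx < -tol:
        hints.append("pan left")
    if dy > tol:
        hints.append("tilt down")   # target sits low in the frame
    elif dy < -tol:
        hints.append("tilt up")
    if w * h < 0.05 * frame_w * frame_h:  # target too small in frame
        hints.append("move closer")
    return ", ".join(hints) or "target centered, hold steady"

print(framing_feedback((800, 100, 200, 150), 1280, 720))
```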

{"title":"VR Training to Facilitate Blind Photography for Navigation.","authors":"Jonggi Hong, James M Coughlan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Smartphone-based navigation apps allow blind and visually impaired (BVI) people to take images or videos to complete various tasks such as determining a user 's location, recognizing objects, and detecting obstacles. The quality of the images and videos significantly affects the performance of these systems, but manipulating a camera to capture clear images with proper framing is a challenging task for BVI users. This research explores the interactions between a camera and BVI users in assistive navigation systems through interviews with BVI participants. We identified the form factors, applications, and challenges in using camera-based navigation systems and designed an interactive training app to improve BVI users' skills in using a camera for navigation. In this paper, we describe a novel virtual environment of the training app and report the preliminary results of a user study with BVI participants.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10962001/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Getting in Touch With Tactile Map Automated Production: Evaluating impact and areas for improvement.
Brandon Biggs, Charity Pitcher-Cooper, James M Coughlan

This study evaluated the impact the Tactile Maps Automated Production (TMAP) system has had on its blind and visually impaired (BVI) and Orientation and Mobility (O&M) users and gathered suggestions for improvement. A semi-structured interview was performed with six BVI and seven O&M TMAP users who had printed or ordered two or more TMAPs in the last year. The number of maps downloaded from the online TMAP generation platform was also reviewed for each participant. The most significant finding is that access to TMAPs increased map usage for BVIs from fewer than one map per year to at least two maps from the order system; those with easy access to an embosser generated 18.33 TMAPs on average from the online system and reported embossing 42 maps on average at home or work. O&Ms appreciated the quick, high-quality, scaled maps they could create and send home with their students, and they frequently used TMAPs with their braille-reading students. To improve TMAPs, users requested the following features: interactivity, greater customizability, viewing of transit stops, lower cost for ordered TMAPs, and nonvisual viewing of the digital TMAP on the online platform.

{"title":"Getting in Touch With Tactile Map Automated Production: Evaluating impact and areas for improvement.","authors":"Brandon Biggs,&nbsp;Charity Pitcher-Cooper,&nbsp;James M Coughlan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This study evaluated the impact the Tactile Maps Automated Production (TMAP) system has had on its blind and visually impaired (BVI) and Orientation and Mobility (O&M) users and obtained suggestions for improvement. A semi-structured interview was performed with six BVI and seven O&M TMAP users who had printed or ordered two or more TMAPs in the last year. The number of maps downloaded from the online TMAP generation platform was also reviewed for each participant. The most significant finding is that having access to TMAPs increased map usage for BVIs from less than 1 map a year to getting at least two maps from the order system, with those who had easy access to an embosser generating on average 18.33 TMAPs from the online system and saying they embossed 42 maps on average at home or work. O&Ms appreciated the quick, high-quality, and scaled map they could create and send home with their students, and they frequently used TMAPs with their braille reading students. To improve TMAPs, users requested that the following features be added: interactivity, greater customizability of TMAPs, viewing of transit stops, lower cost of the ordered TMAP, and nonvisual viewing of the digital TMAP on the online platform.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10065749/pdf/nihms-1835895.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9636841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-Time Sign Detection for Accessible Indoor Navigation.
Seyed Ali Cheraghi, Giovanni Fusco, James M Coughlan

Indoor navigation is a major challenge for people with visual impairments, who often lack access to the visual cues, such as informational signs, landmarks, and structural features, that people with normal vision rely on for wayfinding. We describe a new approach to recognizing and analyzing informational signs, such as Exit and restroom signs, in a building. This approach will be incorporated into iNavigate, a smartphone app we are developing that provides accessible indoor navigation assistance. The app combines a digital map of the environment with computer vision and inertial sensing to estimate the user's location on the map in real time. Our new approach can recognize and analyze any sign from a small number of training images, and multiple types of signs can be processed simultaneously in each video frame. Moreover, in addition to estimating the distance to each detected sign, we can also estimate the approximate sign orientation (indicating whether the sign is viewed head-on or obliquely), which improves the localization performance in challenging conditions. We evaluate the performance of our approach on four sign types distributed among multiple floors of an office building.
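
The paper's own detection pipeline is not reproduced here, but the geometric step it mentions, estimating a detected sign's distance and approximate orientation, can be sketched with a standard perspective-n-point solve. The example below uses OpenCV's solvePnP under assumed inputs (the sign size, detected corner pixels, and camera intrinsics are all made up for illustration); it shows the general technique, not the authors' implementation.

```python
# Hedged sketch: recover a sign's distance and viewing obliqueness from
# its four detected corners with cv2.solvePnP. All numbers are assumed.
import cv2
import numpy as np

SIGN_W, SIGN_H = 0.30, 0.15  # assumed physical sign size in meters

# 3D corners of the sign in its own frame (the sign lies in the z=0 plane)
object_pts = np.array([
    [0, 0, 0], [SIGN_W, 0, 0], [SIGN_W, SIGN_H, 0], [0, SIGN_H, 0],
], dtype=np.float32)

# hypothetical detected corners (pixels) and pinhole camera intrinsics
image_pts = np.array([[410, 220], [520, 225], [518, 280], [408, 275]],
                     dtype=np.float32)
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    distance = float(np.linalg.norm(tvec))  # meters from camera to sign
    R, _ = cv2.Rodrigues(rvec)
    # angle between the sign's normal (+z in its own frame) and the
    # viewing ray: near 0 degrees means the sign is seen nearly head-on.
    normal = R @ np.array([0.0, 0.0, 1.0])
    view_dir = tvec.ravel() / np.linalg.norm(tvec)
    obliqueness = np.degrees(np.arccos(abs(normal @ view_dir)))
    print(f"distance ~{distance:.2f} m, viewed ~{obliqueness:.0f} deg off-axis")
```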

{"title":"Real-Time Sign Detection for Accessible Indoor Navigation.","authors":"Seyed Ali Cheraghi,&nbsp;Giovanni Fusco,&nbsp;James M Coughlan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Indoor navigation is a major challenge for people with visual impairments, who often lack access to visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. We describe a new approach to recognizing and analyzing informational signs, such as Exit and restroom signs, in a building. This approach will be incorporated in iNavigate, a smartphone app we are developing, that provides accessible indoor navigation assistance. The app combines a digital map of the environment with computer vision and inertial sensing to estimate the user's location on the map in real time. Our new approach can recognize and analyze any sign from a small number of training images, and multiple types of signs can be processed simultaneously in each video frame. Moreover, in addition to estimating the distance to each detected sign, we can also estimate the approximate sign orientation (indicating if the sign is viewed head-on or obliquely), which improves the localization performance in challenging conditions. We evaluate the performance of our approach on four sign types distributed among multiple floors of an office building.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8331194/pdf/nihms-1725000.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39277335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Accessible Audio Labeling of 3D Objects.
James M Coughlan, Huiying Shen, Brandon Biggs

We describe a new approach that enables a visually impaired person to apply audio labels to 3D objects such as appliances, 3D models, and maps. Our approach, called CamIO, is a smartphone app that issues audio labels when the user points to a hotspot (a location of interest on an object) with a handheld stylus viewed by the smartphone camera. The CamIO app allows a user to create a new hotspot by pointing at the location with a second stylus and recording a personalized audio label for it. In contrast with other audio labeling approaches, which require the object of interest to be constructed of special materials, 3D printed, or equipped with special sensors, CamIO works with virtually any rigid object and requires only a smartphone, a paper barcode pattern mounted to the object of interest, and two inexpensive styluses. Moreover, our approach allows a visually impaired user to create audio labels independently. We describe a co-design performed with six blind participants exploring how they label objects in their daily lives, and a study with the participants demonstrating the feasibility of CamIO for providing accessible audio labeling.
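
Once the stylus tip has been located in the object's coordinate frame (via the pose of the paper barcode pattern, which is omitted here), the remaining step is a nearest-hotspot lookup. The sketch below is a minimal, hypothetical version of that step; the hotspot coordinates, labels, and the 1 cm tolerance are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the hotspot lookup a CamIO-style app needs: find the
# stored hotspot closest to the stylus tip, within a tolerance, and
# return its audio label. All coordinates and labels are made up.
import math

hotspots = {
    (0.12, 0.05, 0.00): "power button",   # positions in object frame, meters
    (0.12, 0.09, 0.00): "timer dial",
}

def lookup(tip_xyz, radius=0.01):
    """Return the label of the closest hotspot within `radius` meters."""
    best, best_d = None, radius
    for pos, label in hotspots.items():
        d = math.dist(tip_xyz, pos)
        if d < best_d:
            best, best_d = label, d
    return best

print(lookup((0.121, 0.052, 0.0)))  # -> "power button"
```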

{"title":"Towards Accessible Audio Labeling of 3D Objects.","authors":"James M Coughlan,&nbsp;Huiying Shen,&nbsp;Brandon Biggs","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We describe a new approach to audio labeling of 3D objects such as appliances, 3D models and maps that enables a visually impaired person to audio label objects. Our approach to audio labeling is called CamIO, a smartphone app that issues audio labels when the user points to a <i>hotspot</i> (a location of interest on an object) with a handheld stylus viewed by the smartphone camera. The CamIO app allows a user to create a new hotspot location by pointing at the location with a second stylus and recording a personalized audio label for the hotspot. In contrast with other audio labeling approaches that require the object of interest to be constructed of special materials, 3D printed, or equipped with special sensors, CamIO works with virtually any rigid object and requires only a smartphone, a paper barcode pattern mounted to the object of interest, and two inexpensive styluses. Moreover, our approach allows a visually impaired user to create audio labels independently. We describe a co-design performed with six blind participants exploring how they label objects in their daily lives and a study with the participants demonstrating the feasibility of CamIO for providing accessible audio labeling.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7425180/pdf/nihms-1611173.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38279362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
S-K Smartphone Barcode Reader for the Blind.
Ender Tekin, David Vásquez, James M Coughlan

We describe a new smartphone app called BLaDE (Barcode Localization and Decoding Engine), designed to enable a blind or visually impaired user to find and read product barcodes. Developed at The Smith-Kettlewell Eye Research Institute, the BLaDE Android app has been released as open source software, which can be used for free or modified for commercial or non-commercial use. Unlike popular commercial smartphone apps, BLaDE provides real-time audio feedback to help visually impaired users locate a barcode, which is a prerequisite to being able to read it. We describe experiments performed with five blind/visually impaired volunteer participants demonstrating that BLaDE is usable and that the audio feedback is key to its usability.
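
BLaDE's distinctive feature is audio guidance while the user is still hunting for the barcode. The sketch below conveys the spirit of that feedback loop, with one substitution stated plainly: it uses pyzbar's decoder output as a stand-in detector, whereas BLaDE itself detects barcode-like texture even before the code is decodable. Camera setup, thresholds, and hint phrases are illustrative assumptions.

```python
# Hedged sketch of BLaDE-style aiming feedback, with pyzbar standing in
# for the real barcode-texture detector. Thresholds are made up.
import cv2
from pyzbar import pyzbar

def guidance(frame):
    codes = pyzbar.decode(frame)
    if not codes:
        return "no barcode found, keep scanning"
    r = codes[0].rect                  # left, top, width, height in pixels
    h, w = frame.shape[:2]
    cx = r.left + r.width / 2
    if cx < w * 0.3:
        return "move camera left"      # barcode sits left of center
    if cx > w * 0.7:
        return "move camera right"
    if r.width < w * 0.3:
        return "move closer"           # barcode too small to read reliably
    return f"hold steady: {codes[0].data.decode()}"

cap = cv2.VideoCapture(0)              # default camera, for illustration
ok, frame = cap.read()
if ok:
    print(guidance(frame))
cap.release()
```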

{"title":"S-K Smartphone Barcode Reader for the Blind.","authors":"Ender Tekin,&nbsp;David Vásquez,&nbsp;James M Coughlan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We describe a new smartphone app called BLaDE (Barcode Localization and Decoding Engine), designed to enable a blind or visually impaired user find and read product barcodes. Developed at The Smith-Kettlewell Eye Research Institute, the BLaDE Android app has been released as open source software, which can be used for free or modified for commercial or non-commercial use. Unlike popular commercial smartphone apps, BLaDE provides real-time audio feedback to help visually impaired users locate a barcode, which is a prerequisite to being able to read it. We describe experiments performed with five blind/visually impaired volunteer participants demonstrating that BLaDE is usable and that the audio feedback is key to its usability.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4288446/pdf/nihms626930.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32986799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0