
Latest publications — Computers helping people with special needs : ... International Conference, ICCHP ... : proceedings. International Conference on Computers Helping People with Special Needs

Accessible Point-and-Tap Interaction for Acquiring Detailed Information about Tactile Graphics and 3D Models.
Andrea Narcisi, Huiying Shen, Dragan Ahmetovic, Sergio Mascetti, James M Coughlan

We have devised a novel "Point-and-Tap" interface that enables people who are blind or visually impaired (BVI) to easily acquire multiple levels of information about tactile graphics and 3D models. The interface uses an iPhone's depth and color cameras to track the user's hands while they interact with a model. When the user points to a feature of interest on the model with their index finger, the system reads aloud basic information about that feature. For additional information, the user lifts their index finger and taps the feature again. This process can be repeated multiple times to access additional levels of information. For instance, tapping once on a region in a tactile map could trigger the name of the region, with subsequent taps eliciting the population, area, climate, etc. No audio labels are triggered unless the user makes a pointing gesture, which allows the user to explore the model freely with one or both hands. Multiple taps can be used to skip through information levels quickly, with each tap interrupting the current utterance. This allows users to reach the desired level of information more quickly than listening to all levels in sequence. Experiments with six BVI participants demonstrate that the approach is practical, easy to learn and effective.
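The tap-to-advance behavior the abstract describes — each tap on a feature advancing one information level and interrupting the utterance in progress — can be sketched as below. The class name, region labels, and wrap-around behavior are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of multi-level "Point-and-Tap" audio logic:
# repeated taps on the same feature step through information levels.

class PointAndTapFeature:
    def __init__(self, name, levels):
        self.name = name      # e.g. a region on a tactile map
        self.levels = levels  # ordered info strings, level 0 first
        self.current = -1     # nothing announced yet

    def tap(self):
        """Advance to the next information level and return the text to
        speak (a real system would also cancel the current utterance,
        which is what lets rapid taps skip ahead)."""
        self.current = (self.current + 1) % len(self.levels)
        return self.levels[self.current]

region = PointAndTapFeature("California", [
    "California",
    "Population: about 39 million",
    "Area: about 424,000 square kilometers",
    "Climate: Mediterranean along the coast",
])

print(region.tap())  # California
print(region.tap())  # Population: about 39 million
```

Because each tap returns immediately and interrupts the previous utterance, skipping to level 3 costs three quick taps rather than listening to levels 0–2 in full.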

DOI: 10.1007/978-3-031-62846-7_30 · Vol. 14750, pp. 252–259 · Published 2024-07-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11338176/pdf/
Citations: 0
Step Length Estimation for Blind Walkers.
Fatemeh Elyasi, Roberto Manduchi

Wayfinding systems using inertial data recorded from a smartphone carried by the walker have great potential for increasing mobility independence of blind pedestrians. Pedestrian dead-reckoning (PDR) algorithms for localization require estimation of the step length of the walker. Prior work has shown that step length can be reliably predicted by processing the inertial data recorded by the smartphone with a simple machine learning algorithm. However, this prior work only considered sighted walkers, whose gait may be different from that of blind walkers using a long cane or a dog guide. In this work, we show that a step length estimation network trained on data from sighted walkers performs poorly when tested on blind walkers, and that retraining with data from blind walkers can dramatically increase the accuracy of step length prediction.
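A minimal sketch of why cross-population retraining matters: fit a one-feature linear step-length regressor (a toy stand-in for the paper's network) on one gait population and evaluate it on another. All feature values and step lengths below are synthetic, invented for illustration.

```python
# Toy demonstration: a step-length model fit on "sighted" gait data
# has high error on "blind walker" data; refitting removes the bias.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mae(model, xs, ys):
    """Mean absolute error of the linear model on (xs, ys)."""
    a, b = model
    return sum(abs(a * x + b - y) for x, y in zip(xs, ys)) / len(xs)

# Feature: peak vertical acceleration per step (m/s^2); target: step length (m).
sighted = ([2.0, 2.5, 3.0, 3.5], [0.60, 0.68, 0.75, 0.82])
blind_cane = ([2.0, 2.5, 3.0, 3.5], [0.45, 0.50, 0.55, 0.60])  # shorter steps

model_sighted = fit_linear(*sighted)
model_blind = fit_linear(*blind_cane)

print(f"cross-population MAE: {mae(model_sighted, *blind_cane):.3f}")  # large
print(f"retrained MAE:        {mae(model_blind, *blind_cane):.3f}")    # near zero
```

The systematic gap mirrors the paper's finding: the same inertial feature maps to a different step length when the gait itself differs, so the model must be trained on data from the target population.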

DOI: 10.1007/978-3-031-62846-7_48 · Vol. 14750, pp. 400–407 · Published 2024-07-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11298791/pdf/
Citations: 0
Non-Visual Access to an Interactive 3D Map.
James M Coughlan, Brandon Biggs, Huiying Shen

Maps are indispensable for helping people learn about unfamiliar environments and plan trips. While tactile (2D) and 3D maps offer non-visual map access to people who are blind or visually impaired (BVI), this access is greatly enhanced by adding interactivity to the maps: when the user points at a feature of interest on the map, the name and other information about the feature is read aloud in audio. We explore how the use of an interactive 3D map of a playground, containing over seventy play structures and other features, affects spatial learning and cognition. Specifically, we perform experiments in which four blind participants answer questions about the map to evaluate their grasp of three types of spatial knowledge: landmark, route and survey. The results of these experiments demonstrate that participants are able to acquire this knowledge, most of which would be inaccessible without the interactivity of the map.

DOI: 10.1007/978-3-031-08648-9_29 · Vol. 13341, pp. 253–260 · Published 2022-07-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9467469/pdf/nihms-1832494.pdf
Citations: 0
Computers Helping People with Special Needs: 18th International Conference, ICCHP-AAATE 2022, Lecco, Italy, July 11–15, 2022, Proceedings, Part I
DOI: 10.1007/978-3-031-08648-9 · Published 2022-01-01
Citations: 1
Computers Helping People with Special Needs: 18th International Conference, ICCHP-AAATE 2022, Lecco, Italy, July 11–15, 2022, Proceedings, Part II
DOI: 10.1007/978-3-031-08645-8 · Published 2022-01-01
Citations: 0
Correction to: Gait Patterns Monitoring Using Instrumented Forearm Crutches
Marien Narváez, J. Aranda
DOI: 10.1007/978-3-030-58805-2_58 · pp. C1 · Published 2020-09-04
Citations: 0
Correction to: Suitable Camera and Rotation Navigation for People with Visual Impairment on Looking for Something Using Object Detection Technique
M. Iwamura, Yoshihiko Inoue, Kazunori Minatani, K. Kise
DOI: 10.1007/978-3-030-58796-3_61 · pp. C1 · Published 2020-09-04
Citations: 0
Accelerometer-Based Machine Learning Categorization of Body Position in Adult Populations.
Leighanne Jarvis, Sarah Moninger, Juliessa Pavon, Chandra Throckmorton, Kevin Caves

This manuscript describes tests and results of a study evaluating classification algorithms, derived from accelerometer data collected on healthy adults and older adults, for classifying posture and movement. Specifically, tests were conducted to 1) compare the performance of one sensor vs. two sensors; 2) examine custom-trained algorithms that classify for a given task; and 3) determine overall classifier accuracy for healthy adults under 55 and older adults (55 or older). Despite the current variety of commercially available platforms, sensors, and analysis software, many do not provide the data granularity needed to characterize all stages of movement. Additionally, some clinicians have expressed concerns regarding the validity of analysis on specialized populations, such as hospitalized older adults. Accurate classification of movement data is important in a clinical setting as more hospital systems are using sensors to help with clinical decision making. We developed custom software and classification algorithms to identify lying, reclining, sitting, standing, and walking. Our algorithm accuracy is 93.2% for healthy adults under 55 and 95% for healthy older adults over 55 for the tasks in our setting. The high accuracy of this approach will aid future investigation into classifying movement in hospitalized older adults. Results from these tests also indicate that researchers and clinicians need to be aware of sensor body position in relation to where the algorithm used was trained. Additionally, results suggest more research is needed to determine if algorithms trained on one population can accurately be used to classify data from another population.
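To make the task concrete, here is a hedged toy version of posture categorization from a single body-worn accelerometer. The paper trains machine-learning classifiers; this sketch instead uses fixed tilt thresholds, and the axis convention and threshold angles are invented assumptions.

```python
import math

# Toy posture classifier from one gravity-vector reading (in g units),
# assuming +y points toward the head and +z out of the chest.
# Thresholds are illustrative, NOT the trained classifier from the paper.

def posture(ax, ay, az):
    """Return a coarse posture label from torso tilt relative to gravity."""
    # Angle between the body's head axis and the measured gravity direction.
    tilt = math.degrees(math.atan2(math.hypot(ax, az), ay))
    if tilt < 30:
        return "upright"    # standing or sitting tall
    elif tilt < 60:
        return "reclining"
    else:
        return "lying"

print(posture(0.0, 1.0, 0.0))  # upright: gravity along the body axis
print(posture(0.0, 0.0, 1.0))  # lying: gravity through the chest
```

A threshold rule like this cannot separate standing from sitting or detect walking, which is one reason the paper trains classifiers on richer accelerometer features instead.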

DOI: 10.1007/978-3-030-58805-2_29 · Vol. 12377, pp. 242–249 · Published 2020-09-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7548108/pdf/nihms-1634319.pdf
Citations: 0
An Indoor Navigation App using Computer Vision and Sign Recognition.
Giovanni Fusco, Seyed Ali Cheraghi, Leo Neat, James M Coughlan

Indoor navigation is a major challenge for people with visual impairments, who often lack access to visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. Building on our recent work on a computer vision-based localization approach that runs in real time on a smartphone, we describe an accessible wayfinding iOS app we have created that provides turn-by-turn directions to a desired destination. The localization approach combines dead reckoning obtained using visual-inertial odometry (VIO) with information about the user's location in the environment from informational sign detections and map constraints. We explain how we estimate the user's distance from Exit signs appearing in the image, describe new improvements in the sign detection and range estimation algorithms, and outline our algorithm for determining appropriate turn-by-turn directions.
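The distance-from-sign step can be illustrated with the standard pinhole-camera relation (distance = focal length × physical height / apparent pixel height). The focal length, sign height, and bounding-box size below are assumed example values, not figures from the paper, and the app's full algorithm additionally fuses VIO and map constraints.

```python
# Pinhole-model range estimate from a detected sign's apparent size.

def distance_to_sign(focal_px, sign_height_m, bbox_height_px):
    """Estimate camera-to-sign distance in meters.

    focal_px:        camera focal length in pixels
    sign_height_m:   known physical height of the sign (meters)
    bbox_height_px:  height of the detected sign's bounding box (pixels)
    """
    return focal_px * sign_height_m / bbox_height_px

# e.g. 1500 px focal length, a 0.20 m tall Exit sign, 30 px in the image:
print(distance_to_sign(1500, 0.20, 30))  # 10.0 (meters)
```

Note the estimate degrades as the bounding box shrinks: at 30 px a one-pixel detection error already shifts the range by roughly 3%, which is one motivation for fusing detections with dead reckoning.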

DOI: 10.1007/978-3-030-58796-3_56 · Vol. 12376, pp. 485–494 · Published 2020-09-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7703403/pdf/nihms-1645298.pdf
Citations: 13
An Audio-Based 3D Spatial Guidance AR System for Blind Users.
James M Coughlan, Brandon Biggs, Marc-Aurèle Rivière, Huiying Shen

Augmented reality (AR) has great potential for blind users because it enables a range of applications that provide audio information about specific locations or directions in the user's environment. For instance, the CamIO ("Camera Input-Output") AR app makes physical objects (such as documents, maps, devices and 3D models) accessible to blind and visually impaired persons by providing real-time audio feedback in response to the location on an object that the user is touching (using an inexpensive stylus). An important feature needed by blind users of AR apps such as CamIO is a 3D spatial guidance feature that provides real-time audio feedback to help the user find a desired location on an object. We have devised a simple audio interface to provide verbal guidance towards a target of interest in 3D. The experiment we report with blind participants using this guidance interface demonstrates the feasibility of the approach and its benefit for helping users find locations of interest.
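One way such verbal 3D guidance could work is to turn the offset between the stylus tip and the target into a short spoken cue. The phrasing, coordinate convention, and centimeter threshold below are assumptions for illustration, not CamIO's actual interface.

```python
# Toy verbal-guidance cue from a 3D offset (meters) between the tracked
# stylus tip and the target location on the object.

def verbal_cue(dx, dy, dz):
    """dx: right(+)/left(-), dy: up(+)/down(-), dz: forward(+)/back(-).
    Offsets under 1 cm on an axis are treated as already aligned."""
    parts = []
    if abs(dx) > 0.01:
        parts.append(f"{'right' if dx > 0 else 'left'} {abs(dx) * 100:.0f} cm")
    if abs(dy) > 0.01:
        parts.append(f"{'up' if dy > 0 else 'down'} {abs(dy) * 100:.0f} cm")
    if abs(dz) > 0.01:
        parts.append(f"{'forward' if dz > 0 else 'back'} {abs(dz) * 100:.0f} cm")
    return ", ".join(parts) if parts else "on target"

print(verbal_cue(0.05, 0.0, -0.02))  # right 5 cm, back 2 cm
print(verbal_cue(0.0, 0.0, 0.0))     # on target
```

Re-announcing the cue as the offset shrinks gives the user continuous feedback until the "on target" state is reached.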

DOI: 10.1007/978-3-030-58796-3_55 · Vol. 12376, pp. 475–484 · Published 2020-09-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7676634/pdf/nihms-1645296.pdf
Citations: 9