Latest Publications: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)

Repurposing Labeled Photographs for Facial Tracking with Alternative Camera Intrinsics
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798303
Caio Brito, Kenny Mitchell
Acquiring manually labeled training data for a specific application is expensive, and while such data is often readily available for casual camera imagery, it is not a good fit for novel cameras. To overcome this, we present a repurposing approach that relies on spherical image warping to retarget an existing dataset of landmark-labeled casual photographs of people's faces in arbitrary poses, captured with regular camera lenses, to target cameras with significantly different intrinsics, such as those often attached to head-mounted displays (HMDs): wide-angle lenses needed to observe the mouth and other features at close proximity, and infrared-only sensing for eye observations. Our method predicts landmarks of the HMD wearer in facial sub-regions in a divide-and-conquer fashion, with particular focus on the mouth and eyes. We demonstrate animated avatars in real time using the face landmarks as input, without a user-specific or application-specific dataset.
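The core retargeting step can be pictured as lifting each labeled landmark through the source camera's intrinsics onto a viewing sphere and reprojecting it with the target camera's model. The sketch below is only a minimal illustration of that idea, assuming a pinhole source camera and an equidistant fisheye target with made-up intrinsic values; the paper's actual pipeline warps the full images, not just the landmark points.

```python
import numpy as np

def unproject_pinhole(uv, K):
    """Lift 2D pixel coordinates to unit-length viewing rays (pinhole model)."""
    x = (uv[:, 0] - K["cx"]) / K["fx"]
    y = (uv[:, 1] - K["cy"]) / K["fy"]
    rays = np.stack([x, y, np.ones_like(x)], axis=1)
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def project_equidistant(rays, K):
    """Project unit rays with an equidistant (fisheye) model: r = f * theta."""
    theta = np.arccos(np.clip(rays[:, 2], -1.0, 1.0))  # angle to the optical axis
    phi = np.arctan2(rays[:, 1], rays[:, 0])            # azimuth in the image plane
    r = K["f"] * theta
    return np.stack([K["cx"] + r * np.cos(phi), K["cy"] + r * np.sin(phi)], axis=1)

# Hypothetical intrinsics: a regular photo camera (source) and a wide-angle
# HMD-mounted camera (target). The values are placeholders, not from the paper.
K_src = {"fx": 1200.0, "fy": 1200.0, "cx": 640.0, "cy": 480.0}
K_tgt = {"f": 350.0, "cx": 320.0, "cy": 240.0}

landmarks_src = np.array([[600.0, 500.0], [700.0, 450.0]])  # labeled 2D landmarks
landmarks_tgt = project_equidistant(unproject_pinhole(landmarks_src, K_src), K_tgt)
print(landmarks_tgt)
```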
Citations: 2
Evaluation on a Wheelchair Simulator Using Limited-Motion Patterns and Vection-Inducing Movies
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797726
Akihiro Miyata, Hironobu Uno, Kenro Go
Existing virtual reality (VR) based wheelchair simulators have difficulty providing both visual and motion feedback at low cost. To address this issue, we propose a VR-based wheelchair simulator that combines motions attainable by an electric-powered wheelchair with vection-inducing movies displayed on a head-mounted display. This approach gives the user a richer simulation experience, because the scenes of the movie change as if the wheelchair were performing motions that it cannot actually perform. We developed a proof of concept using only consumer products and conducted evaluation tasks, confirming that our approach can provide a richer experience for barrier simulations.
Citations: 4
Grasping objects in immersive Virtual Reality
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798155
Manuela Chessa, Guido Maiello, Lina K. Klein, Vivian C. Paulun, F. Solari
Grasping is one of the fundamental actions we perform to interact with objects in real environments, and in the real world we rarely experience difficulty picking up objects. Grasping plays a fundamental role in interactive virtual reality (VR) systems, which are increasingly employed not only for recreational purposes but also for training in industrial contexts, in medical tasks, and in rehabilitation protocols. To ensure the effectiveness of such VR applications, we must understand whether the grasping behaviors and strategies employed in the real world are also adopted when interacting with objects in VR. To this aim, we replicated in VR an experimental paradigm used to investigate grasping behavior in the real world. We tracked participants' forefinger and thumb as they picked up, in a VR environment, unfamiliar objects presented at different orientations and exhibiting the same physics behavior as their real counterparts. We compared grasping behavior within and across participants, in VR and in the corresponding real-world situation. Our findings highlight the similarities and differences in grasping behavior in real and virtual environments.
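Thumb and forefinger tracking of this kind is commonly summarized as grip aperture, the fingertip-to-fingertip distance over the course of the reach, with peak grip aperture as a standard comparison measure. The snippet below is a minimal sketch of that computation using randomly generated stand-in trajectories, not the study's data or analysis code.

```python
import numpy as np

def grip_aperture(thumb_xyz, index_xyz):
    """Thumb-forefinger Euclidean distance per frame (grip aperture, in metres)."""
    return np.linalg.norm(thumb_xyz - index_xyz, axis=1)

# Hypothetical tracked fingertip trajectories: 120 frames x 3 coordinates (metres).
rng = np.random.default_rng(0)
thumb = rng.normal(scale=0.005, size=(120, 3)) + np.array([0.00, 0.00, 0.40])
index = rng.normal(scale=0.005, size=(120, 3)) + np.array([0.06, 0.01, 0.40])

aperture = grip_aperture(thumb, index)
print("peak grip aperture (m):", aperture.max())
print("frame of peak aperture:", int(aperture.argmax()))
```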
Citations: 17
Imspector: Immersive System of Inspection of Bridges/Viaducts
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798295
M. Veronez, L. G. D. Silveira, F. Bordin, Leonardo Campos Inocencio, Graciela Racolte, L. S. Kupssinskü, Pedro Rossa, L. Scalco
One of the main difficulties in inspecting bridges and viaducts by direct observation is that parts of the structure are inaccessible or hard to reach. Mapping with remote sensors on Unmanned Aerial Vehicles (UAVs) or by means of laser scanning can be an interesting alternative for the engineer, as it enables more detailed analysis and diagnostics. Such mapping techniques also allow the generation of realistic 3D models that can be integrated into Virtual Reality (VR) environments. To this end, we present ImSpector, a system that uses realistic 3D models generated from remote sensors embedded in UAVs to implement a virtual, immersive environment for inspections. As a result, the system provides the engineer with a tool to carry out field tests directly from the office, ensuring agility, accuracy, and safety in bridge and viaduct inspections.
Citations: 2
Improve the Decision-making Skill of Basketball Players by an Action-aware VR Training System
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798309
Wan-Lun Tsai, Liwei Su, Tsai-Yen Ko, Cheng-Ta Yang, Min-Chun Hu
Decision-making is an essential part of basketball offenses. In this paper, we propose a basketball offensive decision-making VR training system. During the training, the trainee wears a motion capture suit, intuitively interacts with the system, and is trained in different virtual defensive scenarios designed by professional coaches. The system recognizes the offensive action performed by the user and provides correct suggestions when the user makes a poor offensive decision. We compared the effectiveness of the training protocols using a conventional tactics board and the proposed VR system. Furthermore, we investigated the influence of using prerecorded 360-degree panorama video versus computer-simulated virtual content to create an immersive training environment.
Citations: 14
Get a Grip! Introducing Variable Grip for Controller-Based VR Systems
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797824
Michael Bonfert, R. Porzel, R. Malaka
We propose an approach that facilitates adjustable grip for object interaction in virtual reality. It enables the user to handle objects with a loose or firm grip using conventional controllers. Pivotal design properties were identified and evaluated in a qualitative pilot study. Two revised interaction designs with variable grip were then compared to the status quo of invariable grip in a quantitative study. The users performed placing actions with all interaction modes. Performance, clutching, task load, and usability were measured. While the handling time increased slightly with variable grip, the usability score was significantly higher. No substantial differences were measured in positioning accuracy. The results lead to the conclusion that variable grip can be useful and can improve realism depending on the task, the goal, and user preference.
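One plausible way to expose variable grip on a conventional controller is to bin its analog grip axis into released, loose, and firm states and couple the held object accordingly. The sketch below is purely illustrative, with hypothetical thresholds; it does not reproduce either of the paper's two interaction designs.

```python
from dataclasses import dataclass

@dataclass
class GripState:
    held: bool   # object attached to the hand at all
    firm: bool   # rigid attachment (firm) vs. loose coupling

def classify_grip(analog_grip: float,
                  release_threshold: float = 0.2,
                  firm_threshold: float = 0.8) -> GripState:
    """Map a controller's analog grip axis (0..1) to a loose or firm hold."""
    if analog_grip < release_threshold:
        return GripState(held=False, firm=False)  # released
    if analog_grip < firm_threshold:
        return GripState(held=True, firm=False)   # loose: object may dangle or slip
    return GripState(held=True, firm=True)        # firm: object follows the hand rigidly

print(classify_grip(0.5))  # GripState(held=True, firm=False)
```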
Citations: 6
Training Transfer of Bimanual Assembly Tasks in Cost-Differentiated Virtual Reality Systems
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797917
S. Shen, Hsiang-Ting Chen, T. Leong
Recent advances in affordable virtual reality headsets make virtual reality training an economical choice compared to traditional training. However, these virtual reality devices offer a range of different levels of fidelity and interaction. Few works have evaluated their validity against traditional training formats. This paper presents a study that compares the learning efficiency of a bimanual gearbox assembly task across traditional training, virtual reality training with direct 3D inputs (HTC VIVE), and virtual reality training without 3D inputs (Google Cardboard). A pilot study was conducted, and the results show that HTC VIVE brings the best learning outcomes.
Citations: 1
Virtual Reality Video Game Paired with Physical Monocular Blurring as Accessible Therapy for Amblyopia
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797997
O. Hurd, S. Kurniawan, M. Teodorescu
This paper discusses a virtual reality (VR) therapeutic video game for treatment of the neurological eye disorder amblyopia. Amblyopia is often referred to as lazy eye, and it entails weaker vision in one eye due to a poor connection between the eye and the brain. Until recently it was thought to be untreatable in adults, but new research has shown that with consistent therapy even adults can improve their amblyopia, especially through perceptual learning and video games. Even so, therapy compliance remains low because conventional therapies are perceived as invasive, dull, and/or boring. Our game aims to make amblyopia therapy more immersive, enjoyable, and playful. The game was perceived by our users as a fun and accessible alternative: it involves adhering a Bangerter foil (an opaque sticker) to a VR headset to blur vision in the amblyopic person's dominant eye while having them play a VR video game. To perform well in the video game, their brain must adapt to rely on seeing with the weaker eye, thereby reforging that neurological connection. While testing our game, we also studied users' behavior to investigate which visual and kinetic components were more effective therapeutically. Our findings generally show positive results, indicating that visual acuity in adults increases with 45 minutes of therapy. Amblyopia has many negative symptoms, including poor depth perception (necessary for daily activities such as driving), so this therapy could be life changing for adults with amblyopia.
Citations: 11
Required Accuracy of Gaze Tracking for Varifocal Displays
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798273
David Dunn
Varifocal displays are a practical method to solve the vergence–accommodation conflict in near-eye displays for both virtual and augmented reality, but they rely on knowing the user's focal state. One approach for detecting the focal state is to use the link between vergence and accommodation and employ binocular gaze tracking to determine the depth of the fixation point; consequently, the focal depth is also known. To ensure the virtual image is in focus, the display must be set to a depth that causes no negative perceptual or physiological effects for the viewer, which yields error bounds for the calculation of the fixation point. I analyze the gaze tracker accuracy required to ensure the display focus is set within the viewer's depth of field, zone of comfort, and zone of clear single binocular vision. My findings indicate that for the median adult using an augmented reality varifocal display, gaze tracking accuracy must be better than 0.541°. In addition, I discuss eye tracking approaches presented in the literature to determine their ability to meet the specified requirements.
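The vergence-to-depth link the analysis relies on is plain triangulation: with interpupillary distance IPD, a fixation at depth z subtends a vergence angle of 2·arctan(IPD / (2z)). The sketch below assumes a 63 mm IPD and, as a simplification of the paper's analysis, applies the quoted 0.541° bound as a perturbation of the whole vergence angle to show how angular error translates into focal-depth error.

```python
import numpy as np

IPD = 0.063  # assumed median adult interpupillary distance, in metres

def vergence_angle(depth):
    """Vergence angle (radians) when both eyes fixate a point at `depth` metres."""
    return 2.0 * np.arctan((IPD / 2.0) / depth)

def depth_from_vergence(angle):
    """Invert the vergence geometry to recover the fixation depth in metres."""
    return (IPD / 2.0) / np.tan(angle / 2.0)

true_depth = 1.0                 # fixate an object 1 m away
gaze_error = np.deg2rad(0.541)   # the accuracy bound reported in the paper
angle = vergence_angle(true_depth)

# A mis-estimated vergence angle shifts the depth the varifocal display is set to.
print("depth if vergence is overestimated :", depth_from_vergence(angle + gaze_error))
print("depth if vergence is underestimated:", depth_from_vergence(angle - gaze_error))
```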
Citations: 12
Interactive and Multimodal-based Augmented Reality for Remote Assistance using a Digital Surgical Microscope
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797682
E. Wisotzky, Jean-Claude Rosenthal, P. Eisert, A. Hilsmann, Falko Schmid, M. Bauer, Armin Schneider, F. Uecker
We present an interactive and multimodal-based augmented reality system for computer-assisted surgery in the context of ear, nose and throat (ENT) treatment. The proposed processing pipeline uses fully digital stereoscopic imaging devices, which support multispectral and white-light imaging to generate high-resolution image data, and consists of five modules. Input/output data handling, hybrid multimodal image analysis, and a bi-directional interactive augmented reality (AR) and mixed reality (MR) interface for local and remote surgical assistance are of high relevance for the complete framework. The hybrid multimodal 3D scene analysis module uses different wavelengths to classify tissue structures and combines this spectral data with metric 3D information. Additionally, we propose a zoom-independent intraoperative tool for virtual ossicular prosthesis insertion (e.g., stapedectomy), guaranteeing very high metric accuracy in the sub-millimeter range (1/10 mm). A bi-directional interactive AR/MR communication module guarantees low latency while conveying surgical information and avoiding informational overload. Display-agnostic AR/MR visualization can show our analyzed data synchronized inside the digital binocular, on the 3D display, or on any connected head-mounted display (HMD). In addition, the analyzed data can be enriched with annotations by involving external clinical experts using AR/MR, and furthermore with an accurate registration of preoperative data. The benefits of such a collaborative surgical system are manifold and will lead to highly improved patient outcomes through easier tissue classification and reduced surgery risk.
Citations: 23