
2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) — Latest Publications

Human Face Reconstruction under a HMD Occlusion
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797959
Zhengfu Peng, Ting Lu, Zhaowen Chen, Xiangmin Xu, Shu-Min Lin
With the help of existing augmented vision perception and motion capture technologies, virtual reality (VR) can immerse users in virtual environments. However, it is difficult for users to convey their actual emotions to others in these environments. Since head-mounted displays (HMDs) significantly occlude the user's face, it is hard to recover the full face directly with traditional techniques. In this paper, we introduce a novel method that addresses this problem using only an RGB image of the person, without the need for any other sensors or devices. First, we use facial landmark points to estimate the user's face shape, expression, and pose. Then, using information from the non-occluded face region, we recover the face texture and the illumination of the current scene.
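The landmark-driven pose step described above can be sketched as a least-squares similarity alignment between a reference face shape and the detected 2D landmarks (the Kabsch/Umeyama solution). This is a generic illustration, not the authors' implementation; the reference shape and landmark values are hypothetical.

```python
import numpy as np

def estimate_similarity_transform(model_pts, image_pts):
    """Estimate scale s, rotation R, and translation t mapping reference
    face landmarks onto detected image landmarks (least-squares Procrustes).
    model_pts, image_pts: (N, 2) arrays of corresponding 2D points."""
    mu_m = model_pts.mean(axis=0)
    mu_i = image_pts.mean(axis=0)
    Xm = model_pts - mu_m          # centered reference landmarks
    Xi = image_pts - mu_i          # centered detected landmarks
    # Cross-covariance and SVD give the optimal rotation.
    U, S, Vt = np.linalg.svd(Xi.T @ Xm)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (Xm ** 2).sum()
    t = mu_i - s * (R @ mu_m)
    return s, R, t
```

In a full pipeline the recovered pose would then be used to project the fitted 3D face model into the image and sample texture from the non-occluded region.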
Citations: 0
Training Transfer of Bimanual Assembly Tasks in Cost-Differentiated Virtual Reality Systems
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797917
S. Shen, Hsiang-Ting Chen, T. Leong
Recent advances in affordable virtual reality headsets make virtual reality training an economical choice compared to traditional training. However, these devices offer a range of different levels of fidelity and interaction, and few works have evaluated their validity against traditional training formats. This paper presents a study that compares the learning efficiency of a bimanual gearbox assembly task across traditional training, virtual reality training with direct 3D inputs (HTC VIVE), and virtual reality training without 3D inputs (Google Cardboard). A pilot study was conducted, and the results show that the HTC VIVE brings the best learning outcomes.
Citations: 1
Improve the Decision-making Skill of Basketball Players by an Action-aware VR Training System
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798309
Wan-Lun Tsai, Liwei Su, Tsai-Yen Ko, Cheng-Ta Yang, Min-Chun Hu
Decision-making is an essential part of basketball offense. In this paper, we propose a VR training system for basketball offensive decision-making. During training, the trainee wears a motion capture suit to interact intuitively with the system and is trained in different virtual defensive scenarios designed by professional coaches. The system recognizes the offensive action performed by the user and provides corrective suggestions when he or she makes a poor offensive decision. We compared the effectiveness of the training protocols using a conventional tactics board and the proposed VR system. Furthermore, we investigated the influence of using prerecorded 360-degree panorama video versus computer-simulated virtual content to create the immersive training environment.
Citations: 14
Interactive and Multimodal-based Augmented Reality for Remote Assistance using a Digital Surgical Microscope
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797682
E. Wisotzky, Jean-Claude Rosenthal, P. Eisert, A. Hilsmann, Falko Schmid, M. Bauer, Armin Schneider, F. Uecker
We present an interactive and multimodal augmented reality system for computer-assisted surgery in the context of ear, nose and throat (ENT) treatment. The proposed processing pipeline uses fully digital stereoscopic imaging devices, which support multispectral and white-light imaging to generate high-resolution image data, and consists of five modules. Input/output data handling, hybrid multimodal image analysis, and a bi-directional interactive augmented reality (AR) and mixed reality (MR) interface for local and remote surgical assistance are of high relevance for the complete framework. The hybrid multimodal 3D scene analysis module uses different wavelengths to classify tissue structures and combines this spectral data with metric 3D information. Additionally, we propose a zoom-independent intraoperative tool for virtual ossicular prosthesis insertion (e.g. stapedectomy), guaranteeing very high metric accuracy in the sub-millimeter range (1/10 mm). A bi-directional interactive AR/MR communication module guarantees low latency while conveying surgical information and avoiding information overload. Display-agnostic AR/MR visualization can show our analyzed data synchronized inside the digital binocular, the 3D display, or any connected head-mounted display (HMD). In addition, the analyzed data can be enriched with annotations by involving external clinical experts via AR/MR, together with accurate registration of preoperative data. The benefits of such a collaborative surgical system are manifold and will lead to a highly improved patient outcome through easier tissue classification and reduced surgery risk.
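The wavelength-based tissue classification can be illustrated, in spirit, by a per-pixel nearest-centroid rule over multispectral signatures. This is a minimal sketch under assumed data layouts; the paper does not specify its classifier, and `centroids` (one mean signature per tissue class) is a hypothetical input.

```python
import numpy as np

def classify_spectra(pixels, centroids):
    """Nearest-centroid classification of per-pixel multispectral signatures.
    pixels:    (N, B) reflectance values over B wavelength bands
    centroids: (K, B) mean signature per tissue class
    Returns the index of the closest class for each pixel."""
    # Broadcast to an (N, K) matrix of Euclidean distances, then pick
    # the closest class per pixel.
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```

In a real system the centroids would be learned from labeled tissue samples, and the per-pixel labels would then be fused with the metric 3D reconstruction.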
Citations: 23
Unifying Research to Address Motion Sickness
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798297
Mark S. Dennison, D. Krum
Be it discussed as cybersickness, immersive sickness, simulator sickness, or virtual reality sickness, the ill effects of visuo-vestibular mismatch in immersive environments are of great concern for the wider adoption of virtual reality and related technologies. In this position paper, we discuss a unified research approach that may address motion sickness and identify critical research topics.
Citations: 5
ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798363
Cheng Yao Wang, Mose Sakashita, Upol Ehsan, Jingjin Li, A. S. Won
We present a new way of sharing VR experiences over distance that allows people to relive their recorded experiences in VR together. We describe a pilot study examining the user experience when people share their VR experiences together remotely. Finally, we discuss the implications of sharing VR experiences across time and space.
Citations: 3
An Educational Augmented Reality Application for Elementary School Students Focusing on the Human Skeletal System
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798058
M. Kouzi, Abdihakim Mao, Diego Zambrano
Augmented Reality (AR), as a new field within Human-Computer Interaction (HCI), has been gaining momentum in the last few years. The ability to project interactive graphics into real-life environments can serve various fields and both research and commercial goals. In education, textbooks are still the primary tool students use to learn about new topics. Since AR requires interaction and exploration, it brings a ludic component that is hard to replicate with regular textbooks. The application we developed allows elementary school students to interact with a fully three-dimensional human skeleton model using specialized virtual buttons. Students can understand this complex structure and learn the names of important bones using just a tablet, a picture, and their hands. Results show that the majority of students consider that our AR application helped them visualize and learn more about the human skeletal system. Additionally, the data we gathered show a 16% increase in correct responses regarding bone names after using our AR application. Our AR application successfully helped the students learn about the human skeletal system while introducing them to AR technologies.
Citations: 6
Enchanting Your Noodles: GAN-based Real-time Food-to-Food Translation and Its Impact on Vision-induced Gustatory Manipulation
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798336
K. Nakano, K. Kiyokawa, Daichi Horita, Keiji Yanai, Nobuchika Sakata, Takuji Narumi
We propose a novel gustatory manipulation interface which utilizes the cross-modal effect of vision on taste elicited with augmented reality (AR)-based real-time food appearance modulation using a generative adversarial network (GAN). Unlike existing systems which only change color or texture pattern of a particular type of food in an inflexible manner, our system changes the appearance of food into multiple types of food in real-time flexibly, dynamically and interactively in accordance with the deformation of the food that the user is actually eating by using GAN-based image-to-image translation. The experimental results reveal that our system successfully manipulates gustatory sensations to some extent and that the effectiveness depends on the original and target types of food as well as each user's food experience.
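A real-time GAN translation loop of this kind typically normalizes each camera frame to the generator's input range and maps the output back to displayable pixels. The sketch below assumes the common pix2pix-style [-1, 1] convention and leaves the generator as a placeholder; it is an illustration, not the authors' code.

```python
import numpy as np

def preprocess(frame):
    """Map an 8-bit RGB frame (H, W, 3) to the [-1, 1] range commonly
    used by pix2pix-style generators."""
    return frame.astype(np.float32) / 127.5 - 1.0

def postprocess(tensor):
    """Map generator output in [-1, 1] back to displayable 8-bit RGB."""
    return np.clip(np.rint((tensor + 1.0) * 127.5), 0, 255).astype(np.uint8)

def translate_frame(frame, generator):
    """One step of the real-time loop. `generator` is a placeholder for
    any image-to-image network mapping source food to target food."""
    return postprocess(generator(preprocess(frame)))
```

In the full system this step would run per HMD frame, so the generator must be fast enough to keep up with the camera rate.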
Citations: 11
Imspector: Immersive System of Inspection of Bridges/Viaducts
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798295
M. Veronez, L. G. D. Silveira, F. Bordin, Leonardo Campos Inocencio, Graciela Racolte, L. S. Kupssinskü, Pedro Rossa, L. Scalco
One of the main difficulties in inspecting bridges and viaducts by observation is inaccessibility, or lack of access, throughout the structure. Mapping with remote sensors on Unmanned Aerial Vehicles (UAVs) or by laser scanning can be an interesting alternative for the engineer, as it enables more detailed analysis and diagnostics. Such mapping techniques also allow the generation of realistic 3D models that can be integrated into Virtual Reality (VR) environments. In this context, we present ImSpector, a system that uses realistic 3D models generated from remote sensors carried by UAVs to implement a virtual, immersive environment for inspections. As a result, the system gives the engineer a tool to carry out field tests directly from the office, ensuring agility, accuracy, and safety in bridge and viaduct inspections.
Citations: 2
Virtual Reality Video Game Paired with Physical Monocular Blurring as Accessible Therapy for Amblyopia
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797997
O. Hurd, S. Kurniawan, M. Teodorescu
This paper discusses a virtual reality (VR) therapeutic video game for treating the neurological eye disorder amblyopia. Amblyopia is often referred to as lazy eye; it entails weaker vision in one eye due to a poor connection between the eye and the brain. Until recently it was thought to be untreatable in adults, but new research has shown that with consistent therapy even adults can improve their amblyopia, especially through perceptual learning and video games. Even so, therapy compliance remains low because conventional therapies are perceived as invasive, dull, and/or boring. Our game aims to make amblyopia therapy more immersive, enjoyable, and playful. Our users perceived the game as a fun and accessible alternative: a Bangerter foil (an opaque sticker) is adhered to a VR headset to blur vision in the amblyopic person's dominant eye while they play a VR video game. To perform well in the game, their brain must adapt to rely on seeing with the weaker eye, thereby reforging that neurological connection. While testing our game, we also studied users' behavior to investigate which visual and kinetic components were more effective therapeutically. Our findings generally show positive results, with visual acuity in adults increasing after 45 minutes of therapy. Amblyopia has many negative symptoms, including poor depth perception (necessary for daily activities such as driving), so this therapy could be life-changing for adults with amblyopia.
Citations: 11