
Latest Publications from the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)

Reducing Seasickness in Onboard Marine VR Use through Visual Compensation of Vessel Motion
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797800
A. Stevens, T. Butkiewicz
We developed a virtual reality interface for cleaning sonar point cloud data. Experimentally, users performed better when using this VR interface compared to a mouse-and-keyboard with a desktop monitor. However, hydrographers often clean data aboard moving vessels, which can create motion sickness. Users of VR experience motion sickness as well, in the form of simulator sickness. Combining the two is a worst-case scenario for motion sickness. Advice for avoiding seasickness includes focusing on the horizon or objects in the distance, to keep your frame of reference external. We explored moving the surroundings in a virtual environment to match vessel motion, to assess whether it provides similar visual cues that could prevent seasickness. An informal evaluation in a seasickness-inducing simulator was conducted, and subjective preliminary results hint at such compensation's potential for reducing motion sickness, enabling the use of immersive VR technologies aboard underway ships.
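The compensation idea, counter-rotating the virtual surroundings by the vessel's measured attitude so they stay level with the real horizon, can be sketched in a few lines. This is an illustrative sketch only; the paper publishes no code, and the rotation convention and function names here are assumptions:

```python
import numpy as np

def rotation_matrix(roll, pitch):
    """Rotation from vessel frame to world frame (roll about x, then
    pitch about y; yaw omitted since it does not tilt the horizon)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return Ry @ Rx

def compensate(environment_points, vessel_roll, vessel_pitch):
    """Counter-rotate the virtual surroundings by the vessel's attitude.

    Each row of `environment_points` is a 3D point in the vessel-fixed
    frame; applying the inverse (transpose) of the attitude rotation
    keeps the rendered surroundings aligned with the inertial horizon.
    """
    R = rotation_matrix(vessel_roll, vessel_pitch)
    return environment_points @ R  # row-vector form of R.T @ p
```

Re-rotating a compensated point by the vessel attitude recovers the original point, which is exactly the property that keeps the virtual horizon stable while the ship moves.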
Citations: 6
Keynote Speaker: Virtual Reality for Enhancing Human Perceptional Diversity Towards an Inclusive Society
Pub Date : 2019-03-23 DOI: 10.1109/vr.2019.8798046
Yoichi Ochiai
We conducted a research project towards an inclusive society from the viewpoint of computational assistive technologies. This project aims to explore AI-assisted human-machine integration techniques for overcoming impairments and disabilities. By connecting assistive hardware and auditory/visual/tactile sensors and actuators with a user-adaptive and interactive learning framework, we propose and develop a proof of concept of our “xDiversity AI platform” to meet the various abilities, needs, and demands in our society. For example, one of our studies is an AI-driven automatic-driving wheelchair called the “tele wheelchair”. Its purpose is not fully automated driving but labor saving at nursing care sites and nursing care through natural communication. These studies attempt to solve the challenges facing the body and sense organs with the help of AI and related technologies. In this keynote we explain the case studies and our final goal for the social design and deployment of assistive technologies towards an inclusive society.
Citations: 0
A Study in Virtual Reality on (Non-)Gamers' Attitudes and Behaviors
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797750
Sebastian Stadler, H. Cornet, F. Frenkler
Virtual Reality (VR) constitutes an advantageous alternative for research on scenarios that are not feasible under real-life conditions. Thus, this technology was used in the presented study for the behavioral observation of participants exposed to autonomous vehicles (AVs). Further data was collected via questionnaires before the experience, directly after it, and one month later, to measure the impact the experience had on participants' general attitude towards AVs. Although the results were not statistically significant, first insights suggest that participants with little prior gaming experience were more affected than gamers. Future work will involve a bigger sample size and refined questionnaires.
Citations: 5
Optimised Molecular Graphics on the HoloLens
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798111
C. Müller, Matthias Braun, T. Ertl
The advent of modern and affordable augmented reality headsets like the Microsoft HoloLens has sparked new interest in using virtual and augmented reality technology in the analysis of molecular data. For any visualisation in immersive, mixed-reality scenarios, a sufficiently high rendering speed is essential, which is a challenge on fully untethered devices whose limited processing power must handle computationally expensive visualisations. Recent research shows that the space-filling model of even small data sets from the Protein Data Bank (PDB) cannot be rendered at desirable frame rates on the HoloLens. In this work, we report on how to improve the rendering speed of atom-based visualisation of proteins and how the rendering of more abstract representations of the molecules compares against it. We complement our findings with in-depth GPU and CPU performance numbers.
Citations: 4
Dense 3D Scene Reconstruction from Multiple Spherical Images for 3-DoF+ VR Applications
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798281
T. L. T. D. Silveira, C. Jung
We propose a novel method for estimating the 3D geometry of indoor scenes based on multiple spherical images. Our technique produces a dense depth map registered to a reference view so that depth-image-based-rendering (DIBR) techniques can be explored for providing three-degrees-of-freedom plus immersive experiences to virtual reality users. The core of our method is to explore large displacement optical flow algorithms to obtain point correspondences, and use cross-checking and geometric constraints to detect and remove bad matches. We show that selecting a subset of the best dense matches leads to better pose estimates than traditional approaches based on sparse feature matching, and explore a weighting scheme to obtain the depth maps. Finally, we adapt a fast image-guided filter to the spherical domain for enforcing local spatial consistency, improving the 3D estimates. Experimental results indicate that our method quantitatively outperforms competitive approaches on computer-generated images and synthetic data under noisy correspondences and camera poses. Also, we show that the estimated depth maps obtained from only a few real spherical captures of the scene are capable of producing coherent synthesized binocular stereoscopic views by using traditional DIBR methods.
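The forward-backward cross-check used to remove bad matches can be illustrated in a few lines. This is a minimal sketch under assumed conventions (the pixel tolerance and variable names are not from the paper):

```python
import numpy as np

def cross_check(src, back_src, tol=1.0):
    """Filter flow correspondences by forward-backward consistency.

    src[i]      : pixel in the reference view
    back_src[i] : where the backward flow maps the forward match of
                  src[i] back into the reference view
    A correspondence is kept when the round trip returns within `tol`
    pixels of its starting point.
    """
    err = np.linalg.norm(np.asarray(src) - np.asarray(back_src), axis=1)
    return err < tol
```

Matches that fail the round trip (occlusions, flow errors) are discarded before pose estimation, which is what the abstract refers to as removing bad matches via cross-checking.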
Citations: 16
Virtual Reality Instruction Followed by Enactment Can Increase Procedural Knowledge in a Science Lesson
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797755
N. K. Andreasen, Sarune Baceviciute, Prajakt Pande, G. Makransky
A 2×2 between-subjects experiment (a) investigated and compared the instructional effectiveness of immersive virtual reality (VR) versus video as media for teaching scientific procedural knowledge, and (b) examined the efficacy of enactment as a generative learning strategy in combination with the respective instructional media. A total of 117 high school students (74 females) were randomly distributed across four instructional groups — VR and enactment, video and enactment, only VR, and only video. Outcome measures included declarative knowledge, procedural knowledge, knowledge transfer, and subjective ratings of perceived enjoyment. Results indicated that there were no main effects or interactions for the outcomes of declarative knowledge or transfer. However, there was a significant interaction between media and method for the outcome of procedural knowledge with the VR and enactment group having the highest performance. Furthermore, media also seemed to have a significant effect on student perceived enjoyment, indicating that the groups enjoyed the VR simulation significantly more than the video. The results deepen our understanding of how we learn with immersive technology, as well as suggest important implications for implementing VR in schools.
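The media × method interaction reported for procedural knowledge corresponds to a nonzero interaction contrast of the four cell means in the 2×2 design. A minimal illustration, using made-up numbers rather than the study's data:

```python
import numpy as np

def interaction_contrast(cell_means):
    """Interaction contrast for a 2x2 between-subjects design:
    (A1,B1) - (A1,B2) - (A2,B1) + (A2,B2).
    Zero when the two factor effects are purely additive; nonzero when
    the effect of one factor depends on the level of the other."""
    (a, b), (c, d) = np.asarray(cell_means, dtype=float)
    return a - b - c + d
```

For example, if the VR+enactment cell outperforms while the remaining cells are similar, the contrast is positive, mirroring the pattern the abstract describes (testing its significance would of course require the cell variances and sample sizes as well).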
Citations: 10
Haptic Interface Based on Optical Fiber Force Myography Sensor
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797788
E. Fujiwara, Yu Tzu Wu, M. K. Gomes, W. H. A. Silva, C. Suzuki
A haptic grasp interface based on the force myography technique is reported. The hand movements and forces during object manipulation are assessed by an optical fiber sensor attached to the forearm, so that the virtual contact can be computed and the reaction forces delivered to the subject by graphical and vibrotactile feedback. The system was successfully tested with different objects, providing a non-invasive and realistic approach for applications in virtual-reality environments.
Citations: 3
Repurposing Labeled Photographs for Facial Tracking with Alternative Camera Intrinsics
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798303
Caio Brito, Kenny Mitchell
Acquiring manually labeled training data for a specific application is expensive, and while such data is widely available for casual camera imagery, it is not a good fit for novel cameras. To overcome this, we present a repurposing approach that relies on spherical image warping to retarget an existing dataset of landmark-labeled casual photographs of people's faces in arbitrary poses, taken with regular camera lenses, to target cameras with significantly different intrinsics, such as those attached to head-mounted displays (HMDs): wide-angle lenses needed to observe the mouth and other features at close proximity, and infrared-only sensing for eye observations. Our method predicts landmarks of the HMD wearer in facial sub-regions in a divide-and-conquer fashion, with particular focus on the mouth and eyes. We demonstrate animated avatars in real time using the face landmarks as input, without a user-specific or application-specific dataset.
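Retargeting a landmark between cameras with different intrinsics amounts to keeping its viewing ray fixed and re-projecting it. For an undistorted pinhole pair the remap is a one-liner; note this is a simplified sketch, since the paper uses spherical warping to handle wide-angle HMD lenses that a pinhole model only approximates:

```python
import numpy as np

def remap_landmark(uv, K_src, K_dst):
    """Re-express a 2D landmark under new camera intrinsics by keeping
    its normalized viewing ray fixed (pinhole, distortion-free)."""
    ray = np.linalg.inv(K_src) @ np.array([uv[0], uv[1], 1.0])
    u, v, w = K_dst @ ray
    return np.array([u / w, v / w])
```

Doubling the focal length, for instance, doubles a landmark's offset from the principal point, which is why labels collected with regular lenses cannot be used unchanged for wide-angle HMD cameras.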
Citations: 2
Evaluation on a Wheelchair Simulator Using Limited-Motion Patterns and Vection-Inducing Movies
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8797726
Akihiro Miyata, Hironobu Uno, Kenro Go
Existing virtual reality (VR) based wheelchair simulators have difficulty providing both visual and motion feedback at low cost. To address this issue, we propose a VR-based wheelchair simulator using a combination of motions attainable by an electric-powered wheelchair and vection-inducing movies displayed on a head-mounted display. This approach enables the user to have a richer simulation experience, because the scenes of the movie change as if the wheelchair performs motions that are not actually performable. We developed a proof of concept using only consumer products and conducted evaluation tasks, confirming that our approach can provide a richer experience for barrier simulations.
Citations: 4
Grasping objects in immersive Virtual Reality
Pub Date : 2019-03-23 DOI: 10.1109/VR.2019.8798155
Manuela Chessa, Guido Maiello, Lina K. Klein, Vivian C. Paulun, F. Solari
Grasping is one of the fundamental actions we perform to interact with objects in real environments, and in the real world we rarely experience difficulty picking up objects. Grasping plays a fundamental role for interactive virtual reality (VR) systems that are increasingly employed not only for recreational purposes, but also for training in industrial contexts, in medical tasks, and for rehabilitation protocols. To ensure the effectiveness of such VR applications, we must understand whether the same grasping behaviors and strategies employed in the real world are adopted when interacting with objects in VR. To this aim, we replicated in VR an experimental paradigm employed to investigate grasping behavior in the real world. We tracked participants' forefinger and thumb as they picked up, in a VR environment, unfamiliar objects presented at different orientations, and exhibiting the same physics behavior of their real counterparts. We compared grasping behavior within and across participants, in VR and in the corresponding real world situation. Our findings highlight the similarities and differences in grasping behavior in real and virtual environments.
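A standard measure derived from tracked thumb and forefinger positions in such paradigms is grip aperture. A minimal sketch of how the trajectories could be summarized per trial (illustrative only, not the authors' analysis code):

```python
import numpy as np

def grip_aperture(thumb, index):
    """Instantaneous thumb-forefinger Euclidean distance, per frame."""
    return np.linalg.norm(np.asarray(thumb) - np.asarray(index), axis=-1)

def max_grip_aperture(thumb_traj, index_traj):
    """Peak aperture over a trial and the frame at which it occurs,
    a common summary for comparing real and virtual grasps."""
    d = grip_aperture(thumb_traj, index_traj)
    return float(d.max()), int(d.argmax())
```

Comparing peak apertures (and their timing) across real and VR conditions is one concrete way the similarities and differences mentioned in the abstract could be quantified.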
Citations: 17