
2015 IEEE Virtual Reality (VR): Latest Publications

Investigating the impact of perturbed visual and proprioceptive information in near-field immersive virtual environment
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223350
Elham Ebrahimi, Bliss M. Altenhoff, C. Pagano, Sabarish V. Babu, J. A. Jones
We report the results of an empirical evaluation examining the carryover effects of calibration to one of three perturbations of visual and proprioceptive feedback: i) a Minus condition (-20% gain), in which a visual stylus appeared at 80% of the distance of a physical stylus; ii) a Neutral condition (0% gain), in which the visual stylus was co-located with the physical stylus; and iii) a Plus condition (+20% gain), in which the visual stylus appeared at 120% of the distance of the physical stylus. Feedback was shown to calibrate distance judgments quickly within an immersive virtual environment (IVE), with estimates being farthest after calibrating to visual information appearing nearer (Minus condition) and nearest after calibrating to visual information appearing farther (Plus condition).
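The three gain conditions collapse to a single scaling rule: the rendered stylus distance is the physical stylus distance scaled by (1 + gain). A minimal sketch (the function name and example distances are illustrative, not from the paper):

```python
# Sketch of the gain perturbation described above (names are illustrative,
# not from the paper): the visual stylus is rendered at the physical stylus
# distance scaled by (1 + gain).
def visual_distance(physical_cm: float, gain: float) -> float:
    """gain = -0.20 -> Minus condition (visual stylus at 80% of physical)
    gain =  0.00 -> Neutral condition (co-located)
    gain = +0.20 -> Plus condition (visual stylus at 120% of physical)
    """
    return physical_cm * (1.0 + gain)
```

For a stylus held at 50 cm, the Minus condition would render it near 40 cm and the Plus condition near 60 cm; this visual-proprioceptive mismatch is what participants calibrated to.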
Citations: 0
Virtual reality training of manual procedures in the nuclear sector
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223455
J. Cíger, Mehdi Sbaouni, Christian Segot
A glove box simulator is presented for safe and cost-effective training of operators in the nuclear industry. The focus is on learning the proper safety procedures and correct maintenance of a glove box in the presence of potentially radioactive substances. Two common situations are explored: an operator working in the glove box, and an operator performing maintenance on it.
Citations: 4
A real-time welding training system base on virtual reality
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223419
Benkai Xie, Qiang Zhou, Liang Yu
Onew360 is a training simulator for gas metal arc welding (GMAW). The system comprises standard welding hardware (helmet, gun, work-piece), a PC, a head-mounted display, a tracking system for both the torch and the user's head, and external audio speakers. The tracking module uses single-camera vision measurement to compute the positions of the welding gun and helmet, and the simulation module uses a simple geometric model to generate the weld geometry from the orientation and speed of the welding torch. Together, these components produce a realistic, interactive, and immersive welding experience.
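A hypothetical sketch of the "simple model" idea for weld geometry (the function, parameter names, and the proportionality constant are assumptions, not taken from Onew360): the simulated bead narrows as the torch travels faster, since less wire is deposited per unit length of the seam.

```python
# Hypothetical simple model of weld bead size vs. torch travel speed
# (constants and names are assumptions, not from the paper).
def bead_width_mm(wire_feed_mm_s: float, travel_speed_mm_s: float,
                  k: float = 1.0) -> float:
    if travel_speed_mm_s <= 0.0:
        raise ValueError("travel speed must be positive")
    # Material deposited per mm of seam, scaled into a width by k.
    return k * wire_feed_mm_s / travel_speed_mm_s
```

Driving such a model with the tracked torch pose each frame is one plausible way to update the rendered seam in real time.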
Citations: 12
3DTouch: A wearable 3D input device for 3D applications
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223451
Anh M Nguyen, Amy Banic
3D applications appear in every corner of life in the current technology era. There is a need for a ubiquitous 3D input device that works across many different platforms, from head-mounted displays (HMDs) to mobile touch devices, 3DTVs, and even Cave Automatic Virtual Environments. We present 3DTouch [1], a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap left by the lack of a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This video presents a working prototype of our solution, which is described in detail in the paper [1]. Our approach relies on a relative positioning technique using an optical laser sensor (OPS) and a 9-DOF inertial measurement unit (IMU). The device employs touch input for the benefits of passive haptic feedback and movement stability. Moreover, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. We envision that modular solutions like 3DTouch open up a whole new design space for interaction techniques to further develop.
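The relative-positioning idea (optical displacement fused with IMU orientation) can be sketched in a simplified 2-D form. The paper fuses a full 9-DOF IMU; here only yaw is used, and all names are assumptions:

```python
import math

# Simplified 2-D sketch of relative positioning (not the 3DTouch code):
# rotate the optical sensor's frame-local displacement into the world
# frame using the IMU's heading, then accumulate it into a cursor position.
def integrate_position(pos, d_optical, yaw_rad):
    """pos: current (x, y); d_optical: (dx, dy) in the sensor frame;
    yaw_rad: device heading reported by the IMU."""
    dx, dy = d_optical
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    # Standard 2-D rotation of the displacement vector.
    return (pos[0] + c * dx - s * dy, pos[1] + s * dx + c * dy)
```

Calling this once per sensor report dead-reckons the cursor; drift correction and the third axis are omitted for brevity.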
Citations: 26
Applying latency to half of a self-avatar's body to change real walking patterns
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223329
G. Samaraweera, A. Perdomo, J. Quarles
Latency (i.e., time delay) in a virtual environment is known to disrupt user performance and presence and to induce simulator sickness. However, can we harness the effects of experiencing latency to benefit virtual rehabilitation technologies? We investigate this question with an experiment aimed at altering gait by applying latency to one side of a self-avatar viewed in a front-facing virtual mirror. This work was motivated by previous findings in which participants altered their gait as latency increased, even when they failed to notice latencies as high as 150 ms or 225 ms. In this paper, we present the results of a study that applies this novel technique to healthy persons (i.e., to demonstrate the feasibility of the approach before applying it to persons with disabilities). The results indicate a tendency toward asymmetric gait in persons with symmetric gait when latency is applied to one side of their self-avatar. Thus, the study shows the potential of applying one-sided latency to a self-avatar, which could be used to develop asymmetric gait rehabilitation techniques.
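The mechanism, rendering one side of the avatar with added latency, can be sketched as a per-side frame buffer. A minimal sketch (class and parameter names are assumptions; at 60 Hz the paper's 150 ms latency corresponds to 9 frames):

```python
from collections import deque

# Sketch (an assumption about the mechanism, not the authors' code): buffer
# one side's tracking poses and release them N frames late, while the other
# side is rendered live.
class OneSidedDelay:
    def __init__(self, latency_ms: float, frame_ms: float = 1000 / 60):
        self.buffer = deque()
        self.n_frames = max(1, round(latency_ms / frame_ms))

    def update(self, left_pose, right_pose):
        """Return (live left pose, right pose delayed by n_frames)."""
        self.buffer.append(right_pose)
        if len(self.buffer) > self.n_frames:
            return left_pose, self.buffer.popleft()
        # Warm-up: not enough history yet, show the oldest available pose.
        return left_pose, self.buffer[0]
```

Feeding this once per rendered frame yields the asymmetric visual feedback the study relies on.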
Citations: 14
Optical see-through head up displays' effect on depth judgments of real world objects
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223465
Missie Smith, Nadejda Doutcheva, Joseph L. Gabbard, G. Burnett
Recent research indicates that users consistently underestimate depth judgments of Augmented Reality (AR) graphics when viewed through optical see-through displays. However, to our knowledge, little work has examined how AR graphics may affect depth judgments of real-world objects that have been overlaid or annotated with AR graphics. This study presents a preliminary analysis of whether AR graphics have directional effects on users' depth perception of real-world objects, as might be experienced in vehicle driving scenarios (e.g., viewed via an optical see-through head-up display, or HUD). Twenty-four participants were asked to judge the depth of a physical pedestrian proxy figure moving towards them at a constant 1 meter/second. Participants were shown an initial target location at a distance varying from 11 to 20 m and were then asked to press a button when the moving target was perceived to be at the previously specified location. Each participant experienced three display conditions: no AR visual display (control), a conformal AR graphic overlaid on the pedestrian via a HUD, and the same graphic presented on a tablet physically located on the pedestrian. Participants completed 10 trials (one for each target distance between 11 and 20 m inclusive) per display condition, for a total of 30 trials per participant. The judged distance from the correct location was recorded, and after each trial, participants' confidence in determining the correct distance was captured. Across all conditions, participants underestimated the distance of the physical object, consistent with existing literature.
Greater variability was observed in the accuracy of distance judgments under the AR HUD condition relative to the other two display conditions. In addition, participant confidence levels were considerably lower in the AR HUD condition.
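The trial arithmetic can be sketched from what the abstract specifies (the 1 m/s approach speed and the 11-20 m target range); the starting distance variable and all names below are assumptions, not from the paper:

```python
# Hypothetical sketch of a trial's signed error (not the authors' code):
# the proxy approaches at a constant speed, so its actual distance at the
# button press is the start distance minus speed * elapsed time. A negative
# error means the press came after the proxy had passed the target location.
def press_error_m(start_m: float, target_m: float, press_time_s: float,
                  speed_m_s: float = 1.0) -> float:
    actual_at_press = start_m - speed_m_s * press_time_s
    return actual_at_press - target_m
```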
Citations: 14
Low cost virtual reality for medical training
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223437
A. Mathur
This demo depicts a low-cost virtual reality set-up that may be used for medical training and instruction. Using devices such as the Oculus Rift and Razer Hydra, an immersive experience, including hand interactivity, can be provided. Software running on a PC integrates these devices and presents an interactive, immersive training environment in which trainees are asked to perform a mix of simple and complex tasks, ranging from identifying certain organs to performing an actual incision. Trainees learn by doing, albeit in the virtual world. The system's components are relatively affordable and simple to use, making such a set-up easy to deploy.
Citations: 50
Three-dimensional VR interaction using the movement of a mobile display
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223446
Lili Wang, T. Komuro
In this study, we propose a VR system that allows various types of interaction with virtual objects using an autostereoscopic mobile display and an accelerometer. The system obtains orientation and motion information from the accelerometer attached to the mobile display and applies it to the motion of virtual objects. It can present 3D images with motion parallax by estimating the position of the user's viewpoint and displaying properly projected images. Furthermore, our method connects the real space and the virtual space seamlessly through the mobile display by choosing the coordinate system so that one of the horizontal surfaces in the virtual space coincides with the display surface. To show the effectiveness of this concept, we implemented an application that simulates cooking by treating the mobile display as a frying pan.
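The motion-parallax rendering can be illustrated with a pinhole projection onto the display plane. A sketch under assumed geometry (display plane at z = 0, eye on the +z side; not the authors' implementation):

```python
# Sketch (assumed geometry, not the paper's code): project a virtual point
# onto the display plane from the estimated eye position. Moving the eye
# shifts the on-screen position, which is what produces motion parallax.
def project_to_display(point, eye):
    """point, eye: (x, y, z) with the display plane at z = 0.
    Returns the on-screen (x, y) where the eye-to-point ray crosses z = 0."""
    px, py, pz = point
    ex, ey, ez = eye
    if ez == pz:
        raise ValueError("eye and point must lie at different depths")
    t = ez / (ez - pz)  # ray parameter where the ray crosses z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))
```

Re-projecting the scene every frame from the newly estimated viewpoint makes virtual objects appear fixed behind the display surface.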
Citations: 0
Wings and flying in immersive VR — Controller type, sound effects and experienced ownership and agency
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223405
Erik Sikström, Amalia de Götzen, S. Serafin
An experiment investigated the subjective experiences of ownership and agency over a pair of virtual wings attached to a motion-controlled avatar in an immersive virtual reality setup. A between-groups comparison examined two ways of controlling the wing movement and flight ability: one group achieved wing motion and flight using a hand-held video game controller, the other by moving their shoulders. Through four repetitions of a flight task with varying amounts of self-produced audio feedback (from the movement of the virtual limbs), the subjects evaluated their experienced embodiment of the wings on a body ownership and agency questionnaire. The results show significant differences between the controllers on some questionnaire items, and that adding self-produced sounds to the avatar slightly changed the subjects' evaluations.
Citations: 12
flapAssist: How the integration of VR and visualization tools fosters the factory planning process
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223355
Sascha Gebhardt, S. Pick, H. Voet, J. Utsch, T. A. Khawli, U. Eppelt, R. Reinhard, Chris Buescher, B. Hentschel, T. Kuhlen
Virtual Reality (VR) systems are of growing importance for decision support in the context of the digital factory, especially factory layout planning. Current solutions focus either on virtual walkthroughs or on the visualization of more abstract information; a solution that provides both does not currently exist. To close this gap, we present a holistic VR application called the Factory Layout Planning Assistant (flapAssist). It is meant to serve as a platform for planning factory layouts while also providing a wide range of analysis features. By scaling from desktops to CAVEs and providing a link to a central integration platform, flapAssist integrates well into established factory planning workflows.
Citations: 19