Collaborative table-top VR display for neurosurgical planning
R. Eagleson, P. Wucherer, P. Stefan, Yaroslav Duschko, S. Ribaupierre, C. Vollmar, P. Fallavollita, Nassir Navab
We present a prototype of a system in development for pre-operative planning. The proposed NeuroTable uses a combination of traditional rendering and novel visualization techniques to facilitate real-time collaboration between neurosurgeons during intervention planning. A set of multimodal 2D and 3D renderings conveys the relation between the region of interest and the surrounding anatomical structures. A haptic device is used for interaction with the NeuroTable, giving the neurosurgeons immersive control over the 3D cursor and navigation modes during their planning discussion. A pilot experimental study was conducted to assess the performance of users in targeting points within the preoperative 3D scan. Two clinicians then participated in an evaluation of the table while discussing and planning a case. Results indicate that the NeuroTable facilitated the discussion, and we report the speed and accuracy achieved when specifying entry and target points.
{"title":"Collaborative table-top VR display for neurosurgical planning","authors":"R. Eagleson, P. Wucherer, P. Stefan, Yaroslav Duschko, S. Ribaupierre, C. Vollmar, P. Fallavollita, Nassir Navab","doi":"10.1109/VR.2015.7223349","DOIUrl":"https://doi.org/10.1109/VR.2015.7223349","url":null,"abstract":"We present a prototype of a system in development for pre-operative planning. The proposed NeuroTable uses a combination of traditional rendering and novel visualization techniques rendered to facilitate real-time collaboration between neurosurgeons during intervention planning. A set of multimodal 2D and 3D renderings convey the relation between the region of interest and the surrounding anatomical structures. A haptic device is used for interaction with the NeuroTable to facilitate immersive control over the 3D cursor and navigation modes for the neurosurgeons during their discourse of planning. A pilot experimental study was conducted to assess the performance of users in targeting points within the preoperative 3D scan. Then, two clinicians participated in the evaluation of the table in discussing and planning a case. Results indicate that the NeuroTable facilitated the discourse and we discuss the results of the speed and accuracy for the specification of entry and target points.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127058302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elastic-Arm: Human-scale passive haptic feedback for augmenting interaction and perception in virtual environments
Merwan Achibet, Adrien Girard, Anthony Talvas, M. Marchal, A. Lécuyer
Haptic feedback is known to improve 3D interaction in virtual environments, but current haptic interfaces remain complex and tailored to desktop interaction. In this paper, we introduce the “Elastic-Arm”, a novel approach for incorporating haptic feedback into immersive virtual environments in a simple and cost-effective way. The Elastic-Arm is based on a body-mounted elastic armature that links the user's hand to her shoulder. As a result, a progressive resistance force is perceived when extending the arm. This haptic feedback can be combined with various 3D interaction techniques, and we illustrate the possibilities offered by our system through several use cases based on well-known examples such as the Bubble technique, Redirected Touching, and pseudo-haptics. These illustrative use cases provide users with haptic feedback during selection and navigation tasks and also enhance their perception of the virtual environment. Taken together, these examples suggest that the Elastic-Arm can be transposed to numerous applications and 3D interaction metaphors in which mobile haptic feedback is beneficial. It could also pave the way for the design of new interaction techniques based on “human-scale” egocentric haptic feedback.
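The abstract reports a progressive resistance force from the shoulder-to-hand elastic link but gives no force model. As a rough illustration only, the Python sketch below approximates that behavior with a linear (Hookean) band; the stiffness and rest-length values are invented, not taken from the paper.

```python
import numpy as np

STIFFNESS = 40.0    # N/m -- assumed value, not reported by the authors
REST_LENGTH = 0.25  # m  -- assumed slack length of the elastic band

def elastic_resistance(hand, shoulder):
    """Resistance force on the hand from a shoulder-anchored elastic band,
    modeled here as Hookean: zero while slack, growing linearly with stretch."""
    offset = hand - shoulder
    length = np.linalg.norm(offset)
    if length == 0.0:
        return np.zeros(3)
    stretch = max(length - REST_LENGTH, 0.0)   # no force until the band is taut
    direction = offset / length
    return -STIFFNESS * stretch * direction    # pulls the hand back, in newtons

# Hand extended 0.6 m forward of the shoulder -> a 14 N pull backwards.
print(elastic_resistance(np.array([0.6, 0.0, 0.0]), np.zeros(3)))
```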
{"title":"Elastic-Arm: Human-scale passive haptic feedback for augmenting interaction and perception in virtual environments","authors":"Merwan Achibet, Adrien Girard, Anthony Talvas, M. Marchal, A. Lécuyer","doi":"10.1109/VR.2015.7223325","DOIUrl":"https://doi.org/10.1109/VR.2015.7223325","url":null,"abstract":"Haptic feedback is known to improve 3D interaction in virtual environments but current haptic interfaces remain complex and tailored to desktop interaction. In this paper, we introduce the “Elastic-Arm”, a novel approach for incorporating haptic feedback in immersive virtual environments in a simple and cost-effective way. The Elastic-Arm is based on a body-mounted elastic armature that links the user's hand to her shoulder. As a result, a progressive resistance force is perceived when extending the arm. This haptic feedback can be incorporated with various 3D interaction techniques and we illustrate the possibilities offered by our system through several use cases based on well-known examples such as the Bubble technique, Redirected Touching and pseudo-haptics. These illustrative use cases provide users with haptic feedback during selection and navigation tasks but they also enhance their perception of the virtual environment. Taken together, these examples suggest that the Elastic-Arm can be transposed in numerous applications and with various 3D interaction metaphors in which a mobile hap-tic feedback can be beneficial. It could also pave the way for the design of new interaction techniques based on “human-scale” egocentric haptic feedback.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126998262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flexible, dynamic VR simulation of a future river lock facilitates prevention through design in occupational safety and health
P. Nickel, Eugen Pröger, Andy Lungfiel, Rolf Kergel
Industry and the service sector have a growing interest in prevention through design with regard to machinery safety, in order to avoid redesign after construction. A VR planning model of a future river lock was developed and investigated as support for risk assessment. Inspections using the model at 1:1 scale promoted safety analyses of the operational concept, identified hazards and planning flaws, and triggered measures for reducing risks at the future lock. VR simulation contributed to occupational safety and health early in machinery design while saving effort and time.
{"title":"Flexible, dynamic VR simulation of a future river lock facilitates prevention through design in occupational safety and health","authors":"P. Nickel, Eugen Pröger, Andy Lungfiel, Rolf Kergel","doi":"10.1109/VR.2015.7223457","DOIUrl":"https://doi.org/10.1109/VR.2015.7223457","url":null,"abstract":"Industry and services have a growing interest in prevention through design with regard to safety of machinery to avoid redesign after construction. A VR planning model of a future river lock has been developed and investigated for risk assessment support. Inspections using the model in 1:1 scale promoted safety analyses of the operational concept, identified hazards and planning flaws, and triggered measures for reducing risks of the future lock. VR simulation contributed to occupational safety and health early in machinery design while saving effort and time.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116885701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effects of olfaction on training transfer for an assembly task
Alec G. Moore, Nicolas S. Herrera, Tyler Hurst, Ryan P. McMahan, Sandra Poeschl
Context-dependent memory studies have indicated that olfaction, the sense of smell, has a special odor memory that can significantly improve recall in some cases. Virtual reality (VR), which has been investigated as a training tool, could feasibly benefit from odor memory by incorporating olfactory stimuli. There have been a few studies on this concept for semantic learning, but not for procedural training. To address this gap in knowledge, we investigated the effects of olfaction on the transfer of knowledge from training to next-day execution for building a complex LEGO jet-plane model. Our results indicate that the pleasantness of an odor significantly affects training transfer more than whether the encoding and recall contexts match.
{"title":"The effects of olfaction on training transfer for an assembly task","authors":"Alec G. Moore, Nicolas S. Herrera, Tyler Hurst, Ryan P. McMahan, Sandra Poeschl","doi":"10.1109/VR.2015.7223383","DOIUrl":"https://doi.org/10.1109/VR.2015.7223383","url":null,"abstract":"Context-dependent memory studies have indicated that olfaction, the sense of smell, has a special odor memory that can significantly improve recall in some cases. Virtual reality (VR), which has been investigated as a training tool, could feasibly benefit from odor memory by incorporating olfactory stimuli. There have been a few studies on this concept for semantic learning, but not for procedural training. To address this gap in knowledge, we investigated the effects of olfaction on the transfer of knowledge from training to next-day execution for building a complex LEGO jet-plane model. Our results indicate that the pleasantness of an odor significantly affects training transfer more than whether the encoding and recall contexts match.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122131989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-layer approach of interactive path planning for assisted manipulation in virtual reality
Simon Cailhol, P. Fillatreau, J. Fourquet, Yingshen Zhao
This work considers Virtual Reality (VR) applications dealing with object manipulation (such as industrial product assembly, disassembly, or maintenance simulation). For such applications, the operator performing the simulation can be assisted by path-planning techniques from robotics research. A novel automatic path planner involving geometrical, topological, and semantic information about the environment is proposed to guide the user through a haptic device. The interaction allows, on the one hand, the automatic path planner to assist the human operator and, on the other hand, the human operator to reset the whole planning process by suggesting a better-suited path. Control-sharing techniques are used to improve the ergonomics of assisted manipulation by dynamically balancing the automatic planner's authority according to the operator's involvement in the task, and by predicting the user's intent so it can be integrated as early as possible in the planning process.
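The abstract does not spell out the arbitration rule. As a minimal sketch of one common control-sharing scheme, the Python snippet below blends planner and operator guidance linearly; the involvement estimate and all names are ours, and the paper's multi-layer planner is certainly richer than this.

```python
import numpy as np

def blend_guidance(planner_force, operator_force, involvement):
    """Linearly share authority between the automatic planner and the operator.

    involvement: assumed engagement estimate in [0, 1], e.g. derived from the
    magnitude of recent haptic input; planner authority shrinks as it grows.
    """
    alpha = 1.0 - float(np.clip(involvement, 0.0, 1.0))
    return alpha * planner_force + (1.0 - alpha) * operator_force

# A strongly engaged operator (0.8) mostly overrides the planner's guidance.
f = blend_guidance(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), 0.8)
print(f)  # [0.2, 0.8, 0.0]
```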
{"title":"A multi-layer approach of interactive path planning for assisted manipulation in virtual reality","authors":"Simon Cailhol, P. Fillatreau, J. Fourquet, Yingshen Zhao","doi":"10.1109/VR.2015.7223344","DOIUrl":"https://doi.org/10.1109/VR.2015.7223344","url":null,"abstract":"This work considers Virtual Reality (VR) applications dealing with objects manipulation (such as industrial product assembly, disassembly or maintenance simulation). For such applications, the operator performing the simulation can be assisted by path planning techniques from the robotics research field. A novel automatic path planner involving geometrical, topological and semantic information of the environment is proposed for the guidance of the user through a haptic device. The interaction allows on one hand, the automatic path planner providing assistance to the human operator, and on the other hand, the human operator to reset the whole planning process suggesting a better suited path. Control sharing techniques are used to improve the assisted manipulation ergonomics by dynamically balancing the automatic path planner authority according to the operator involvement in the task, and by predicting user's intent to integrate it as early as possible in the planning process.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"292 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123075745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual proxemics: Locomotion in the presence of obstacles in large immersive projection environments
F. Argelaguet, A. Olivier, G. Bruder, J. Pettré, A. Lécuyer
In this paper, we investigate obstacle-avoidance behavior during real walking in a large immersive projection setup. We analyze the walking behavior of users when avoiding real and virtual static obstacles. In order to generalize our study, we consider both anthropomorphic and inanimate objects, each having a virtual and a real counterpart. The results showed that users exhibit different locomotion behaviors in the presence of real versus virtual obstacles, and in the presence of anthropomorphic versus inanimate objects. Specifically, the results showed a decrease in walking speed as well as an increase in clearance distance (i.e., the minimal distance between the walker and the obstacle) when facing virtual obstacles compared to real ones. Moreover, our results suggest that users act differently depending on their perception of the obstacle: they keep more distance when the obstacle is anthropomorphic than when it is an inanimate object, and when the anthropomorphic obstacle is seen in profile rather than facing them. We discuss implications for future large shared immersive projection spaces.
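The clearance metric defined in the abstract (the minimal walker-obstacle distance) is straightforward to compute from logged trajectories; the sketch below shows one way, with array layouts that are our assumption rather than the authors' data format.

```python
import numpy as np

def clearance_distance(trajectory, obstacle_outline):
    """Minimal distance between a walker and an obstacle over one trial.

    trajectory:       (T, 2) walker positions in the ground plane, in meters
    obstacle_outline: (K, 2) points sampling the obstacle's footprint
    """
    # (T, K) matrix of all walker-sample-to-outline-point distances
    dists = np.linalg.norm(trajectory[:, None, :] - obstacle_outline[None, :, :],
                           axis=-1)
    return dists.min()

# A straight path passing 0.5 m to the side of a point obstacle at the origin.
path = np.stack([np.linspace(-2.0, 2.0, 100), np.full(100, 0.5)], axis=1)
print(clearance_distance(path, np.array([[0.0, 0.0]])))  # ~0.5
```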
{"title":"Virtual proxemics: Locomotion in the presence of obstacles in large immersive projection environments","authors":"F. Argelaguet, A. Olivier, G. Bruder, J. Pettré, A. Lécuyer","doi":"10.1109/VR.2015.7223327","DOIUrl":"https://doi.org/10.1109/VR.2015.7223327","url":null,"abstract":"In this paper, we investigate obstacle avoidance behavior during real walking in a large immersive projection setup. We analyze the walking behavior of users when avoiding real and virtual static obstacles. In order to generalize our study, we consider both anthropomorphic and inanimate objects, each having his virtual and real counterpart. The results showed that users exhibit different locomotion behaviors in the presence of real and virtual obstacles, and in the presence of anthropomorphic and inanimate objects. Precisely, the results showed a decrease of walking speed as well as an increase of the clearance distance (i. e., the minimal distance between the walker and the obstacle) when facing virtual obstacles compared to real ones. Moreover, our results suggest that users act differently due to their perception of the obstacle: users keep more distance when the obstacle is anthropomorphic compared to an inanimate object and when the orientation of anthropomorphic obstacle is from the profile compared to a front position. We discuss implications on future large shared immersive projection spaces.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128345403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality for maintenance application on a mobile platform
N. S. Lakshmprabha, S. Kasderidis, Panagiotis G. Mousouliotis, L. Petrou, O. Beltramello
Pose estimation is a major requirement for any augmented reality (AR) application. Cameras and inertial measurement units (IMUs) have been used for pose estimation not only in AR but also in many other fields, yet the accuracy and pose-update rate required by AR applications are more demanding than in any other field. In certain AR applications (maintenance, for example), a small change in pose can cause a huge deviation in the rendering of the virtual content. This misleads the user about an object's location and can display incorrect information. Further, the large amount of processing power required for camera-based pose estimation results in a bulky system, which reduces its mobility and ergonomics. This demonstration shows fast pose estimation using a camera and an IMU on a mobile platform for augmented reality in a maintenance application.
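The abstract names the sensors but not the fusion scheme. A classic lightweight choice on mobile hardware is a complementary filter, where fast but drift-prone gyro integration is periodically pulled toward the slower vision estimate; the single-axis sketch below illustrates that idea only and is not the demonstrated system.

```python
def fuse_yaw(yaw_est, gyro_rate, dt, camera_yaw=None, gain=0.05):
    """One complementary-filter step for a single (yaw) angle, standing in
    for full 6-DoF fusion. Gyro integration is fast but drifts; the camera
    estimate, when a frame is available, corrects the drift."""
    yaw_est += gyro_rate * dt                   # high-rate IMU prediction
    if camera_yaw is not None:                  # low-rate vision correction
        yaw_est += gain * (camera_yaw - yaw_est)
    return yaw_est

# 100 Hz gyro samples with a vision fix on every 10th sample.
yaw = 0.0
for k in range(100):
    fix = 0.5 if k % 10 == 0 else None
    yaw = fuse_yaw(yaw, gyro_rate=0.01, dt=0.01, camera_yaw=fix)
print(yaw)
```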
{"title":"Augmented reality for maintenance application on a mobile platform","authors":"N. S. Lakshmprabha, S. Kasderidis, Panagiotis G. Mousouliotis, L. Petrou, O. Beltramello","doi":"10.1109/VR.2015.7223442","DOIUrl":"https://doi.org/10.1109/VR.2015.7223442","url":null,"abstract":"Pose estimation is a major requirement for any augmented reality (AR) application. Cameras and inertial measurement units (IMUs) have been used for pose estimation not only in AR but also in many other fields. The level of accuracy and pose update required in an AR application is more demanding than in any other field. In certain AR applications, (maintenance for example) a small change in pose can cause a huge deviation in the rendering of the virtual content. This misleads the user in terms of an object location and can display incorrect information. Further, the huge amount of processing power required for the camera based pose estimation results in a bulky system. This reduces the mobility and ergonomics of the system. This demonstration shows a fast pose estimation using a camera and an IMU on a mobile platform for augmented reality in a maintenance application.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117244490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of on-site virtual time machine for mobile devices
Jun'ichi Nakano, Takuji Narumi, T. Tanikawa, M. Hirose
In this study, we developed a system for mobile devices designed to provide a virtual experience of past scenery depicted in old photographs by superimposing them on landscapes in video see-through frames. A user captures a photograph of a landscape to be used as a keyframe and enters correspondence points between the new and old photos. The old photograph is deformed by a projective transform whose homography is computed from the correspondence points. We then superimpose the old photograph onto video see-through frames of the current landscape. To achieve real-time and robust superimposition on mobile devices, both motion-sensor-based and camera-image-keypoint-tracking-based pose information are used to track the device's camera pose.
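The homography-based warp the abstract describes maps directly onto standard computer-vision calls. A minimal sketch using OpenCV is shown below; function and variable names are ours, and the paper's mobile implementation is not reproduced here.

```python
import cv2
import numpy as np

def overlay_old_photo(old_photo, frame, pts_old, pts_frame):
    """Warp an old photograph into the current video see-through frame.

    pts_old, pts_frame: >= 4 corresponding pixel coordinates entered by the
    user in the old photo and in the captured keyframe, as (N, 2) arrays.
    """
    # Homography from the user-supplied correspondences (RANSAC for robustness)
    H, _ = cv2.findHomography(np.float32(pts_old), np.float32(pts_frame),
                              cv2.RANSAC)
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(old_photo, H, (w, h))
    # Warp a mask of the photo's extent so only covered pixels are replaced.
    mask = cv2.warpPerspective(np.full(old_photo.shape[:2], 255, np.uint8),
                               H, (w, h))
    frame[mask > 0] = warped[mask > 0]
    return frame
```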
{"title":"Implementation of on-site virtual time machine for mobile devices","authors":"Jun'ichi Nakano, Takuji Narumi, T. Tanikawa, M. Hirose","doi":"10.1109/VR.2015.7223387","DOIUrl":"https://doi.org/10.1109/VR.2015.7223387","url":null,"abstract":"In this study, we developed a system for mobile devices designed to provide a virtual experience of past scenery depicted in old photographs by superimposing them on landscapes in video see-through frames. A user is asked to capture a photograph of a landscape to be used as a keyframe and enter correspondence points between the new and old photos. The old photograph is deformed by projective transform with homography computed based on correspondence points. We then superimpose of the old photograph onto video see-through frames of current landscape. To achieve real-time and robust superimposition on mobile devices, both motion-sensor-based pose information and camera-image-keypoint-tracking-based pose information is used for device's camera pose tracking.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116345985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nested immersion: Describing and classifying augmented virtual reality
William E. Marsh, F. Mérienne
We present a system, intended for automotive design review use cases, that incorporates a tracked tablet in a CAVE, where both the tablet and the CAVE provide different views and interaction possibilities within the same virtual scene. At its core, this idea is not novel. However, the literature reveals few examples of this paradigm in which virtual information is presented on a second physical device to augment an immersive virtual environment. Similarly, it is unclear where the system should be positioned within existing augmented/mixed/virtual reality taxonomies. We argue that interactions occur within a nesting of virtual and physical contexts, and that formalizing these relationships is important when attempting to understand perceptual issues. The goal of this paper is thus to describe the new system by proposing a scheme to formally identify sources of bias and then adapting an existing taxonomy to classify such systems.
{"title":"Nested immersion: Describing and classifying augmented virtual reality","authors":"William E. Marsh, F. Mérienne","doi":"10.1109/VR.2015.7223466","DOIUrl":"https://doi.org/10.1109/VR.2015.7223466","url":null,"abstract":"We present a system, intended for automotive design review use cases, that incorporates a tracked tablet in a CAVE, where both the tablet and the CAVE provide different views and interaction possibilities within the same virtual scene. At its core, this idea is not novel. However, the literature reveals few examples of this paradigm in which virtual information is presented on a second physical device to augment an immersive virtual environment. Similarly, it is unclear where the system should be positioned within existing augmented/mixed/virtual reality taxonomies. We argue that interactions occur within a nesting of virtual and physical contexts, and that formalizing these relationships is important when attempting to understand perceptual issues. The goal of this paper is, thus, to describe the new system by proposing a scheme to formally identify sources of bias and then adapting an existing taxonomy to classify such systems.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"319 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115958017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Portable-Spheree: A portable 3D perspective-corrected interactive spherical scalable display
M. Cabral, F. Ferreira, Olavo Belloc, G. Miller, C. Kurashima, R. Lopes, I. Stavness, J. C. A. Silva, S. Fels, M. Zuffo
In this poster we present Portable-Spheree, an interactive spherical rear-projected 3D content display that provides perspective-corrected views according to the user's head position, supplying parallax, shading, and occlusion depth cues. Portable-Spheree is an evolution of the Spheree, developed in a smaller form factor using more projectors and a dark translucent screen with increased contrast. We present preliminary results for this new configuration, as well as applications with spatial interaction that might benefit from the new form factor.
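The poster does not detail the rendering pipeline, and a spherical screen additionally needs per-projector calibration and warping. Still, the core of head-coupled perspective correction is the generalized off-axis projection; the sketch below computes it for a planar proxy screen, purely as an illustration of the principle, with all names ours.

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Off-axis frustum for a screen with corners pa (lower-left),
    pb (lower-right), pc (upper-left), viewed from eye position pe.
    The full view transform would also rotate world space into the
    screen's basis and translate the eye to the origin."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)         # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)         # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal
    va, vb, vc = pa - pe, pb - pe, pc - pe           # eye-to-corner vectors
    d = -np.dot(va, vn)                              # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return np.array([                                # glFrustum-style matrix
        [2*near/(r-l), 0,            (r+l)/(r-l),            0],
        [0,            2*near/(t-b), (t+b)/(t-b),            0],
        [0,            0,            -(far+near)/(far-near), -2*far*near/(far-near)],
        [0,            0,            -1,                     0]])

# Eye 0.6 m in front of a 0.4 m-wide, 0.3 m-tall screen, offset to the left.
P = off_axis_projection(np.array([-0.2, -0.15, 0.0]), np.array([0.2, -0.15, 0.0]),
                        np.array([-0.2, 0.15, 0.0]), np.array([-0.1, 0.0, 0.6]),
                        0.1, 100.0)
```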
{"title":"Portable-Spheree: A portable 3D perspective-corrected interactive spherical scalable display","authors":"M. Cabral, F. Ferreira, Olavo Belloc, G. Miller, C. Kurashima, R. Lopes, I. Stavness, J. C. A. Silva, S. Fels, M. Zuffo","doi":"10.1109/VR.2015.7223343","DOIUrl":"https://doi.org/10.1109/VR.2015.7223343","url":null,"abstract":"In this poster we present Portable-Spheree, an interactive spherical rear-projected 3D-content-display that provides perspective-corrected views according to the user's head position, to provide parallax, shading and occlusion depth cues. Portable-Spheree is an evolution of the Spheree and it is developed in a smaller form factor, using more projectors and a dark-translucent screen with increased contrast. We present some preliminary results of this new configuration as well as applications with spatial interaction that might benefit from this new form factor.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128137394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}