"Towards a high fidelity simulation of the kidney biopsy procedure"
Gareth Henshall, S. Pop, Marc R. Edwards, L. A. Cenydd, N. John
DOI: 10.1109/VR.2015.7223360
We present work in progress on a novel virtual training environment for the kidney biopsy procedure. Our goal is to provide an affordable, high-fidelity simulation by integrating some of the latest off-the-shelf technology components. The range of forces encountered during this procedure has been recorded using a custom-designed force-sensitive glove and is applied within the simulation.
{"title":"Towards a high fidelity simulation of the kidney biopsy procedure","authors":"Gareth Henshall, S. Pop, Marc R. Edwards, L. A. Cenydd, N. John","doi":"10.1109/VR.2015.7223360","DOIUrl":"https://doi.org/10.1109/VR.2015.7223360","url":null,"abstract":"Work in progress for the development of a novel virtual training environment for training a kidney biopsy procedure is presented. Our goal is to provide an affordable high fidelity simulation through the integration of some of the latest off-the-shelf technology components. The range of forces that are encountered during this procedure have been recorded using a custom designed force sensitive glove and then applied within the simulation.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114900825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Presentation of virtual liquid by modeling vibration of a Japanese sake bottle"
S. Ikeno, Ryuta Okazaki, Taku Hachisu, H. Kajimoto
DOI: 10.1109/VR.2015.7223430
Visual, auditory, and tactile modalities are known to affect the experience of eating and drinking. One example is the "glug" sound and vibration produced by a Japanese sake bottle when liquid is poured. Our previous studies modeled the vibration waveform as the sum of two decaying sinusoidal waves with different frequencies. In this paper, to enrich the expression of various types of liquid, we add two new liquid properties, viscosity and the residual amount of liquid, both based on recorded data.
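The two-component model above lends itself to a compact illustration. Below is a minimal Python sketch of such a waveform, assuming illustrative amplitudes, frequencies, and decay rates; the paper's actual parameters are derived from recordings and are not reproduced here.

```python
import numpy as np

def glug_vibration(t, components):
    """Sum of exponentially decaying sinusoids, one per
    (amplitude, frequency_hz, decay_rate) tuple, evaluated at times t (s)."""
    wave = np.zeros_like(t)
    for amplitude, frequency_hz, decay_rate in components:
        wave += amplitude * np.exp(-decay_rate * t) * np.sin(2 * np.pi * frequency_hz * t)
    return wave

# Illustrative parameters only -- the paper fits its values to recordings.
t = np.linspace(0.0, 0.5, 22050)               # 0.5 s at 44.1 kHz
wave = glug_vibration(t, [(1.0, 40.0, 8.0),    # low-frequency "glug" component
                          (0.3, 220.0, 20.0)]) # higher-frequency component
```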
{"title":"Presentation of virtual liquid by modeling vibration of a Japanese sake bottle","authors":"S. Ikeno, Ryuta Okazaki, Taku Hachisu, H. Kajimoto","doi":"10.1109/VR.2015.7223430","DOIUrl":"https://doi.org/10.1109/VR.2015.7223430","url":null,"abstract":"It is known that visual, auditory, and tactile modalities affect the experiences of eating and drinking. One such example is the “glug” sound and vibration from a Japanese sake bottle when pouring liquid. Our previous studies have modeled the wave of the vibration by summation of two decaying sinusoidal waves with different frequencies. In this paper, to enrich expression of various types of liquid, we included two new properties of liquid: the viscosity and the residual amount of liquid, both based on recorded data.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"321 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116438942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Registration and projection method of tumor region projection for breast cancer surgery"
Motoko Kanegae, Jun Morita, S. Shimamura, Yuji Uema, Maiko Takahashi, M. Inami, T. Hayashida, M. Sugimoto
DOI: 10.1109/VR.2015.7223365
This paper introduces a registration and projection method for directly projecting the tumor region onto the patient to assist breast cancer surgery, based on the procedure of our collaborating doctor. We investigated the steps of that procedure and how they can be supported by tumor region projection. We propose a novel MRI acquisition protocol so that the MRI coordinates can be correlated with the patient in the real world. By calculating the transformation matrix between the MRI coordinates and the coordinates of markers placed on the patient, we register the acquired MRI data to the patient. Our registration and presentation method for the tumor region was then evaluated by medical doctors.
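The paper computes a transformation matrix from corresponding MRI and marker coordinates. A standard way to do this is SVD-based rigid registration (the Kabsch method); the sketch below shows that approach as one plausible realization, since the abstract does not name the exact algorithm.

```python
import numpy as np

def rigid_transform(mri_pts, marker_pts):
    """Least-squares rigid transform (R, t) mapping Nx3 mri_pts onto
    Nx3 marker_pts, via the SVD-based Kabsch method."""
    mri_c, marker_c = mri_pts.mean(axis=0), marker_pts.mean(axis=0)
    H = (mri_pts - mri_c).T @ (marker_pts - marker_c)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                           # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = marker_c - R @ mri_c
    return R, t                                        # x_patient = R @ x_mri + t
```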
{"title":"Registration and projection method of tumor region projection for breast cancer surgery","authors":"Motoko Kanegae, Jun Morita, S. Shimamura, Yuji Uema, Maiko Takahashi, M. Inami, T. Hayashida, M. Sugimoto","doi":"10.1109/VR.2015.7223365","DOIUrl":"https://doi.org/10.1109/VR.2015.7223365","url":null,"abstract":"This paper introduces a registration and projection method for directly projecting the tumor region for breast cancer surgery assistance based on the breast procedure of our collaborating doctor. We investigated the steps of the breast cancer procedure of our collaborating doctor and how it can be applied for tumor region projection. We propose a novel way of MRI acquisition so we may correlate the MRI coordinates to the patient in the real world. By calculating the transformation matrix from the MRI coordinates and the coordinates from the markers that is on the patient, we are able to register the acquired MRI data to the patient. Our registration and presentation method of the tumor region was then evaluated by medical doctors.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116730762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Preliminary evaluation of a virtual needle insertion training system"
Duc Van Nguyen, Safa Ben Lakhal, A. Chellali
DOI: 10.1109/VR.2015.7223388
Inserting a needle to perform a biopsy requires high haptic sensitivity. Traditional learning methods based on observation and training on real patients are questionable. In this paper, we present a preliminary evaluation of a VR trainer for needle insertion tasks. The system aims to replicate an existing physical setup while overcoming some of its limitations. The results validate some design choices and suggest several user interface improvements.
{"title":"Preliminary evaluation of a virtual needle insertion training system","authors":"Duc Van Nguyen, Safa Ben Lakhal, A. Chellali","doi":"10.1109/VR.2015.7223388","DOIUrl":"https://doi.org/10.1109/VR.2015.7223388","url":null,"abstract":"Inserting a needle to perform a biopsy requires a high haptic sensitivity. The traditional learning methods based on observation and training on real patients are questionable. In this paper, we present a preliminary evaluation of a VR trainer for needle insertion tasks. The system aims to replicate an existing physical setup while overcoming some of its limitations. Results permit to validate some design choices and suggest some UI improvements.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114702311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Third person's footsteps enhanced moving sensation of seated person"
Yujiro Okuya, Y. Ikei, Tomohiro Amemiya, K. Hirota
DOI: 10.1109/VR.2015.7223390
This paper describes a system that presents a pseudo-walking sensation to a seated participant. Vibration was applied to the heel and toe to imitate the cutaneous sensation of the sole during walking, and the sound of footsteps was simultaneously delivered through headphones. This sound presentation used the spatialized footsteps of another walker as well as the participant's own footstep sound; the other walker's sound was moved along several trajectories in the VR space. We conducted an experiment to elucidate how a third person's footstep sound, moved along different trajectories, affects a participant's walking sensation. The results showed that the third person's sound enhanced not only the walking sensation but also the translational sensation of a seated participant. Furthermore, the effect was strongest when the third person's sound moved from in front of the participant to behind them in the VR space.
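As a rough illustration of the front-to-back condition, the sketch below parameterizes a straight walking trajectory past a seated listener; the study's actual trajectories and spatial-audio renderer are not specified here, so the geometry and names are assumptions.

```python
import numpy as np

def front_to_back_trajectory(duration_s, speed_ms=1.4, start_y=3.0, rate_hz=60):
    """Positions of a virtual walker passing the listener (at the origin,
    facing +y) from front (+y) to back (-y) at a typical walking speed."""
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    y = start_y - speed_ms * t           # decreasing y: front -> back
    x = np.zeros_like(t)                 # straight line past the listener
    z = np.zeros_like(t)                 # constant ear height
    return np.stack([x, y, z], axis=1)   # one row per frame, fed to a spatializer
```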
{"title":"Third person's footsteps enhanced moving sensation of seated person","authors":"Yujiro Okuya, Y. Ikei, Tomohiro Amemiya, K. Hirota","doi":"10.1109/VR.2015.7223390","DOIUrl":"https://doi.org/10.1109/VR.2015.7223390","url":null,"abstract":"The present paper describes the system to present a pseudo-walking sensation to a sitting participant. The vibration was added to the heel and toe to imitate cutaneous sensation of the sole during walking. The sound of footsteps was also provided to the participant through headphones simultaneously. In this sound presentation, we used a spatial sound of footsteps of another walker as well as own footstep sound. Another walker's sound was moved along several trajectories in a VR space. We conducted an experiment to elucidate the effect of third person's footstep sound moved differently on the walking sensation of a participant. The result showed that third person's sound enhanced not only walking sensation but also translational sensation of a sitting participant. Furthermore, the effect was the highest when third person's sound came from front to backward of the participant in a VR space.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123527881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"BlenderVR: Open-source framework for interactive and immersive VR"
B. Katz, Dalai Felinto, Damien Touraine, David Poirier-Quinot, P. Bourdot
DOI: 10.1109/VR.2015.7223366
BlenderVR is an open-source framework for interactive and immersive applications, based on an extension of the Blender Game Engine (BGE) to virtual reality. BlenderVR generalizes the BlenderCAVE project to alternate platforms (e.g., HMDs, video walls). The goal is to provide a flexible and easy-to-use framework for creating VR applications across various platforms, exploiting the existing power of the BGE's graphics rendering and physics engine. Compatible with the three major operating systems, BlenderVR has been developed by VR researchers with support from the Blender community. It currently handles multi-screen, multi-user tracked stereoscopic rendering through an efficient low-level master/slave synchronization process, with multimodal interaction via the OSC and VRPN protocols.
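For the OSC side of the interaction layer, a client message takes only a few lines of Python. The sketch below uses the python-osc package; the port and address pattern are invented for the example and are not BlenderVR's actual namespace.

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9001)       # hypothetical OSC server
# Address pattern and payload are illustrative, not BlenderVR's real API.
client.send_message("/vr/user/0/position", [0.0, 1.7, -2.0])
```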
{"title":"BlenderVR: Open-source framework for interactive and immersive VR","authors":"B. Katz, Dalai Felinto, Damien Touraine, David Poirier-Quinot, P. Bourdot","doi":"10.1109/VR.2015.7223366","DOIUrl":"https://doi.org/10.1109/VR.2015.7223366","url":null,"abstract":"BlenderVR is an open-source project framework for interactive and immersive applications based on an extension of the Blender Game Engine to Virtual Reality applications. BlenderVR is a generalization of the BlenderCAVE project, accounting for alternate platforms (e.g., HMD, video-walls). The goal is to provide a flexible and easy to use framework for the creation of VR applications for various platforms, making use of the existing power of the BGE's graphics rendering and physics engine. Compatible with 3 major Operating Systems, BlenderVR has been developed by VR researchers with support from the Blender Community. BlenderVR currently handles multi-screen/multi-user tracked stereoscopic rendering through efficient low-level master/slave synchronization process with multimodal interactions via OSC and VRPN protocols.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123646273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Cooperation in virtual environments with individual views"
Vincent Küszter, G. Brunnett, Daniel Pietschmann
DOI: 10.1109/VR.2015.7223372
When users interact collaboratively in a virtual environment, it cannot be guaranteed that every user has the same input device or access to the same information. Our research aims to understand the effects of such asymmetries on user embodiment in collaborative virtual environments (CVEs). To this end, we have developed a prototyping platform for cooperative interaction between two users. To vary the information a user has, we incorporate "special views" [1] for each person. The platform also supports an easily expandable array of input devices, e.g., mouse/keyboard, Novint Falcon, Razer Hydra, ART Flightstick, and Leap Motion. These devices can provide additional information to a user, such as haptic feedback, but they can also be used to restrain a user, for example by providing a device with fewer degrees of freedom.
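One way to realize the "restraining" asymmetry described above is to filter a richer device's pose down to fewer degrees of freedom before it enters the shared environment. The following sketch is hypothetical, with made-up names, and is not code from the authors' platform.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A 6-DOF pose: position in meters, orientation in degrees."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

def restrict_dof(pose, allowed):
    """Zero every component not in `allowed`, emulating a lower-DOF device."""
    fields = ("x", "y", "z", "yaw", "pitch", "roll")
    return Pose(**{f: getattr(pose, f) if f in allowed else 0.0 for f in fields})

# A 6-DOF tracker pose reduced to planar 2-DOF control:
planar = restrict_dof(Pose(0.3, 1.2, -0.5, 45.0, 10.0, 5.0), {"x", "z"})
```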
{"title":"Cooperation in virtual environments with individual views","authors":"Vincent Küszter, G. Brunnett, Daniel Pietschmann","doi":"10.1109/VR.2015.7223372","DOIUrl":"https://doi.org/10.1109/VR.2015.7223372","url":null,"abstract":"When users are interacting collaboratively in a virtual environment it cannot be guaranteed that every user has the same input device or that they have access to the same information. Our research aims at understanding the effects of such asymmetries on the user embodiment in collaborative virtual environments (CVEs). In order to do this, we have developed a prototyping platform for cooperative interaction between two users. To change the information a user has, we are incorporating “special views”[1] for each person. Also, an easily expandable array of input devices is supported, e.g. Mouse/Keyboard, Novint Falcon, Razer Hydra, ART Flightstick, Leap Motion, etc. Those devices can provide additional information to a user, like haptic feedback, but they can also be used to restrain a user, for example by providing a device with less degrees of freedom.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"505 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122367127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Image-space illumination for augmented reality in dynamic environments"
Lukas Gruber, Jonathan Ventura, D. Schmalstieg
DOI: 10.1109/VR.2015.7223334
We present an efficient approach to probeless light estimation and coherent rendering of augmented reality in dynamic scenes. The approach handles dynamically changing scene geometry and light sources in real time with a single mobile RGB-D sensor, without relying on an invasive light probe. We jointly filter in-view dynamic geometry and outside-view static geometry; the resulting reconstruction provides the input for efficient global-illumination computation in image space. We demonstrate that our approach delivers state-of-the-art augmented reality rendering effects for scenes that are more scalable and more dynamic than those handled by previous work.
{"title":"Image-space illumination for augmented reality in dynamic environments","authors":"Lukas Gruber, Jonathan Ventura, D. Schmalstieg","doi":"10.1109/VR.2015.7223334","DOIUrl":"https://doi.org/10.1109/VR.2015.7223334","url":null,"abstract":"We present an efficient approach for probeless light estimation and coherent rendering of Augmented Reality in dynamic scenes. This approach can handle dynamically changing scene geometry and dynamically changing light sources in real time with a single mobile RGB-D sensor and without relying on an invasive lightprobe. We jointly filter both in-view dynamic geometry and outside-view static geometry. The resulting reconstruction provides the input for efficient global illumination computation in image-space. We demonstrate that our approach can deliver state-of-the-art Augmented Reality rendering effects for scenes that are more scalable and more dynamic than previous work.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127078512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Comparing the performance of natural, semi-natural, and non-natural locomotion techniques in virtual reality"
Mahdi Nabiyouni, Ayshwarya Saktheeswaran, D. Bowman, Ambika Karanth
DOI: 10.1109/3DUI.2015.7131717
One of the goals of much virtual reality (VR) research is to increase realism. In particular, many techniques for locomotion in VR attempt to approximate real-world walking. However, it is not yet fully understood how the design of more realistic locomotion techniques affects user task performance. We performed an experiment to compare a semi-natural locomotion technique (based on the Virtusphere device) with a traditional, non-natural technique (based on a game controller) and a fully natural technique (real walking). We found that the Virtusphere technique was significantly slower and less accurate than both of the other techniques. Based on this result and others in the literature, we speculate that locomotion techniques with moderate interaction fidelity will often have performance inferior to both high-fidelity techniques and well-designed low-fidelity techniques. We argue that our experimental results are an effect of interaction fidelity, and perform an analysis of the fidelity of the three locomotion techniques to support this argument.
{"title":"Comparing the performance of natural, semi-natural, and non-natural locomotion techniques in virtual reality","authors":"Mahdi Nabiyouni, Ayshwarya Saktheeswaran, D. Bowman, Ambika Karanth","doi":"10.1109/3DUI.2015.7131717","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131717","url":null,"abstract":"One of the goals of much virtual reality (VR) research is to increase realism. In particular, many techniques for locomotion in VR attempt to approximate real-world walking. However, it is not yet fully understood how the design of more realistic locomotion techniques affects user task performance. We performed an experiment to compare a semi-natural locomotion technique (based on the Virtusphere device) with a traditional, non-natural technique (based on a game controller) and a fully natural technique (real walking). We found that the Virtusphere technique was significantly slower and less accurate than both of the other techniques. Based on this result and others in the literature, we speculate that locomotion techniques with moderate interaction fidelity will often have performance inferior to both high-fidelity techniques and well-designed low-fidelity techniques. We argue that our experimental results are an effect of interaction fidelity, and perform an analysis of the fidelity of the three locomotion techniques to support this argument.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129074267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Touching sounds: Perception of the curvature of auditory virtual surfaces"
Eric O. Boyer, Lucyle Vandervoorde, Frédéric Bevilacqua, S. Hanneton
DOI: 10.1109/VR.2015.7223341
In this study, we investigated the ability of blindfolded adults to discriminate between concave and convex auditory virtual surfaces. We used a Leap Motion™ device to measure the movements of the hand and fingers. Participants were asked to explore the space above the device with the palm of one hand, and auditory feedback was produced only when the palm moved into the boundaries of the surface. To demonstrate that curvature direction was correctly perceived, we estimated participants' discrimination thresholds with a psychophysical staircase procedure. Two groups of participants received two different sonifications of the surface. The results showed that most participants were able to learn the task; the best results were obtained with auditory feedback driven by the component of the hand velocity tangential to the virtual surface. This work contributes to the introduction of auditory virtual objects in virtual reality.
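Thresholds of this kind are commonly estimated with an adaptive up-down staircase. The sketch below implements a generic 1-up/2-down staircase, which converges near 70.7% correct; the study's actual step sizes and stopping rule are assumptions here.

```python
def staircase(run_trial, start=1.0, step=0.1, floor=0.0, reversals=8):
    """1-up/2-down adaptive staircase. run_trial(level) returns True on a
    correct response. Two correct answers in a row make the task harder
    (level decreases); one error makes it easier (level increases). The
    threshold estimate is the mean level at the reversal points."""
    level, correct_streak, last_dir = start, 0, None
    reversal_levels = []
    while len(reversal_levels) < reversals:
        if run_trial(level):
            correct_streak += 1
            if correct_streak == 2:            # two correct -> step down
                correct_streak, direction = 0, -1
            else:
                continue                       # no adjustment yet
        else:                                  # one error -> step up
            correct_streak, direction = 0, +1
        if last_dir is not None and direction != last_dir:
            reversal_levels.append(level)      # record direction change
        last_dir = direction
        level = max(floor, level + direction * step)
    return sum(reversal_levels) / len(reversal_levels)
```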
{"title":"Touching sounds: Perception of the curvature of auditory virtual surfaces","authors":"Eric O. Boyer, Lucyle Vandervoorde, Frédéric Bevilacqua, S. Hanneton","doi":"10.1109/VR.2015.7223341","DOIUrl":"https://doi.org/10.1109/VR.2015.7223341","url":null,"abstract":"In this study, we investigated the ability of blindfolded adults to discriminate between concave and convex auditory virtual surfaces. We used a Leap MotionTM device to measure the movements of the hand and fingers. Participants were asked to explore the space above the device with the palm of one hand and an auditory feedback was produced only when the palm was moving into the boundaries of the surface. In order to demonstrate that curvature direction was correctly perceived by our participants, we estimated their discrimination thresholds with a psychophysical staircase procedure. Two groups of participants were fed with two different sonification of the surface. Results showed that most of the participants were able to learn the task. The best results were obtained with an auditory feedback related to the component of the hand velocity tangential to the virtual surface. This work proposes a contribution to the introduction in virtual reality of auditory virtual objects.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129662753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}