Self-motion illusions in immersive virtual reality environments
G. Bruder, Frank Steinicke, Phil Wieland
doi: 10.1109/VR.2011.5759434

Motion perception in immersive virtual reality environments differs significantly from motion perception in the real world. For example, previous work has shown that users tend to underestimate travel distances in immersive virtual environments (VEs). As a solution to this problem, some researchers propose scaling the mapped virtual camera motion relative to the tracked real-world movement of the user until real and virtual motion appear to match, i.e., real-world movements are mapped to the VE with a larger gain in order to compensate for the underestimation. Although this approach usually results in more accurate self-motion judgments, introducing discrepancies between real and virtual motion can become a problem, in particular due to misalignment of the two worlds and distorted spatial cognition. In this paper we describe a different approach that induces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs without introducing a quantitative discrepancy between real and virtual motion. We introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that such manipulation of optic flow fields can compensate for the underestimation of travel distances.
AR aided implant templating for unilateral fracture reduction and internal fixation surgery
Fangyang Shen, S. Yue, Qi Yue
doi: 10.1109/VR.2011.5759459

Reconstruction plates (deformable metal plates) are widely used in reduction and internal fixation surgeries for bone fractures at sites with irregular or individually varying anatomical morphology. Traditional surgical procedures require intra-operative manual implant templating, which often prolongs the operation and causes unnecessary damage or hemorrhage. In this paper, we present a novel approach that uses computer graphics and augmented reality (AR) techniques to assist preoperative implant templating, substantially improving these surgical procedures. We exploit the symmetry of the human body to virtually reconstruct the broken skeletal structure from the intact contralateral bones. 3D models of virtual implants are then generated along a drawn path and rendered in an AR environment to guide preoperative implant templating, reducing surgical invasiveness and operation duration. A successful clinical application demonstrates the effectiveness of our method.
A design for a smartphone-based head mounted display
J. Olson, D. Krum, Evan A. Suma, M. Bolas
doi: 10.1109/VR.2011.5759484

Thin computing clients, such as smartphones and tablets, have recently seen rapid growth in display resolution and graphics processing power. In this poster, we show how to leverage these trends to create an experimental wide-field-of-view, stereoscopic 3D head-mounted display (HMD) based on two high-resolution smartphones. This HMD prototype is unique in that the graphics system is entirely onboard, allowing it to be lightweight, wireless, and convenient to use.
Bimanual gestural interface for virtual environments
Julien-Charles Levesque, D. Laurendeau, M. Mokhtari
doi: 10.1109/VR.2011.5759479

In this paper, we present a 3D bimanual gestural interface based on data gloves. We build upon past contributions on gestural interfaces and bimanual interaction to create an efficient and intuitive gestural interface for immersive environments. The proposed interface uses the hands asymmetrically, with the left hand setting the mode of interaction and the right hand acting at a finer level of detail.
Mixed reality for supporting office devices troubleshooting
Frédéric Roulland, S. Castellani, Pascal Valobra, Victor Ciriza, J. O'Neill, Y. Deng
doi: 10.1109/VR.2011.5759458

In this paper we describe the Mixed Reality system we are developing for a real-world application: collaborative remote troubleshooting of broken office devices. The architecture of the system is centered on a 3D virtual representation of the device, augmented with status data coming from the actual device's internal sensors. The purpose of this paper is to illustrate how this approach supports the interactions required by remote collaborative troubleshooting while respecting the technical constraints of a real-world application. We believe it constitutes an interesting opportunity for using Mixed Reality in this domain.
A virtual reality system for the simulation and manipulation of wireless communication networks
T. Rick, Anette von Kapri, T. Kuhlen
doi: 10.1109/VR.2011.5759446

Knowledge of the propagation behavior of radio waves is a fundamental prerequisite for planning and optimizing mobile radio networks. Propagation effects are usually simulated numerically, since real-world measurement campaigns are time-consuming and expensive. Automatic planning algorithms can explore a vast number of network configurations to find good deployment schemes. However, complex urban scenarios demand great emphasis on site-specific details of the propagation environment, which automatic approaches often fail to cover. We have therefore combined the simulation of radio waves with interactive exploration and modification of the propagation environment in a virtual reality prototype application. By coupling real-time simulation with manipulation tasks, we provide an uninterrupted, user-centered workflow.
Acquiring knowledge-in-use in virtual training environments: A theory driven design process
Johanna Bertram, Johannes Moskaliuk, U. Cress
doi: 10.1109/VR.2011.5759465

In the daily work routines of police officers, situations may occur that have not been trained for beforehand because of the cost, danger, time, or effort involved. Yet knowledge-in-use is required to respond adequately to such complex situations, so virtual training is an obvious choice. To develop a virtual training environment, we need an understanding of the underlying learning processes. This paper explicates a theory-driven design process for a virtual training environment and its application in a German state police department. We conceptualize how the acquisition of knowledge-in-use takes place in virtual training environments and focus on what is possible in virtual training.
Random dot markers
Hideaki Uchiyama, H. Saito
doi: 10.1109/VR.2011.5759433

This paper presents a novel approach for detecting and tracking markers composed of randomly scattered dots for augmented reality applications. Compared with traditional square-pattern markers, our random dot markers offer significant advantages: flexible marker design, robustness against occlusion, and support for user interaction. Marker retrieval and tracking are based on geometric-feature-based keypoint matching and tracking. We experimentally demonstrate that the discriminative ability of forty random dots per marker suffices to retrieve up to one thousand markers.
Stable vision-aided navigation for large-area augmented reality
T. Oskiper, Han-Pang Chiu, Zhiwei Zhu, S. Samarasekera, Rakesh Kumar
doi: 10.1109/VR.2011.5759438

In this paper, we present a unified approach for a drift-free and jitter-reduced vision-aided navigation system. The approach is based on an error-state Kalman filter that uses both relative (local) measurements, obtained from image-based motion estimation through visual odometry, and global measurements, obtained by landmark matching against a pre-built visual landmark database. To improve the accuracy of pose estimation for augmented reality applications, we capture the 3D local reconstruction uncertainty of each landmark point as a covariance matrix, which makes the filter implicitly rely more on closer points. We conduct a number of experiments evaluating different aspects of our Kalman filter framework and show that our approach provides highly accurate and stable pose estimates both indoors and outdoors over large areas. The results demonstrate both the long-term stability and the overall accuracy of our algorithm as a solution to the camera tracking problem in augmented reality applications.
Designing a reconfigurable multimodal and collaborative supervisor for Virtual Environment
Pierre Martin, P. Bourdot
doi: 10.1109/VR.2011.5759480

Virtual Reality (VR) systems cannot be promoted for complex applications (involving the interpretation of massive and intricate databases) without natural and "transparent" user interfaces: intuitive interfaces are required to bring non-expert users to VR technologies. Many studies have examined multimodal and collaborative systems in VR. Although these two aspects are usually studied separately, they share interesting similarities. Our work focuses on managing multimodal and collaborative interactions in the same process. We present the similarities between these two processes and the main features of a reconfigurable multimodal and collaborative supervisor for Virtual Environments (VEs). The aim of such a system is to merge information coming from VR devices (tracking, gestures, speech, haptics, etc.) in order to control immersive multi-user applications through the main human communication and sensorimotor channels. The supervisor's architecture is designed to be generic, modular, and reconfigurable (via an XML configuration file) so that it can be applied to many different contexts.