A 3D city model, consisting of 3D building models together with their geospatial positions and orientations, is becoming a valuable resource for virtual reality, navigation systems, civil engineering, and other fields. The purpose of this research is to propose a new framework for generating 3D city models that satisfy the visual and physical requirements of ground-oriented simulation systems. At the same time, the framework should support automatic creation and be cost-effective, which facilitates the usability of the proposed approach. To this end, we suggest a framework that leverages a mobile mapping system, which automatically gathers high-resolution images along with supplementary sensor information such as the position and orientation of each image. To resolve the problems caused by sensor noise and the large number of occlusions, digital map data are fused with the imagery. This paper describes the overall framework, its major processing steps, and the recommended or required techniques for each step.
{"title":"A framework for the automatic 3D city modeling using the panoramic image from mobile mapping system and digital maps","authors":"Hyungki Kim, Soonhung Han","doi":"10.1109/VR.2014.6802087","DOIUrl":"https://doi.org/10.1109/VR.2014.6802087","url":null,"abstract":"3D city model, which consisted of the 3D building models and their geospatial position and orientation, is becoming a valuable resource in virtual reality, navigation systems, civil engineering, etc. The purpose of this research is to propose the new framework to generate the 3D city model that satisfies visual and physical requirements in ground oriented simulation system. At the same time, the framework should meet the demand of the automatic creation and cost-effectiveness, which facilitates the usability of the proposed approach. To do that, I suggest the framework that leverages the mobile mapping system which automatically gathers high resolution images and supplement sensor information like position and direction of the image. And to resolve the problem from the sensor noise and a large number of the occlusions, the fusion of digital map data will be used. This paper describes the overall framework with major process and the recommended or demanded techniques for each processing step.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128666947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We demonstrate geometrically correct projection-based texture mapping onto a deformable object such as a cloth. The system can be used to simulate designs that involve changes in shape, such as sheets of malleable material. The geometrically correct projection onto the cloth is achieved by measuring the object's 3D shape and detecting the retro-reflective markers on the object's surface. Rapid prototyping is presented as an example application of this projection technique.
{"title":"Geometrically-correct projection-based texture mapping onto a cloth","authors":"Yuichiro Fujimoto, Jun Miyazaki, Takafumi Taketomi, H. Kato, B. Thomas, Goshiro Yamamoto, Ross T. Smith","doi":"10.1109/VR.2014.6802105","DOIUrl":"https://doi.org/10.1109/VR.2014.6802105","url":null,"abstract":"We demonstrate the geometrically-correct projection-based texture mapping onto a deformable object like a cloth. This system can be used to simulate design that involves change in shape, such as sheets of malleable material. The geometrically-correct projection-based texture mapping onto a cloth is conducted using the measurement of object's 3D shape and the detection of the retro-reflective marker on the object's surface. Rapid prototyping is used as an example application of this projection technique.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116348373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Annotations of objects in 3D environments are commonly controlled using view management techniques. State-of-the-art view management strategies for external labels operate in 2D image space. This creates problems, because the 2D view of a 3D scene changes over time, and temporal behavior of elements in a 3D scene is not obvious in 2D image space. We propose managing the placement of external labels in 3D object space instead. We use 3D geometric constraints to achieve label placement that fulfills the desired objectives (e.g., avoiding overlapping labels), but also behaves consistently over time as the viewpoint changes. We propose two geometric constraints: a 3D pole constraint, where labels move along a 3D pole sticking out from the annotated object, and a plane constraint, where labels move in a dominant plane in the world. This formulation is compatible with standard optimization approaches for labeling, but overcomes the lack of temporal coherence.
{"title":"Hedgehog labeling: View management techniques for external labels in 3D space","authors":"Markus Tatzgern, Denis Kalkofen, R. Grasset, D. Schmalstieg","doi":"10.1109/VR.2014.6802046","DOIUrl":"https://doi.org/10.1109/VR.2014.6802046","url":null,"abstract":"Annotations of objects in 3D environments are commonly controlled using view management techniques. State-of-the-art view management strategies for external labels operate in 2D image space. This creates problems, because the 2D view of a 3D scene changes over time, and temporal behavior of elements in a 3D scene is not obvious in 2D image space. We propose managing the placement of external labels in 3D object space instead. We use 3D geometric constraints to achieve label placement that fulfills the desired objectives (e.g., avoiding overlapping labels), but also behaves consistently over time as the viewpoint changes. We propose two geometric constraints: a 3D pole constraint, where labels move along a 3D pole sticking out from the annotated object, and a plane constraint, where labels move in a dominant plane in the world. This formulation is compatible with standard optimization approaches for labeling, but overcomes the lack of temporal coherence.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133527182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This demo encapsulates a possible manifestation of a Middle Eastern indigenous dance, Al Ardha, in the form of a serious gaming environment. The presentation also illustrates the interconnection and possible transformation of Intangible Cultural Heritage (ICH) content, such as traditional dances, into a digital kinesthetic learning system. The system, called the Mimicry Understanding and Safeguarding Environment (MUSE), is designed to help museum visitors learn traditional or indigenous dances with the help of motion-sensing technologies. MUSE is a multidisciplinary research project and is expected to analyze the intricacies of various indigenous dances, particularly the Arabic sword dance. The MUSE interface is expected to facilitate museum visitors' awareness, learning, and practice of the Al Ardha dance of the Middle Eastern region. Through its easy-to-learn, user-friendly interface, MUSE can foster playfulness and user engagement, enhancing the experience of museum visitors.
{"title":"MUSE: Understanding traditional dances","authors":"Muqeem Khan","doi":"10.1109/VR.2014.6802107","DOIUrl":"https://doi.org/10.1109/VR.2014.6802107","url":null,"abstract":"This demo encapsulates the possible manifestation of Middle Eastern indigenous dance, Al Ardha, in the form of a serious gaming environment. The presentation also illustrates the interconnection and possible transformation of Intangible Cultural Heritage (ICH) content, such as traditional dances, into a digital kinesthetic learning system. The system is called Mimicry Understanding and Safeguarding Environment (MUSE). It is designed to help museum visitors learn traditional or indigenous dances withthe help of motion-sensing technologies. MUSE is a multidisciplinary research project and is expected to analyze the intricacies of various indigenous dances, particularly the Arabic sword dance. MUSE interface is expected to facilitate museum visitors' awareness, learning, and practice of the Al Ardha dance of the Middle Eastern region. Through its easy-to-learn and user-friendly interface, MUSE can facilitate and foster playfulness and user engagement to enhance the experience of museum visitors.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128075496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we introduce a system that captures an enhanced 3D structure of a room-sized dynamic scene with commodity depth cameras such as the Microsoft Kinect. Capturing an entire dynamic room is challenging. First, the raw data from depth cameras are noisy because the room's large volume conflicts with the cameras' limited optimal working distance. Second, severe occlusions between objects lead to large amounts of missing data in the captured 3D geometry. Our system incorporates temporal information to achieve a noise-free and complete 3D capture of the entire room. More specifically, we pre-scan the static parts of the room offline and track their movements online. For the dynamic objects, we perform non-rigid alignment between frames and accumulate data over time. Our system also supports topology changes of the objects and their interactions. We demonstrate the success of our system in various situations.
{"title":"Temporally enhanced 3D capture of room-sized dynamic scenes with commodity depth cameras","authors":"Mingsong Dou, H. Fuchs","doi":"10.1109/VR.2014.6802048","DOIUrl":"https://doi.org/10.1109/VR.2014.6802048","url":null,"abstract":"In this paper, we introduce a system to capture the enhanced 3D structure of a room-sized dynamic scene with commodity depth cameras such as Microsoft Kinects. It is challenging to capture the entire dynamic room. First, the raw data from depth cameras are noisy due to the conflicts of the room's large volume and cameras' limited optimal working distance. Second, the severe occlusions between objects lead to dramatic missing data in the captured 3D. Our system incorporates temporal information to achieve a noise-free and complete 3D capture of the entire room. More specifically, we pre-scan the static parts of the room offline, and track their movements online. For the dynamic objects, we perform non-rigid alignment between frames and accumulate data over time. Our system also supports the topology changes of the objects and their interactions. We demonstrate the success of our system with various situations.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121251253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Users wearing a head-mounted display while relying on walking-in-place techniques for virtual locomotion tend to physically drift in the direction in which they are heading within the virtual environment. It has previously been demonstrated that different types of feedback may be used to constrain the movement of the user. This poster presents a within-subjects study comparing four methods for ensuring that the user remains within a certain area. The participants were asked to indicate which method they generally preferred and to assess the perceived helpfulness and intrusiveness of the different methods. The results indicate that passive haptic feedback (a carpet) was preferred and was also regarded as the most helpful and the least intrusive. However, the qualitative data gathered suggest that this method might best be combined with feedback in other modalities.
{"title":"A comparison of four different approaches to reducing unintended positional drift during walking-In-Place locomotion","authors":"N. C. Nilsson, S. Serafin, R. Nordahl","doi":"10.1109/VR.2014.6802071","DOIUrl":"https://doi.org/10.1109/VR.2014.6802071","url":null,"abstract":"Users wearing a head-mounted display while relying on Walking-In-Place techniques for virtual locomotion tend to physically drift in the direction which they are headed within the virtual environment. It has previously been demonstrated that different types of feedback may be used to constrain the movement of the user. This poster presents a within-subjects study comparing four methods for ensuring that the user remains within a certain area. The participants were asked to determine which method the generally preferred and assess the perceived helpfulness and intrusiveness of the different methods. The results indicate that passive haptic feedback (a carpet) was preferred and also was regarded as the most helpful and the least intrusive. However, gathered qualitative data suggest that this method might be used in combination with feedback in other modalities.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116136417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an approach that enables real-time augmentation of an environment composed of materials with different texture and reflectance properties, without the need for application-specific hardware or extensive preparation. Our solution uses a set of RGB images of a reconstructed model to optimize the reflectance parameters and light locations. Each image is decomposed into its specular and diffuse components, and we estimate the locations of multiple light sources from specular highlights. The environment is stored in a voxel grid, and we optimize the reflectance properties and colour of each voxel through inverse rendering. We verify our approach with a simulated environment and present results from a corresponding reconstructed environment.
{"title":"Reflectance and light source estimation for indoor AR Applications","authors":"Alexander Plopski, T. Mashita, K. Kiyokawa, H. Takemura","doi":"10.1109/VR.2014.6802072","DOIUrl":"https://doi.org/10.1109/VR.2014.6802072","url":null,"abstract":"We present an approach which enables real-time augmentation of an environment composed of materials with different texture and reflectance properties without the need of application-specific hardware or extensive preparation. Our solution uses a set of RGB images of a reconstructed model to optimize the reflectance parameters and light location. Each image is decomposed into its specular and diffuse components and we estimate the location of multiple light sources from specular highlights. The environment is stored in a voxel grid and we optimize the reflectance properties and colour of each voxel through inverse rendering. We verify our approach with a simulated environment and present results from a corresponding reconstructed environment.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133706136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present the preliminary results of an ongoing study exploring how mixed-agency teams influence feelings of social presence. Participants worked with either a team composed of two virtual humans or a team composed of one virtual human and one real human. We found that while the presence of a human teammate did not affect overall feelings of social presence, it did appear to strengthen participants' perceptions that their virtual teammates were not real.
{"title":"Social presence in mixed agency interactions","authors":"Andrew C. Robb, Benjamin C. Lok","doi":"10.1109/VR.2014.6802076","DOIUrl":"https://doi.org/10.1109/VR.2014.6802076","url":null,"abstract":"In this paper, we present the preliminary results of an ongoing study exploring how mixed-agency teams influence feelings of social presence. Participants worked with a team composed of either two virtual humans or a team composed of one virtual human and one real human. We found that while the presence of a human teammate did not affect overall feelings of social presence, the presence of a human teammate did appear to strengthen participants' perceptions that their virtual teammates were not real.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132532274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fire accidents are among the most frequently occurring disasters leading to loss of life and property. Training fire-escape skills and designing efficient building evacuation plans are therefore increasingly important research topics. Virtual environments (VEs) are widely used to support training and to simulate human response in emergency situations. However, current VE-based training systems offer only limited intelligent non-player characters (NPCs), which lack grounding in fire science. This project introduces a hybrid method combining gaming technology, agent programming, and fire science knowledge to design an evacuation training system for fire wardens. Using a professional numerical fire simulation tool to drive NPC behaviour in a virtual environment provides a new way to enhance the realism of the virtual environment through high-fidelity fire data sources.
{"title":"Simulating cooperative fire evacuation training in a virtual environment using gaming technology","authors":"Mingze Xi, Shamus P. Smith","doi":"10.1109/VR.2014.6802090","DOIUrl":"https://doi.org/10.1109/VR.2014.6802090","url":null,"abstract":"Fire accidents are one of the most frequently occurring disasters leading to life and property loss. Training for fire escape skills and designing efficient building evacuation plans are increasingly important research topics. Virtual environments (VE) are widely used to support training and simulating human response under emergency situations. However, current VE based training systems have limited intelligent non-player characters (NPC) which are lacking in support of fire science knowledge. This project will introduce a hybrid method combining gaming technology, agent programming and fire science knowledge to design an evacuation training system for fire wardens. The success of using professional numerical fire simulation tool to support NPC behaviour in virtual environment will provide a new way to enhance the realism of virtual environment by using high fidelity fire data sources.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126447923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding and modeling how a crowd behaves in a wide variety of situations is an important problem in many areas. For example, during the planning stages, city, traffic, and evacuation engineers use crowd behavior modeling to predict usage patterns and to analyze the safety of their designs. Several research areas benefit from realistic crowd simulation, such as augmented reality, animation, games, virtual therapy, and virtual training. Not only is a realistic rendering of a virtual environment required for these applications, but a realistic simulation of virtual humans is also essential to providing an immersive experience for the users. The goal of my research is to provide an interactive simulation of a crowd in which large numbers of human-like agents interact with each other. A large part of my research comes from understanding and observing what human-like behavior is and how it affects interactions with others and with the environment. In this proposal, I discuss my approaches to modeling crowd behaviors with variation and dynamic changes.
{"title":"Simulating crowd interactions in virtual environments (doctoral consortium)","authors":"Sujeong Kim, M. Lin, Dinesh Manocha","doi":"10.1109/VR.2014.6802088","DOIUrl":"https://doi.org/10.1109/VR.2014.6802088","url":null,"abstract":"Understanding and modeling how a crowd behaves in a wide variety of situations is an important problem in many areas. For example, during the planning stages, city, traffic, and evacuation engineers use crowd behavior modeling to predict usage patterns and to do safety analysis of their designs. Several research areas benefit from realistic simulation of crowds such as augmented reality, animation, games, virtual therapy, and virtual training. Not only is a realistic rendering of a virtual environment required for these applications, but also a realistic simulation of virtual humans is essential to providing an immersive experience for the users. The goal of my research is to provide an interactive simulation of a crowd, where large numbers of human-like agents interact with each other. A large part of my research comes from understanding and observing what human-like behavior is and how it affects interactions with others and the environment. In this proposal, I discuss my approaches to model crowd behaviors with variation and dynamic changes.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115441801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}