Automating the construction of dynamic and multi-resolution 360° panorama for natural scenes with moving objects
Zhigang Zhu, Guangyou Xu, Heng Luo, Qiang Wang
DOI: 10.1109/VR.1999.756937

A new approach is presented to automatically build a dynamic and multi-resolution 360° panorama (DMP) from image sequences taken by a hand-held camera. A multi-resolution representation is built for the more interesting areas by means of camera zooming, and dynamic objects in the scene can be detected and represented separately. The DMP construction method is fast, robust, and automatic, achieving 1 Hz on a 266 MHz PC.

Development of wearable force display (HapticGEAR) for immersive projection displays
M. Hirose, T. Ogi, H. Yano, N. Kakehi
DOI: 10.1109/VR.1999.756931

We describe the design and implementation of a force display for immersive projection displays. To give the user maximum freedom of motion in a large working space, a portable force display grounded on the user's body is necessary. We therefore developed a portable (wearable) force display called HapticGEAR, which makes use of the tension of wires grounded on the user's back. The device is designed to reduce the user's fatigue and to interfere as little as possible with the user's motion and sight.

The software architecture of a real-time battlefield visualization virtual environment
S. Julier, R. King, Brad Colbert, J. Durbin, L. Rosenblum
DOI: 10.1109/VR.1999.756920

This paper describes the software architecture of Dragon, a real-time situational-awareness virtual environment for battlefield visualization. Dragon receives data from a number of different sources and creates a single, coherent, and consistent three-dimensional display. We describe the problem of battlefield visualization and the challenges it imposes, then discuss the Dragon architecture, the rationale for its design, and its performance in an actual application. The battlefield VR system is also suitable for similar civilian domains such as large-scale disaster relief and hostage rescue.

The virtual mail system
Tomoko Imai, Andrew E. Johnson, J. Leigh, Dave Pape, T. DeFanti
DOI: 10.1109/VR.1999.756930

The paper discusses a virtual mail system, V-mail, designed to be used in the CAVE virtual environment. The participants' mail messages and the ongoing modifications of the VE are maintained by a central server. V-mail's user interface is embodied in a virtual friend or pet that follows the participant as he or she interacts with the VE.

The virtual palette and the virtual remote control panel: a device and an interaction paradigm for the Responsive Workbench™
S. Coquillart, G. Wesche
DOI: 10.1109/VR.1999.756953

Compared to most other virtual environments, the Responsive Workbench™, a projection-based virtual environment with one horizontal, table-sized projection plane, offers a much better integration of the virtual and real worlds. However, interacting with these mixed 3D worlds remains a research challenge. This paper introduces a prop-like device, the Virtual Palette, and describes the virtual remote control panel (VRCP), a two-handed interaction technique based on the Virtual Palette for controlling applications.

Virtual chopsticks: object manipulation using multiple exact interactions
Y. Kitamura, Tomohiko Higashi, T. Masaki, F. Kishino
DOI: 10.1109/VR.1999.756951

A technique is proposed for object manipulation with a virtual tool using multiple exact interactions. An exact test using real-time collision detection is introduced for both hand-to-tool and tool-to-object interactions. Chopsticks are adopted as one trial of our technique; although they have a very simple shape, they have multiple functions. Here, a virtual object is manipulated by virtual chopsticks, which are driven by the motion of the user's hand as captured by a hand-gesture input device. Exact interaction between the hand and the chopsticks is applied based on correct table manners. Experimental results demonstrate the effectiveness of the proposed multiple exact interactions, especially for precise object-alignment tasks.

Painting textures with a haptic interface
David E. Johnson, Thomas V. Thompson, Matthew Kaplan, Donald D. Nelson, E. Cohen
DOI: 10.1109/VR.1999.756963

We present a method for painting texture maps directly onto trimmed NURBS models using a haptic interface. The haptic interface enables an artist to use a natural painting style while creating a texture. It avoids the traditional difficulty of mapping between the 2D texture space and the 3D model space by using parametric information available from our haptic tracing algorithm. The system maps user movement in 3D to movement in the 2D texture space and adaptively resizes the paintbrush in texture space to create a uniform stroke on the model.

A framework for interactors in immersive virtual environments
G. Kessler
DOI: 10.1109/VR.1999.756950

The promise of immersive virtual environment (VE) applications has been that they can make interaction with computer information and processes easier by providing a medium that more closely matches the user's real environment. Meeting this promise, however, has proven difficult: the interface between the user and the computer is incomplete, the metaphors for interacting with information are not always obvious, and the tools for incorporating current interaction techniques are not sufficient to the task. We present SVIFT, the Simple Virtual Interactor Framework and Toolkit, which supports efforts to meet the interaction needs of immersive VE applications. SVIFT allows for the design and implementation of interaction techniques that can be easily incorporated into many VE applications and combined with other techniques to produce more complex interactions. We also discuss the differences between desktop and immersive-environment interaction, and the implications of those differences for the design of an interactor framework.

A review of tele-immersive applications in the CAVE research network
J. Leigh, Andrew E. Johnson, T. DeFanti, Maxine D. Brown
DOI: 10.1109/VR.1999.756949

This paper presents an overview of the tele-immersion applications that have been built by collaborators around the world using the CAVERNsoft toolkit, and the lessons learned from building these applications. In particular, the lessons learned are presented as a set of rules of thumb for developing tele-immersive applications in general.

Registration for outdoor augmented reality applications using computer vision techniques and hybrid sensors
R. Behringer
DOI: 10.1109/VR.1999.756958

Registration for outdoor augmented reality (AR) systems cannot rely on the methods developed for indoor use (e.g., magnetic tracking, fiducial markers). Although GPS and the earth's magnetic field can be used to obtain a rough estimate of position and orientation, the precision of this registration method is not high enough for a satisfying AR overlay. Computer vision methods can improve the registration precision by tracking visual cues whose real-world positions are known. We have developed a system that exploits horizon silhouettes to improve the orientation precision of a camera aligned with the user's view. This approach has been shown to provide registration even as a stand-alone system, although the usual limitations of computer vision prohibit its use under unfavorable conditions. This paper describes the approach of registration using horizon silhouettes. Based on the known observer location (from GPS), the 360° silhouette is computed from a digital elevation map database. Registration is achieved when the extracted visual horizon silhouette segment is matched onto this predicted silhouette. Significant features (mountain peaks) provide hypotheses for the match, and several criteria are tested to find the best matching hypothesis. The system is implemented on a PC under Windows NT, and results are shown in the paper.
