Textured virtual walls achieving interactive frame rates during walkthroughs of complex indoor environments
Peter Ebbesmeyer
Pub Date: 1998-03-14. DOI: 10.1109/VRAIS.1998.658499
Presents a new approach for using texture-mapped quadrilaterals as approximate representations for objects that are far away from the viewpoint. The method is suited for interactive visualization of complex indoor environments such as CAD models of large plants. In a pre-processing stage, the 3D model is partitioned by virtual walls. These virtual walls are simple quadrilaterals which divide a large room into a set of separated cells. During the walkthrough phase, the system only renders the geometry of cells surrounding the current viewpoint. All distant geometry is culled and replaced by "textured virtual walls" representing the same part of the model as the culled geometry. A description of techniques is given for minimizing visual artifacts and for controlling the transitions between textures and geometry if the viewpoint moves towards a virtual wall. The approach makes extensive use of texture-mapping hardware. It considerably reduces the number of polygons rendered by the 3D graphics pipeline and therefore contributes to achieving interactive frame rates.
Creating A Virtual Bridge To Reality: The Latest Uses Of Virtual Reality For Mental Health
Dorothy C. Strickland
Pub Date: 1998-03-14. DOI: 10.1109/VRAIS.1998.658497
Constructing 3D natural scene from video sequences with vibrated motions
Zhigang Zhu, Guangyou Xu, X. Lin
Pub Date: 1998-03-14. DOI: 10.1109/VRAIS.1998.658453
This paper presents a systematic approach to automatically constructing 3D natural scenes from video sequences. Dense layered depth maps are derived from image sequences captured by a vibrating camera with only approximately known motion. The approach consists of (1) image stabilization by motion filtering and (2) depth estimation by spatio-temporal texture analysis. The two-stage method not only generalizes the so-called panoramic image method and the epipolar plane image method to handle image-sequence vibrations caused by uncontrollable camera fluctuations, but also bypasses the feature extraction and matching problems encountered in stereo or visual motion analysis. Our approach allows automatic modeling of the real environment for inclusion in VR representations.
The NICE project: learning together in a virtual world
Andrew E. Johnson, Maria Roussos, J. Leigh, C. Vasilakis, C. Barnes, T. Moher
Pub Date: 1998-03-14. DOI: 10.1109/VRAIS.1998.658487
This paper describes the NICE project, an immersive learning environment for children implemented in the CAVE and related multi-user virtual reality (VR) technologies. The NICE project provides an engaging setting where children construct and cultivate simple virtual ecosystems, collaborate via networks with other remotely-located children, and create stories from their interactions in the real and virtual world.
Volume-based tumor neurosurgery planning in the Virtual Workbench
C. G. Guan, L. Serra, R. Kockro, N. Hern, W. Nowinski, Chumpon Chan
Pub Date: 1998-03-14. DOI: 10.1109/VRAIS.1998.658486
We present a virtual reality application for neurosurgical pre-operative planning which is undergoing clinical evaluation at the Singapore General Hospital. The application, based on the ISS Virtual Workbench, lets the neurosurgeon study the brain pathology, blood vessels, skull and surrounding tissue using real-time volumetric rendering of the patient data. With this information, the surgeon can plan the best approach for surgery. To date, seven cases have been planned. The system features measuring markers, multi-modal fusion of a patient's data, different visualization modes, tissue enhancement through manipulation of colour look-up tables, cloning of regions of interest, and interactive pathology outlining.
Vestibular cues and virtual environments
L. Harris, M. Jenkin, D. Zikovitz
Pub Date: 1998-03-14. DOI: 10.1109/VRAIS.1998.658469
The vast majority of virtual environments concentrate on constructing a realistic visual simulation while ignoring non-visual environmental cues. Although these missing cues can to some extent be ignored by an operator, the lack of appropriate cues may contribute to cybersickness and may affect operator performance. We examine the role of vestibular cues to self-motion on an operator's sense of self-motion within a virtual environment. We show that the presence of vestibular cues has a very significant effect on an operator's estimate of self-motion. The addition of vestibular cues, however, is not always beneficial.
Aiding orientation performance in virtual environments with proprioceptive feedback
N. H. Bakker, P. Werkhoven, P. O. Passenier
Pub Date: 1998-03-14. DOI: 10.1109/VRAIS.1998.658419
In most applications of virtual environments (VEs), such as training and design evaluation, a good sense of orientation in the VE is needed. Orientation performance when moving around in the real world relies on visual as well as proprioceptive feedback. However, the navigation metaphors used to move around a VE often lack proprioceptive feedback, and the visual feedback in a VE is often relatively poor compared to that available in the real world. We have therefore quantified the influence of visual and proprioceptive feedback on orientation performance in VEs. Subjects were immersed in a virtual forest and asked to turn through specific angles using three navigation metaphors differing in the kind of proprioceptive feedback provided (no proprioceptive feedback, vestibular feedback, and vestibular plus kinesthetic feedback). The results indicate that turn performance is most accurate when kinesthetic feedback is present, in a condition where subjects use their legs to turn around. This indicates that incorporating this kind of feedback in navigation metaphors is quite beneficial. Orientation based on the visual component alone is the least accurate, leading to progressively larger undershoots for larger angles.
Crashing in cyberspace: evaluating structural behaviour of car bodies in a virtual environment
M. Schulz, T. Ertl, Thomas Reuding
Pub Date: 1998-03-14. DOI: 10.1109/VRAIS.1998.658485
The use of virtual prototypes generated from engineering simulations can be crucial to the efficient development of innovative products. Performance predictions and functional evaluations of a design are possible long before results of real prototype tests are available. With the rise in model complexity, data quantity, computing performance and accuracy, we increasingly find ourselves lacking the tools, methods and metaphors to deal with the information that is being generated. We present new results of ongoing research at the University of Erlangen and BMW in the development of a virtual environment for car-body engineering applications, illustrated by examples from acoustics, vibration and impact dynamics.
MediSim: a prototype VR system for training medical first responders
S. Stansfield, D. Shawver, A. Sobel
Pub Date: 1997-12-31. DOI: 10.1109/VRAIS.1998.658490
This paper presents a prototype virtual reality (VR) system for training medical first responders. The initial application is to battlefield medicine and focuses on the training of medical corpsmen and other front-line personnel who might be called upon to provide emergency triage on the battlefield. The system is built upon Sandia's multi-user, distributed VR platform and provides an interactive, immersive simulation capability. The user is represented by an avatar and is able to manipulate virtual instruments and carry out medical procedures. A dynamic casualty simulation provides realistic cues to the patient's condition (e.g. changing blood pressure and pulse) and responds to the actions of the trainee (e.g. a change in the color of a patient's skin may result from a check of the capillary refill rate). The current casualty simulation is of an injury resulting in a tension pneumothorax. This casualty model was developed by the University of Pennsylvania and integrated into the Sandia MediSim system.