Tobias Haubrich, Sven Seele, R. Herpers, Peter Becker
Underlying semantics are important to facilitate traffic simulations in virtual environments. The work presented here introduces a universal and extensible model for the representation of road network logics. Additionally, processes for setting up road network logics that follow the model are described, and the integration of different traffic simulation approaches is discussed. For evaluation, scenes were created according to different scenarios and semantics were integrated. Results indicate significant time savings from automatic generation of road network logics.
{"title":"Integration of road network logics into virtual environments","authors":"Tobias Haubrich, Sven Seele, R. Herpers, Peter Becker","doi":"10.1109/VR.2014.6802060","DOIUrl":"https://doi.org/10.1109/VR.2014.6802060","url":null,"abstract":"Underlying semantics are important to facilitate traffic simulations in virtual environments. The work presented here introduces an universal and extensible model for the representation of road network logics. Additionally, setup processes of road network logics following the model are described, and the integration of different traffic simulation approaches is discussed. For evaluation, scenes according to different scenarios were created and semantics were integrated. Results indicate significant time savings from automatic generation of road network logics.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124318391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lizeth Vega-Medina, B. Perez-Gutierrez, Gerardo Tibamoso, A. Uribe-Quevedo, Norman Jaimes
Central venous access is an invasive medical procedure of high complexity used in critically ill patients. Its implementation requires great skill and knowledge from a health care specialist. Advances in patient simulators have produced solutions for children and adults; however, newborn simulators are scarce. This paper presents the development of a newborn central venous access simulator for training and educational purposes. The simulation system is composed of three main subsystems: a procedural simulation, showing how the procedure must be correctly performed; a haptic subsystem for increased realism; and a newborn manikin with image projection for guiding the procedure.
{"title":"VR central venous access simulation system for newborns","authors":"Lizeth Vega-Medina, B. Perez-Gutierrez, Gerardo Tibamoso, A. Uribe-Quevedo, Norman Jaimes","doi":"10.1109/VR.2014.6802081","DOIUrl":"https://doi.org/10.1109/VR.2014.6802081","url":null,"abstract":"Central venous access is an invasive medical procedure of high complexity used in critically ill patients. Its implementation requires great skills and knowledge from a health care specialist. Advances in patient simulators present solutions regarding children and adults, however, newborn simulators are scarce. This paper presents the development of a newborn's central venous access simulator for training and educational purposes. The simulation system is composed of three main subsystems: a procedural simulation, showing how the procedure must be correctly performed; a haptic subsystem for increasing realism; and finally a newborn manikin with image projection for guiding the procedure.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117237459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation training using virtual patients can provide many benefits for nursing and medical students in learning patient interaction skills, especially since virtual patients can represent a wide range of patient scenarios and demographics that are not typically represented through roleplay or clinical rotations. However, creation of a single virtual patient scenario may take six months or more, so it is difficult to provide many unique scenarios during the course of a nurse's education. In this work, I propose the user-centered design, creation, and evaluation of a scenario-builder tool to enable nursing faculty to create their own scenarios, reducing development costs.
{"title":"Development of a scenario builder tool for scaffolded virtual patients","authors":"Lauren Cairco, L. Hodges","doi":"10.1109/VR.2014.6802086","DOIUrl":"https://doi.org/10.1109/VR.2014.6802086","url":null,"abstract":"Simulation training using virtual patients can provide many benefits for nursing and medical students in learning patient interaction skills, especially since virtual patients can represent a wide range of patient scenarios and demographics that are not typically represented through roleplay or clinical rotations. However, creation of a single virtual patient scenario may take six months or more, so it is difficult to provide many unique scenarios during the course of a nurse's education. In this work, I propose the user-centered design, creation, and evaluation of a scenario-builder tool to enable nursing faculty to create their own scenarios, reducing development costs.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131567946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jérôme Ardouin, A. Lécuyer, M. Marchal, É. Marchand
In this paper we introduce a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) of up to 360°. Handling such a wide FoV implies the use of non-planar projections and creates specific problems, such as with rasterization and clipping of primitives. We propose a novel pre-clip stage specifically adapted to geometric approaches, for which problems occur with polygons spanning the projection discontinuities. Our approach integrates seamlessly with immersive virtual reality systems, as it is compatible with stereoscopy, head-tracking, and multi-surface projections. Benchmarks of our approach on different hardware setups show that it complies with real-time constraints and can display a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.
{"title":"Stereoscopic rendering of virtual environments with wide Field-of-Views up to 360°","authors":"Jérôme Ardouin, A. Lécuyer, M. Marchal, É. Marchand","doi":"10.1109/VR.2014.6802042","DOIUrl":"https://doi.org/10.1109/VR.2014.6802042","url":null,"abstract":"In this paper we introduce a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) up to 360°. Handling such a wide FoV implies the use of non-planar projections and generates specific problems such as for rasterization and clipping of primitives. We propose a novel pre-clip stage specifically adapted to geometric approaches for which problems occur with polygons spanning across the projection discontinuities. Our approach integrates seamlessly with immersive virtual reality systems as it is compatible with stereoscopy, head-tracking, and multi-surface projections. The benchmarking of our approach with different hardware setups could show that it is well compliant with real-time constraint, and capable of displaying a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"237 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130907007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a novel approach to directing virtual evacuees in emergencies using non-uniform safety fields. The safety field assigns each position a safety level that describes its degree of danger in the current situation. Our method uses spread potential to generate smooth escape routes, and we combine it with a state-of-the-art agent-based collision avoidance algorithm to prevent oscillation. In practice, our algorithm performs real-time navigation for dozens of evacuees in emergency scenarios.
{"title":"Real-time path planning in emergency using non-uniform safety fields","authors":"Bangrui Liu, A. Hao","doi":"10.1109/VR.2014.6802067","DOIUrl":"https://doi.org/10.1109/VR.2014.6802067","url":null,"abstract":"We present a novel approach to direct virtual evacuees in emergency using non-uniform safety fields. The safety fields uses safety level to describe the danger degrees corresponding to the current situation of their positions. Our method also uses spread potential to generate smooth escape routes. We also combine our method with the most current agent-based collision avoidance algorithm to prevent oscillation. In practice, our algorithm can perform real-time navigation for dozens of evacuees in emergency scenarios.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130283388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A common problem with Optical See-Through (OST) Augmented Reality (AR) is misalignment, or registration error, with the amount of acceptable error heavily dependent on the type of application. Approximation methods, driven by user feedback, have been developed to estimate the necessary corrections for alignment errors. These calibration methods, however, are susceptible to induced error from system and environmental sources, such as human alignment error. The proposed research plan is intended to further the development of accurate and robust calibration methods for OST AR systems by quantifying the impact of specific factors shown to contribute to calibration error. An important aspect of this research will be to develop methods for examining each factor in isolation in order to determine the independent error contribution of each source. This will facilitate the establishment of acceptable thresholds for each type of error and be a meaningful step toward defining quality metrics for OST AR calibration techniques.
{"title":"Quantification of error from system and environmental sources in Optical See-Through head mounted display calibration methods","authors":"Kenneth R. Moser","doi":"10.1109/VR.2014.6802089","DOIUrl":"https://doi.org/10.1109/VR.2014.6802089","url":null,"abstract":"A common problem with Optical See-Through (OST) Augmented Reality (AR) is misalignment or registration error with the amount of acceptable error being heavily dependent upon the type of application. Approximation methods, driven by user feedback, have been developed to estimate the necessary corrections for alignment errors. These calibration methods, however, are susceptable to induced error from system and environmental sources, such as human alignment error. The proposed research plan is intended to further the development of accurate and robust calibration methods for OST AR systems by quantifying the impact of specific factors shown to contribute to calibration error. An important aspect of this research will be to develop methods for examining each factor in isolation in order to determine the independent error contribution of each source. This will facilitate the establishment of acceptable thresholds for each type of error and be a meaningful step toward defining quality metrics for OST AR calibration techniques.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114850478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pramod Chembrammel, Naveen Kumar Sankaran, T. Kesavadas
Summary form only given. The video shows experiments demonstrating the control of a robot using a brain-computer interface. This is done using “actemes”, the fundamental units of action. Actemes can be combined to perform complex tasks: the user imagines a complex action as an ordered combination of actemes. The user's EEG during the process is collected and classified in real time using independent component analysis (ICA) [1, 2] and a neural network classifier [3, 4] to identify the actemes. This ordered set of actemes can be considered a meaningful sentence in a language, and the order in which the complex action is constructed is called the grammar. Two such tasks are demonstrated in these experiments. The first task involves inserting a round peg into a hole and rotating the peg in order to simulate the insertion of a screw into a threaded hole; the actemes involved are “insert” and “rotate”. The second task is a point-to-point movement of a peg, in which the peg is to be moved from a circular zone at the bottom right of the screen to a circular zone centered at the cross hair at the top left. A representative image is shown in Fig. 1.
{"title":"Control of robot using a brain computer interface","authors":"Pramod Chembrammel, Naveen Kumar Sankaran, T. Kesavadas","doi":"10.1109/VR.2014.6802094","DOIUrl":"https://doi.org/10.1109/VR.2014.6802094","url":null,"abstract":"Summary form only given. In the video, experiments demonstrating the control of robot using a brain computer interface are shown. This is done using “actemes”. Actemes are fundamental units of action. The actemes can be combined to perform complex tasks. The user who performs the tasks imagines the complex actions as an ordered combination of actemes. The EEG of the user during the process is collected and classified in real-time using independent component analysis (ICA)[1, 2] and a neural network classifier[3, 4] to identify the actemes. This ordered set of actemes can be considered to be meaningful sentence in a language. The order in which the complex action is constructed is called the grammar. Two such tasks are demonstrated using these experiments. The first task involves insertion of round peg into a hole and rotating the peg inorder to simulate the insertion of a screw into threaded hole. The actemes involved are “insert” and “rotate”. The second task is a point-to-point movement of a peg in which the peg is to be moved from a circular zone at the bottom right to the circular zone centered at the cross hair at the top left of the screen. The representative image is shown in Fig 1.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127237493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We have conducted an experiment to study the effect of an occluding surface on the accuracy of near-field depth matching in augmented reality (AR). Our experiment replicated a similar experiment conducted by Edwards et al. [2]. We used an AR haploscope [1], which allowed us to independently manipulate the accommodative demand and vergence angle of the visible image. Fifteen observers matched the perceived depth of an AR-presented virtual object with a physical pointer. Overall, observers overestimated depth by 5 mm or less in the presence of the occluder, while in its absence they overestimated depth by 5 to 10 mm. The data from Edwards et al. [2] are normalized; when we applied the same normalization procedure to our own data, our results did not agree with theirs. We suspect that eye vergence explains these results.
{"title":"The effect of an occluder on near field depth matching in optical see-through augmented reality","authors":"Chunya Hua, Kenneth R. Moser, J. Swan","doi":"10.1109/VR.2014.6802095","DOIUrl":"https://doi.org/10.1109/VR.2014.6802095","url":null,"abstract":"We have conducted an experiment to study the effect of an occluding surface on the accuracy of near field depth matching in augmented reality (AR). Our experiment was based on replicating a similar experiment conducted by Edwards et al. [2]. We used an AR haploscope [1], which allows us to independently manipulate accommodative demand and vergence angle of the visible image. Fifteen observers matched the perceived depth of an AR-presented virtual object with a physical pointer. Overall, observers overestimated depth by 5 mm or less in the presence of the occluder, while in the absence of an occluder they overestimated depth by 5 to 10 mm. The data from Edwards et al. [2] is normalized, and when we performed the same normalization procedure on our own data, our results do not agree with Edwards et al. [2]. We suspect that eye vergence explains these results.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"2672 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125686900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}