Catherine Stocker, Ben Sunshine-Hill, John Drake, Ian Perera, Joseph T. Kider, N. Badler
In this paper we investigate the utility of an interactive, desktop-based virtual reality (VR) system for training personnel in hazardous working environments. Employing a novel software model, CRAM (Course Resource with Active Materials), we asked participants to learn a specific aircraft maintenance task. The evaluation sought to identify the type of familiarization training that would be most useful prior to hands-on training, as well as afterward for skill maintenance. We found that participants develop an increased awareness of hazards when training with stimulating technology — in particular (1) interactive virtual simulations and (2) videos of an instructor demonstrating a task — versus simply studying (3) a set of written instructions. The results also indicate that participants prefer to train with these technologies over standard written instructions. Finally, demographic data collected during the evaluation suggests future directions for VR systems to develop a more robust and stimulating hazard-training environment.
"CRAM it! A comparison of virtual, live-action and written training systems for preparing personnel to work in hazardous environments." 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759444
The paper describes the use of interactive Virtual Prototypes (iVPs) for the specification of consumer products and for evaluating the perceived quality of a product while it is still in conceptual form. iVPs are based on multimodal interaction, including force feedback and sound in addition to 3D stereoscopic visualization. The fidelity of the prototypes has been evaluated against the corresponding real products when used to perform the same tests. Unlike the traditional use of Virtual Prototypes, which aims at evaluating the product design, we have used iVPs for the interaction design of a new product, i.e., as a means to define the design parameters used in the specification of a new product.
"The use of interactive Virtual Prototypes for products specification in the concept design phase." M. Bordegoni, F. Ferrise, Joseba Lizaranzu. 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759466
We conducted a pilot study to determine whether users of a stereoscopic immersive tennis training system can accurately perceive the spatial position of a virtual ball traveling at high speed, track it, and hit it at the correct time with a real racket. To ensure that the tennis ball appears spatially accurate from the player's viewpoint when projected onto the display walls, the positions of both eyes are obtained using head tracking. The tracker-to-display latency was determined to be 40 milliseconds. The pilot study with proficient players shows that they are able to accurately intersect the trajectory of the virtual ball with their racket, with an average error of less than 0.1 m. This initial finding will be useful for immersive virtual reality applications involving high-speed tasks.
"Pilot study on the spatial and temporal accuracies of hitting a high-speed virtual ball in tennis simulation." Fong Wee Teck. 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759492
Magnus Axholt, Martin A. Skoglund, Stephen D. O'Connell, M. Cooper, S. Ellis, A. Ynnerman
The parameter estimation variance of the Single Point Active Alignment Method (SPAAM) is studied through an experiment in which 11 subjects were instructed to create alignments using an Optical See-Through Head-Mounted Display (OSTHMD), such that three separate correspondence point distributions were acquired. Modeling the OSTHMD and the subject's dominant eye as a pinhole camera, the findings show that a correspondence point distribution well spread along the user's line of sight yields parameter estimates with lower variance. The estimated eye point location is studied in particular detail. The experimental findings are complemented with simulated data showing that the image plane orientation is sensitive to the number of correspondence points. The simulated data also illustrate some interesting properties of the numerical stability of the calibration problem as a function of alignment noise, number of correspondence points, and correspondence point distribution.
"Parameter estimation variance of the single point active alignment method in optical see-through head mounted display calibration." 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759432
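Because SPAAM models the eye-display pair as a pinhole camera, the calibration step reduces to estimating a 3x4 projection matrix from 2D screen / 3D world correspondence points. The following is a minimal, generic Direct Linear Transform (DLT) sketch of that linear estimation problem; it illustrates the underlying mathematics only and is not the authors' implementation:

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Estimate a 3x4 pinhole projection matrix P from >= 6
    correspondences (X, Y, Z) <-> (u, v) via the Direct Linear
    Transform: stack two homogeneous equations per point and take
    the right singular vector of the smallest singular value."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # null-space vector, reshaped to P
```

With noise-free alignments the matrix is recovered exactly (up to scale); with noisy alignments, the variance of the estimate depends on how the correspondence points are distributed, which is what the experiment above investigates.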
We present a novel bimanual body-directed travel technique, PenguFly (PF), and compare it with two standard travel-by-pointing techniques by conducting a between-subject experiment in a CAVE. In PF, the positions of the user's head and hands are projected onto the ground, and travel direction and speed are computed based on direction and magnitude of the vector from the midpoint of the projected hand positions to the projected head position. The two baseline conditions both use a single hand to control the direction, with speed controlled discretely by button pushes with the same hand in one case, and continuously by the distance between the hands in the other case. Users were asked to travel through a simple virtual world and collect virtual coins within a set time. We found no significant differences between travel conditions for reported presence or usability, but a significant increase in nausea with PF. Total travel distance was significantly higher for the baseline condition with discrete speed selection, whereas travel accuracy in terms of coin-to-distance ratio was higher with PF.
"Comparing steering-based travel techniques for search tasks in a CAVE." Anette von Kapri, T. Rick, Steven K. Feiner. 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759443
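The PenguFly steering computation described above can be sketched in a few lines: project head and hands onto the ground plane, then take travel direction and speed from the vector between the projected hand midpoint and the projected head. This is a minimal reading of the abstract's description; the speed gain and the coordinate convention (y up) are assumptions, not details from the paper:

```python
import numpy as np

def pengufly_velocity(head, left_hand, right_hand, gain=1.0):
    """Sketch of PenguFly steering: direction and speed come from the
    vector running from the midpoint of the ground-projected hand
    positions to the ground-projected head position. Inputs are
    tracked 3D positions (x, y, z) with y up; `gain` is a
    hypothetical speed scale not specified in the abstract."""
    # Project onto the ground plane by dropping the vertical component.
    h = np.array([head[0], head[2]], dtype=float)
    m = (np.array([left_hand[0], left_hand[2]], dtype=float) +
         np.array([right_hand[0], right_hand[2]], dtype=float)) / 2.0
    v = h - m                         # hand midpoint -> head, on the ground
    speed = gain * np.linalg.norm(v)  # speed grows with the lean
    direction = v / np.linalg.norm(v) if speed > 0 else np.zeros(2)
    return direction, speed
```

Leaning the head out past the hands yields travel in the lean direction; standing upright with hands symmetric under the head yields near-zero speed.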
We propose a three-dimensional (3D) haptic modeling system that enables a user to create 3D models as though drawing pictures in mid-air, and to touch the created models. The proposed system uses our ungrounded pen-shaped kinesthetic display as the interface. This device can generate kinesthetic sensations on the user's fingers. Because the system does not use mechanical linkages, the user can move his or her hand freely while feeling the sensation of touching virtual objects. The user can easily and intuitively create various 3D shapes by drawing closed curves in the air with the device. The created shapes are generated in a physics-based simulation environment and are displayed as 3D images, so the user can touch and see the drawn shapes as though they existed in reality.
"3D Haptic modeling system using ungrounded pen-shaped kinesthetic display." Sho Kamuro, K. Minamizawa, S. Tachi. 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759476
We introduce a novel type of marker with randomly scattered dots for augmented reality applications. Compared with traditional square markers, our markers have several significant advantages: flexible marker design, robustness against occlusion, and support for user interaction. Our markers do not need a black frame, and their shape is not limited to a square, because marker retrieval and tracking are based on geometric-feature-based keypoint matching. In our demonstration, we show real-time simultaneous retrieval and tracking of the markers on a laptop.
"Random dot markers." Hideaki Uchiyama, H. Saito. 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759503
Crystian Wendel M. Leão, J. P. Lima, V. Teichrieb, Eduardo S. Albuquerque, J. Kelner
Augmented Reality applications overlay virtual objects on a real scene, taking context into account, in order to present additional information to the end user. More advanced applications now also make use of Diminished Reality, which removes real objects from a scene. This paper describes an approach that combines Augmented Reality and Diminished Reality techniques to modify real objects present in applications. The proposed approach removes an object and replaces it with a purposely modified replica. The solution uses dynamic texture techniques and inpainting to enhance the visual quality of the modification. The results are promising with respect to both the realism of the modified real object and the performance of the application.
"Demo — Altered reality: Augmenting and diminishing reality in real time." 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759497
Takuji Narumi, Shinya Nishizaka, Takashi Kajinami, T. Tanikawa, M. Hirose
In our research demonstration, we show "MetaCookie+", which enables the user to experience various tastes without changing the chemical composition of the food, by exploiting interactions between modalities. It is a pseudo-gustatory display that combines the Edible Marker system, which can detect the state (number, shape, and 6-DOF coordinates) of each piece of bitten or divided food in real time, with a "Pseudo-gustation" method that changes the perceived taste of food by changing its appearance and scent.
"MetaCookie+." 2011 IEEE Virtual Reality Conference. DOI: 10.1109/VR.2011.5759500