Continuous automatic calibration for optical see-through displays (doi:10.1109/VR.2015.7223385)
Kenneth R. Moser, Yuta Itoh, J. Swan
The recent advent of consumer-level optical see-through (OST) head-mounted displays (HMDs) has greatly broadened the accessibility of Augmented Reality (AR), not only to researchers but to the general public as well. This increased user base heightens the need for robust automatic calibration mechanisms suited to nontechnical users. We are developing a fully automated calibration system for two stereo OST HMDs, a consumer-level model and a prototype, based on the recently introduced interaction-free display calibration (INDICA) method. We are also developing an evaluation process to assess the performance of the system when used by non-expert subjects.
{"title":"Continuous automatic calibration for optical see-through displays","authors":"Kenneth R. Moser, Yuta Itoh, J. Swan","doi":"10.1109/VR.2015.7223385","DOIUrl":"https://doi.org/10.1109/VR.2015.7223385","url":null,"abstract":"The current advent of consumer level optical see-through (OST) head-mounted displays (HMD's) has greatly broadened the accessibility of Augmented Reality (AR) to not only researchers but also the general public as well. This increased user base heightens the need for robust automatic calibration mechanisms suited for nontechnical users. We are developing a fully automated calibration system for two stereo OST HMD's, a consumer level and prototype model, based on the recently introduced interaction free display calibration (INDICA) method. Our current efforts are also focused on the development of an evaluation process to assess the performance of the system during use by non-expert subjects.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115709379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D position measurement of planar photo detector using gradient patterns (doi:10.1109/VR.2015.7223370)
Tatsuya Kodera, M. Sugimoto, Ross T. Smith, B. Thomas
We propose a three-dimensional position measurement method that employs planar photo detectors to calibrate a Spatial Augmented Reality system of unknown geometry. In Spatial Augmented Reality, projectors overlay images onto objects in the physical environment, which requires aligning the projected images with those objects. Traditional camera-based 3D position tracking systems, such as multi-camera motion capture systems, detect the positions of optical markers in the two-dimensional image plane of each camera, and therefore require multiple cameras at known locations to obtain the 3D positions of the markers. We introduce a method for detecting the 3D position of a planar photo detector by projecting gradient patterns. The main contribution of our method is that it simultaneously aligns the projected images with the physical objects and measures the geometry of those objects for Spatial Augmented Reality applications.
{"title":"3D position measurement of planar photo detector using gradient patterns","authors":"Tatsuya Kodera, M. Sugimoto, Ross T. Smith, B. Thomas","doi":"10.1109/VR.2015.7223370","DOIUrl":"https://doi.org/10.1109/VR.2015.7223370","url":null,"abstract":"We propose a three dimensional position measurement method employing planar photo detectors to calibrate a Spatial Augmented Reality system of unknown geometry. In Spatial Augmented Reality, projectors overlay images onto an object in the physical environment. For this purpose, the alignment of the images and physical objects is required. Traditional camera based 3D position tracking systems, such as multi-camera motion capture systems, detect the positions of optical markers in two-dimensional image plane of each camera device, so those systems require multiple camera devices at known locations to obtain 3D position of the markers. We introduce a detection method of 3D position of a planar photo detector by projecting gradient patterns. The main contribution of our method is to realize an alignment of the projected images with the physical objects and measuring the geometry of the objects simultaneously for Spatial Augmented Reality applications.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114564845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-projector display system of arbitrary shape, size and resolution (doi:10.1145/2782782.2792500)
A. Majumder, Duy-Quoc Lai, M. A. Tehrani
In this demo we will demonstrate the integration of general content delivery from a Windows desktop to a multi-projector display of arbitrary shape, size and resolution, automatically calibrated using our calibration methods. We have developed these fully automatic geometric and color registration techniques in our lab for deploying seamless multi-projector displays on popular non-planar surfaces (e.g., cylinders, domes, truncated domes). This work has received significant attention in both VR and visualization venues in the past 5 years, and this is the first time such calibration has been integrated with content delivery.
{"title":"A multi-projector display system of arbitrary shape, size and resolution","authors":"A. Majumder, Duy-Quoc Lai, M. A. Tehrani","doi":"10.1145/2782782.2792500","DOIUrl":"https://doi.org/10.1145/2782782.2792500","url":null,"abstract":"In this demo we will demonstrate integration of general content delivery from a windows desktop to a multi-projector display of arbitrary, shape, size and resolution automatically calibrated using our calibration methods. We have developed these sophisticated completely automatic geometric and color registration techniques in our lab for deploying seamless multi-projector displays on popular non-planar surfaces (e.g. cylinders, domes, truncated domes). This work has gotten significant attention in both VR and Visualization venues in the past 5 years and this will be the first time such calibration will be integrated with content delivery.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116212669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Does virtual reality affect visual perception of egocentric distance? (doi:10.1109/VR.2015.7223403)
Thomas Rousset, C. Bourdin, Cedric Goulon, Jocelyn Monnoyer, J. Vercher
Virtual reality, in the form of driving simulators, is increasingly used to study human behavior in mobility. It is thus crucial to ensure that the perception of space and motion is affected little, if at all, by the virtual environment (VE). The aim of this study was to establish metrics of distance perception in VEs and to determine whether these metrics depend on two interactive factors: stereoscopy and motion parallax. After a training session, participants were asked, while driving, to estimate the relative location (5 to 80 m) of a car on the same road. The overall results suggest that distance perception in this range does not depend on the interactive factors. On average, as generally reported, participants underestimated distances regardless of the viewing conditions. However, the study revealed large interpersonal variability: two profiles of participants emerged, those who perceived distances in VR quite accurately and those who underestimated distances, as is usually reported. This classification was correlated with participants' level of performance during the training phase. Furthermore, learning performance was predictive of participants' subsequent behavior.
{"title":"Does virtual reality affect visual perception of egocentric distance?","authors":"Thomas Rousset, C. Bourdin, Cedric Goulon, Jocelyn Monnoyer, J. Vercher","doi":"10.1109/VR.2015.7223403","DOIUrl":"https://doi.org/10.1109/VR.2015.7223403","url":null,"abstract":"Virtual reality (driving simulators) tends to generalize for the study of human behavior in mobility. It is thus crucial to ensure that perception of space and motion is little or not affected by the virtual environment (VE). The aim of this study was to determine a metrics of distance perception in VEs and whether this metrics depends on interactive factors: stereoscopy and motion parallax. After a training session, participants were asked, while driving, to estimate the relative location (5 to 80 m) of a car on the same road. The overall results suggest that distance perception in this range does not depend on interactive factors. In average, as generally reported, subjects underestimated the distances whatever the vision conditions. However, the study revealed a large interpersonal variability: two profiles of participants were defined, those who quite accurately perceived distances in VR and those who underestimated distances as usually reported. Overall, this classification was correlated to the level of performance of participants during the training phase. Furthermore, learning performance is predictive of the behavior of participants.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124884227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile user interfaces for efficient verification of holograms (doi:10.1109/VR.2015.7223333)
Andreas Hartl, Jens Grubert, Christian Reinbacher, Clemens Arth, D. Schmalstieg
Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. View-dependent elements such as holograms are of particular interest, but the expertise of the individuals performing the task varies greatly. Augmented Reality systems can provide all relevant information on standard mobile devices. However, hologram verification still takes long and places considerable load on the user. We aim to address this drawback by first presenting a workflow for recording and automatically matching hologram patches. We then present several user interfaces for hologram verification, designed to noticeably reduce verification time. We evaluate the most promising interfaces in a user study with prototype applications running on off-the-shelf hardware. Our results indicate that there is a significant difference in capture time between interfaces, but that users do not prefer the fastest interface.
{"title":"Mobile user interfaces for efficient verification of holograms","authors":"Andreas Hartl, Jens Grubert, Christian Reinbacher, Clemens Arth, D. Schmalstieg","doi":"10.1109/VR.2015.7223333","DOIUrl":"https://doi.org/10.1109/VR.2015.7223333","url":null,"abstract":"Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. In particular, view-dependent elements such as holograms are interesting, but the expertise of individuals performing the task varies greatly. Augmented Reality systems can provide all relevant information on standard mobile devices. Hologram verification still takes long and causes considerable load for the user. We aim to address this drawback by first presenting a work flow for recording and automatic matching of hologram patches. Several user interfaces for hologram verification are presented, aiming to noticeably reduce verification time. We evaluate the most promising interfaces in a user study with prototype applications running on off-the-shelf hardware. Our results indicate that there is a significant difference in capture time between interfaces but that users do not prefer the fastest interface.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123077462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MRI overlay system using optical see-through for marking assistance (doi:10.1109/VR.2015.7223384)
Jun Morita, S. Shimamura, Motoko Kanegae, Yuji Uema, Maiko Takahashi, M. Inami, T. Hayashida, M. Sugimoto
In this paper we propose an augmented reality system that superimposes MRI data onto a patient model using a half-silvered mirror and a handheld device. By tracking the coordinates of the patient model and the handheld device with optical markers, we are able to transform the images to the corresponding position. The MRI data are converted into voxels so that the user can view the MRI from many different angles.
{"title":"MRI overlay system using optical see-through for marking assistance","authors":"Jun Morita, S. Shimamura, Motoko Kanegae, Yuji Uema, Maiko Takahashi, M. Inami, T. Hayashida, M. Sugimoto","doi":"10.1109/VR.2015.7223384","DOIUrl":"https://doi.org/10.1109/VR.2015.7223384","url":null,"abstract":"In this paper we propose an augmented reality system that superimposes MRI onto the patient model. We use a half-silvered mirror and a handheld device to superimpose the MRI onto the patient model. By tracking the coordinates of the patient model and the handheld device using optical markers, we are able to transform the images to the correlated position. Voxel data of the MRI are made so that the user is able to view the MRI from many different angles.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123274857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shark Punch: A virtual reality game for aquatic rehabilitation (doi:10.1109/VR.2015.7223397)
J. Quarles
We present Shark Punch, a novel underwater VR game in which the user must fend off a virtual great white shark with real punches in a real underwater environment. This poster presents our underwater VR system and the iterative design process we followed through field tests with a user with disabilities. We conclude with proposed usability, accessibility, and system design guidelines for future underwater VR rehabilitation games.
{"title":"Shark punch: A virtual reality game for aquatic rehabilitation","authors":"J. Quarles","doi":"10.1109/VR.2015.7223397","DOIUrl":"https://doi.org/10.1109/VR.2015.7223397","url":null,"abstract":"We present a novel underwater VR game - Shark Punch - in which the user must fend off a virtual Great White shark with real punches in a real underwater environment. This poster presents our underwater VR system and our iterative design process through field tests with a user with disabilities. We conclude with proposed usability, accessibility, and system design guidelines for future underwater VR rehabilitation games.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124605783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marioneta: Virtual puppeteer experience (doi:10.1109/VR.2015.7223448)
H. Byun, Emily Chang, Maria Alejandra Montenegro, Alexander Moser, Christina Tarn, Shirley J. Saldamarco, R. Comley
Marioneta is an installation for the Children's Museum of Pittsburgh that uses the Microsoft Kinect v2 to allow guests to embody a collection of antique puppets in a virtual environment. The final installation in the museum is shown in Fig. 1. The focus is on creating an experience wherein elements in the world react to the users' actions through these puppets [1]. The puppet models in the experience are based on a collection of puppets donated to the museum by Margo Lovelace. The original puppets are carefully exhibited in a large display case on a wall in the museum. Many of the puppets in the museum's collection are antiques, fragile or valuable, and not suited to hands-on play by the museum's young visitors. Marioneta uses technology to make the museum's puppets available for imaginative and interesting play [2]. The experience is composed of auto-rotating seasonal stages and season-related interactive objects with visual and auditory feedback. Users can throw a pumpkin in fall, pick up an ice ball in winter, play with cowbells in spring, and break a lantern filled with fireflies in summer. One of the stage scenes is shown in Fig. 2. Marioneta is an updated version of Virpets, which began in 2001 and remained in the museum for over 10 years [3].
{"title":"Marioneta: Virtual puppeteer experience","authors":"H. Byun, Emily Chang, Maria Alejandra Montenegro, Alexander Moser, Christina Tarn, Shirley J. Saldamarco, R. Comley","doi":"10.1109/VR.2015.7223448","DOIUrl":"https://doi.org/10.1109/VR.2015.7223448","url":null,"abstract":"Marioneta is an installation for the Children's Museum of Pittsburgh which uses the Microsoft Kinect v2 to allow guests to embody a collection of antique puppets in a virtual environment. Final installation in the museum is shown in Fig 1. The focus is on creating an experience wherein elements in the world react to the users' actions through these puppets[1]. Puppet models in the experience are based on a collection of puppets donated to the museum by Margo Lovelace. The original puppets are carefully exhibited in a large display case on the wall in the museum. Many of the puppets in the museum's collection are antiques, fragile or valuable and not suited to hands-on play by the museum's young visitors. Marioneta uses technology to make museum puppets available for imaginative and interesting play[2]. The experience is composed of auto-rotating seasonal stages and season related interactive objects that have visual and audial feedback. Users can throw a pumpkin in fall, pick up an ice ball in winter, play with cowbells in spring, and break a lantern filled with fireflies in summer. One of the stage scenes is shown in Fig 2. Marioneta is an updated version of Virpets, which began in 2001 and remained over 10 years in the museum[3].","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132257014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Touch sensing on non-parametric rear-projection surfaces: A physical-virtual head for hands-on healthcare training (doi:10.1109/VR.2015.7223326)
Jason Hochreiter, Salam Daher, A. Nagendran, Laura González, G. Welch
We demonstrate a generalizable method for unified multitouch detection and response on a human head-shaped surface with a rear-projected animated 3D face. The method helps achieve hands-on, touch-sensitive training with dynamic physical-virtual patient behavior. The method, which generalizes to other non-parametric rear-projection surfaces, requires one or more infrared (IR) cameras, one or more projectors, IR light sources, and a rear-projection surface. IR light reflected off human fingers is captured by cameras with matched IR-pass filters, allowing the localization of multiple finger touch events. These events are tightly coupled with the rendering system to produce auditory and visual responses on the animated face displayed by the projector(s), resulting in a responsive, interactive experience. We illustrate the applicability of our physical prototype in a medical training scenario.
{"title":"Touch sensing on non-parametric rear-projection surfaces: A physical-virtual head for hands-on healthcare training","authors":"Jason Hochreiter, Salam Daher, A. Nagendran, Laura González, G. Welch","doi":"10.1109/VR.2015.7223326","DOIUrl":"https://doi.org/10.1109/VR.2015.7223326","url":null,"abstract":"We demonstrate a generalizable method for unified multitouch detection and response on a human head-shaped surface with a rear-projection animated 3D face. The method helps achieve hands-on touch-sensitive training with dynamic physical-virtual patient behavior. The method, which is generalizable to other non-parametric rear-projection surfaces, requires one or more infrared (IR) cameras, one or more projectors, IR light sources, and a rear-projection surface. IR light reflected off of human fingers is captured by cameras with matched IR pass filters, allowing for the localization of multiple finger touch events. These events are tightly coupled with the rendering system to produce auditory and visual responses on the animated face displayed using the projector(s), resulting in a responsive, interactive experience. We illustrate the applicability of our physical prototype in a medical training scenario.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132190837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-obscuring binocular eye tracking for wide field-of-view head-mounted-displays (doi:10.1109/VR.2015.7223443)
Michael Stengel, S. Grogorick, M. Eisemann, E. Eisemann, M. Magnor
We present a complete hardware and software solution for integrating binocular eye tracking into current state-of-the-art lens-based head-mounted displays (HMDs) without affecting the user's wide field of view of the display. The system uses robust and efficient new algorithms for calibration and pupil tracking, and allows real-time eye tracking and gaze estimation. Estimating the user's relative gaze direction opens the door to a much wider spectrum of virtual reality applications and games when using HMDs. We show a 3D-printed prototype of a low-cost HMD with eye tracking that is simple to fabricate, and discuss a variety of VR applications utilizing gaze estimation.
{"title":"Non-obscuring binocular eye tracking for wide field-of-view head-mounted-displays","authors":"Michael Stengel, S. Grogorick, M. Eisemann, E. Eisemann, M. Magnor","doi":"10.1109/VR.2015.7223443","DOIUrl":"https://doi.org/10.1109/VR.2015.7223443","url":null,"abstract":"We present a complete hardware and software solution for integrating binocular eye tracking into current state-of-the-art lens-based Head-mounted Displays (HMDs) without affecting the user's wide field-of-view off the display. The system uses robust and efficient new algorithms for calibration and pupil tracking and allows realtime eye tracking and gaze estimation. Estimating the relative gaze direction of the user opens the door to a much wider spectrum of virtual reality applications and games when using HMDs. We show a 3d-printed prototype of a low-cost HMD with eye tracking that is simple to fabricate and discuss a variety of VR applications utilizing gaze estimation.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114853732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}