"Neutrino-KAVE: An immersive visualization and fitting tool for neutrino physics education," by Elizabeth Izatt, K. Scholberg, Regis Kopper. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802062
Water Cherenkov neutrino detectors, like the existing Super-Kamiokande ("Super-K") and the planned Hyper-Kamiokande ("Hyper-K") detectors, are used to study neutrino particle physics. These detectors consist of large vessels holding many thousands of tons of water, as well as tens of thousands of photomultiplier tubes. Such detectors produce large, multi-layered datasets well-suited to immersive visualization and interaction. We have developed a novel virtual reality (VR) application, called Neutrino-KAVE, for visualizing and interacting with these data. Neutrino-KAVE displays the collocation of photon sensors and their color-coded data within a to-scale representation of the Super-K or Hyper-K detector, and provides a new visualization technique for neutrino interaction patterns. Neutrino-KAVE also provides both a mechanism for modifying aspects of the presented data set and a user interface for system control of this multifaceted application. In this paper, we describe our implementation and design choices in detail. We also report on use cases, initial reception, and future development.
{"title":"Neutrino-KAVE: An immersive visualization and fitting tool for neutrino physics education","authors":"Elizabeth Izatt, K. Scholberg, Regis Kopper","doi":"10.1109/VR.2014.6802062","DOIUrl":"https://doi.org/10.1109/VR.2014.6802062","url":null,"abstract":"Water Cherenkov neutrino detectors, like the existing Super-Kamiokande (\"Super-K\") and the planned Hyper-Kamiokande (\"Hyper-K\") detectors, are used to study neutrino particle physics. These detectors consist of large vessels holding many thousands of tons of water, as well as tens of thousands of photomultiplier tubes. Such detectors produce large and multi-layered datasets well-suited to immersive visualization and interaction. We have developed a novel virtual reality (VR) application called Neutrino-KAVE which functions as an visualization and data interaction application. Neutrino-KAVE displays the collocation of photon sensors and their color-coded data within a to-scale representation of the Super-K or Hyper-K detector, and provides a new visualization technique for neutrino interaction patterns. Neutrino-KAVE also provides both a mechanism for modifying aspects of the presented data set, and a user interface for system control of this multifaceted application. In this paper, we describe in detail our implementation and design choices. We also report on its use cases, initial reception, and future development.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127488248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Diplopia: A virtual reality game designed to help amblyopics," by J. Blaha, Manish Gupta. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802102
Virtual reality has the potential to measure and help treat many vision problems. More than 3% of the population have amblyopia, commonly known as lazy eye, a weakness and impairment of vision in one or both eyes [1]. Amblyopia often results in suppression of the information coming from the bad eye and, as a result, a loss of stereoscopic vision. It was long thought that people with amblyopia could not improve the vision in their bad eye or gain stereoscopic vision after a critical age of 10-12 years. Recent research indicates that the adult brain is more plastic with regard to suppression than previously thought [2]. Inspired by this, we have built a virtual reality game, called Diplopia, using Unity3D, which utilizes the Oculus Rift head-mounted display (HMD) and the Leap Motion controller to help people with amblyopia restore vision in their amblyopic eye.
{"title":"Diplopia: A virtual reality game designed to help amblyopics","authors":"J. Blaha, Manish Gupta","doi":"10.1109/VR.2014.6802102","DOIUrl":"https://doi.org/10.1109/VR.2014.6802102","url":null,"abstract":"Virtual reality has the potential to measure and help many vision problems. More than 3% of the population have amblyopia, commonly known as lazy eye, a weakness and impairment of vision in one or both of the eyes [1]. Amblyopia often results in a suppression of the information coming from the bad eye, and a loss of stereoscopic vision as a result. It was long thought that people with amblyopia could not improve the vision in their bad eye or gain stereoscopic vision after a critical age of 10-12 years old. Recent research indicates the adult brain is more plastic with regards to suppressiion than previously thought. [2]. Inspired by this, we have built a virtual reality game, called Diplopia, using Unity3D which utilizes the Oculus Rift head-mounted display (HMD) and the Leap Motion controller to help people with amblyopia restore vision in their amblyopic eye.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116934024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"An ungrounded tactile feedback device to portray force and torque-like interactions in virtual environments," by Ashley L. Guinan, Markus N. Montandon, Andrew J. Doxon, W. Provancher. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802106
Our lab has developed a haptic feedback device that provides ungrounded tactile feedback through the motion of actuated sliding-plate contactors. Interaction with a virtual environment is provided to the user through a device equipped with tactile feedback and six-degree-of-freedom spatial position sensing. The device is composed of three sliding-plate skin-stretch displays positioned around the handle, providing feedback to the user's palm. Our dual-handed system allows independent motion of the hands while providing feedback that creates a kinesthetic experience. We demonstrate fundamental physical interactions (mass, spring, and damper interactions), which are the building blocks of most virtual physical models. Various virtual environments are used to demonstrate physical interactions with objects.
{"title":"An ungrounded tactile feedback device to portray force and torque-like interactions in virtual environments","authors":"Ashley L. Guinan, Markus N. Montandon, Andrew J. Doxon, W. Provancher","doi":"10.1109/VR.2014.6802106","DOIUrl":"https://doi.org/10.1109/VR.2014.6802106","url":null,"abstract":"Our lab has developed a haptic feedback device to provide ungrounded tactile feedback through the motion of actuated sliding plate contactors. Interaction with a virtual environment is provided to a user through a device equipped with tactile feedback and six degree-of-freedom spatial position sensing. Our tactile feedback device is composed of three sliding plate skin stretch displays positioned around the handle, providing feedback to a user's palm. Our dual-handed tactile feedback system allows independent motion of hands, while providing feedback that creates a kinesthetic experience. We demonstrate fundamental physical interactions such as mass, spring, and damper interactions, which are the building blocks used in every virtual model. Various virtual environments are used to demonstrate physical interactions with objects.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123025405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Design and evaluation of visual feedback for virtual grasp," by Mores Prachyabrued, C. Borst. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802075
We tuned and evaluated visual feedback techniques for virtual grasping. To date, development of such feedback has been largely ad hoc, with minimal work available to guide technique selection. We considered several techniques incorporating both standard and novel aspects. In terms of impact on real hand behavior, the best techniques all directly reveal the penetrating hand configuration in some way. Subjectively, color changes were liked most.
{"title":"Design and evaluation of visual feedback for virtual grasp","authors":"Mores Prachyabrued, C. Borst","doi":"10.1109/VR.2014.6802075","DOIUrl":"https://doi.org/10.1109/VR.2014.6802075","url":null,"abstract":"We tuned and evaluated visual feedback techniques for virtual grasps. To date, development of such feedback has been largely ad-hoc, with minimal work that can guide technique selection. We considered several techniques including both standard and novel aspects. In terms of impact on real hand behavior, the best techniques all directly reveal penetrating hand configuration in some way. Subjectively, color changes are most liked.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116510194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Decoupled mapping and localization for Augmented Reality on a mobile phone," by Pierre Martin, É. Marchand, P. Houlier, Isabelle Marchal. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802069
The use of Simultaneous Localization And Mapping (SLAM) methods is becoming more and more common in Augmented Reality (AR). To meet real-time requirements and to cope with scale-factor ambiguity and the lack of absolute positioning, we propose to decouple the localization and mapping steps. We explain the benefits of this approach and how a SLAM strategy can still be used in a way that is meaningful for the end user.
{"title":"Decoupled mapping and localization for Augmented Reality on a mobile phone","authors":"Pierre Martin, É. Marchand, P. Houlier, Isabelle Marchal","doi":"10.1109/VR.2014.6802069","DOIUrl":"https://doi.org/10.1109/VR.2014.6802069","url":null,"abstract":"Using Simultaneous Localization And Mapping (SLAM) methods become more and more common in Augmented Reality (AR). To achieve real-time requirement and to cope with scale factor and the lack of absolute positioning issue, we propose to decouple the localization and the mapping step. We explain the benefits of this approach and how a SLAM strategy can still be used in a way that is meaningful for the end user.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116536931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"The effect of eye position on the view of virtual geometry," by J. A. Jones, D. Krum, M. Bolas. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802064
In this document we discuss a study that investigates the effect of eye position on the apparent location of imagery presented in an off-the-shelf head-worn display. We test a range of reasonable eye positions that may result from person-to-person variations in display placement and interpupillary distances. It was observed that the pattern of geometric distortions introduced by the display's optical system changes substantially as the eye moves from one position to the next. These visual displacements can be on the order of several degrees and increase in magnitude towards the peripheral edges of the field of view. Though many systems calibrate for interpupillary distance and optical distortions separately, this may be insufficient, as eye position influences distortion characteristics.
{"title":"The effect of eye position on the view of virtual geometry","authors":"J. A. Jones, D. Krum, M. Bolas","doi":"10.1109/VR.2014.6802064","DOIUrl":"https://doi.org/10.1109/VR.2014.6802064","url":null,"abstract":"In this document we discuss a study that investigates the effect of eye position on the apparent location of imagery presented in an off-the-shelf head worn display. We test a range of reasonable eye positions that may result from person-to-person variations in display placement and interpupillary distances. It was observed that the pattern of geometric distortions introduced by the display's optical system changes substantially as the eye moves from one position to the next. These visual displacements can be on the order of several degrees and increase in magnitude towards the peripheral edges of the field of view. Though many systems calibrate for interpupillary distance and optical distortions separately, this may be insufficient as eye position influences distortion characteristics.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132224886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Design and evaluation of Binaural auditory rendering for CAVEs," by F. Grani, F. Argelaguet, V. Gouranton, M. Badawi, R. Gaugne, S. Serafin, A. Lécuyer. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802057
We describe an experiment investigating the use of different audio rendering techniques delivered through headphones while walking inside a wide four-sided CAVE environment. In our experiment, participants had to physically walk along a virtual path while exposed to different auditory stimuli. Each subject experienced three conditions: stereo sound, binaural sound spatially congruent with the visuals, and binaural sound spatially incongruent with the visuals. Subjects rated each condition subjectively. The results showed the highest preference ratings for the congruent binaural rendering, followed by stereo rendering; as expected, spatially incongruent cues were rated significantly lower. Binaural rendering can deliver a more immersive experience and requires no specialized hardware.
{"title":"Design and evaluation of Binaural auditory rendering for CAVEs","authors":"F. Grani, F. Argelaguet, V. Gouranton, M. Badawi, R. Gaugne, S. Serafin, A. Lécuyer","doi":"10.1109/VR.2014.6802057","DOIUrl":"https://doi.org/10.1109/VR.2014.6802057","url":null,"abstract":"We describe an experiment whose goal is to investigate the usage of different audio rendering techniques delivered through headphones while walking inside a wide four-side CAVE environment. In our experiment, participants had to physically walked along a virtual path exposed to different auditory stimuli. Each subject was exposed to three conditions: Stereo, Binaural sound spatially congruent with visual and binaural sound spatially incongruent with visuals and had to rate subjectively each. The results of the experiment showed increased preference ratings for the binaural audio rendering, followed by stereo rendering. As expected incongruent spatial cues were ranked significantly lower. Binaural rendering can deliver an increased immersive experience and do no require specialized hardware.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"7 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131879780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"The design of a retinal resolution fully immersive VR display," by Anne Kenyon, J. Rosendale, Samuel G. Fulcomer, D. Laidlaw. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802065
We present the design of Brown University's new Cave, which is expected to be fully operational in February 2014. With one arc-minute resolution, 3.8π steradians of visual surround, head-tracked stereo, and an almost seamless screen, this Cave offers advances to the state-of-the-art virtual reality experience. This improvement is achieved with the installation of 69 high-resolution long-throw projectors, a cylindrical screen with a conical ceiling, and a 135-square-foot rear-projection floor. Though Caves have been around for over 20 years, they have remained impractical for many potential uses due to their limited resolution, brightness, and overall immersion. Brown's new Cave aims to bridge this gap.
{"title":"The design of a retinal resolution fully immersive VR display","authors":"Anne Kenyon, J. Rosendale, Samuel G. Fulcomer, D. Laidlaw","doi":"10.1109/VR.2014.6802065","DOIUrl":"https://doi.org/10.1109/VR.2014.6802065","url":null,"abstract":"We present the design of Brown University's new Cave, which is expected to be fully operational in February 2014. With one arc-minute resolution, 3.8 π steradians of visual surround, head-tracked stereo, and an almost seamless screen, this Cave offers advances to the state-of-the-art virtual reality experience. This improvement is achieved with the installation of 69 high-resolution long throw projectors, a cylindrical screen with conical ceiling, and a 135 square foot rear-projection floor. Though Caves have been around for over 20 years, they have remained impractical for many potential uses due to their limited resolution, brightness, and overall immersion. Brown's new Cave aims to bridge this gap.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131341506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"AR jigsaw puzzle with RGB-D based detection of texture-less pieces," by J. P. Lima, J. M. Teixeira, V. Teichrieb. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802109
This demo presents an AR application that helps the user to solve a jigsaw puzzle that consists of non-textured pieces with a discriminative shape. The pieces are detected, their poses are estimated, and the ones that are correctly assembled are highlighted. In order to detect the pieces, the Depth-Assisted Rectification of Contours (DARC) method is used, which performs detection and pose estimation of texture-less planar objects using an RGB-D camera.
{"title":"AR jigsaw puzzle with RGB-D based detection of texture-less pieces","authors":"J. P. Lima, J. M. Teixeira, V. Teichrieb","doi":"10.1109/VR.2014.6802109","DOIUrl":"https://doi.org/10.1109/VR.2014.6802109","url":null,"abstract":"This demo presents an AR application that helps the user to solve a jigsaw puzzle that consists of non-textured pieces with a discriminative shape. The pieces are detected, their poses are estimated and the ones that are correctly assembled are highlighted. In order to detect the pieces, the Depth-Assisted Rectification of Contours (DARC) method is used, which performs detection and pose estimation of texture-less planar objects using an RGB-D camera.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123849886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Global illumination for Augmented Reality on mobile phones," by Michael Csongei, Liem Hoang, C. Sandor, Yong-Beom Lee. 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802055
The goal of our work is to create highly realistic graphics for Augmented Reality on mobile phones. One of the greatest challenges is to provide realistic lighting of the virtual objects that matches the real-world lighting. This becomes even more difficult given the limited capabilities of mobile phone GPUs. Our approach differs from previous attempts in the following important aspects: (1) most have relied on rasterizer approaches, while our approach is based on raytracing; (2) we perform distributed rendering in order to address the limited mobile GPU capabilities; (3) we use image-based lighting from a pre-captured panorama to incorporate real-world lighting. We utilize two markers: one for object tracking and one for registering the panorama. Our initial results are encouraging, as the visual quality resembles real objects as well as the reference renderings that were created offline. However, we still need to validate our approach in human subject studies, especially with regard to the trade-off between the latency of remote rendering and visual quality.
{"title":"Global illumination for Augmented Reality on mobile phones","authors":"Michael Csongei, Liem Hoang, C. Sandor, Yong-Beom Lee","doi":"10.1109/VR.2014.6802055","DOIUrl":"https://doi.org/10.1109/VR.2014.6802055","url":null,"abstract":"The goal of our work is to create highly realistic graphics for Augmented Reality on mobile phones. One of the greatest challenges for this is to provide realistic lighting of the virtual objects that matches the real world lighting. This becomes even more difficult with the limited capabilities of mobile phone GPUs. Our approach differs in the following important aspects compared to previous attempts: (1) most have relied on rasterizer approaches, while our approach is based on raytracing; (2) we perform distributed rendering in order to address the limited mobile GPU capabilities; (3) we use image-based lighting from a pre-captured panorama to incorporate real world lighting. We utilize two markers: one for object tracking and one for registering the panorama. Our initial results are encouraging, as the visual quality resembles real objects and also the reference renderings which were created offline. However, we still need to validate our approach in human subject studies, especially with regards to the trade-off between latency of remote rendering and visual quality.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124974161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}