Jonathan Mercier-Ganady, F. Lotte, E. Loup-Escande, M. Marchal, A. Lécuyer
Imagine facing a mirror and seeing, at the same time, both your real body and a virtual display of your brain in activity, perfectly superimposed on your reflection “inside your real skull”. In this paper, we introduce a novel augmented reality paradigm called the “Mind-Mirror”, which enables the experience of seeing “through your own head”, visualizing your brain “in action and in situ”. Our approach relies on a semi-transparent mirror positioned in front of a computer screen. A virtual brain is displayed on the screen and automatically follows the user's head movements using an optical face-tracking system. The brain activity is extracted and processed in real time from an electroencephalography (EEG) cap worn by the user. A rear view is also provided by an additional webcam filming the back of the user's head. EEG classification techniques make it possible to test a Neurofeedback scenario in which the user trains and progressively learns how to control different mental states, such as “concentrated” versus “relaxed”. A user study comparing our approach to a standard Neurofeedback visualization showed that the Mind-Mirror could be used successfully and that participants particularly appreciated its innovation and originality. We believe that, in addition to applications in Neurofeedback and Brain-Computer Interfaces, the Mind-Mirror could also serve as a novel visualization tool for education, training or entertainment.
{"title":"The Mind-Mirror: See your brain in action in your head using EEG and augmented reality","authors":"Jonathan Mercier-Ganady, F. Lotte, E. Loup-Escande, M. Marchal, A. Lécuyer","doi":"10.1109/VR.2014.6802047","DOIUrl":"https://doi.org/10.1109/VR.2014.6802047","url":null,"abstract":"Imagine you are facing a mirror, seeing at the same time both your real body and a virtual display of your brain in activity and perfectly superimposed to your real image “inside your real skull”. In this paper, we introduce a novel augmented reality paradigm called “Mind-Mirror” which enables the experience of seeing “through your own head”, visualizing your brain “in action and in situ”. Our approach relies on the use of a semi-transparent mirror positioned in front of a computer screen. A virtual brain is displayed on screen and automatically follows the head movements using an optical face-tracking system. The brain activity is extracted and processed in real-time with the help of an electroencephalography cap (EEG) worn by the user. A rear view is also proposed thanks to an additional webcam recording the rear of the user's head. The use of EEG classification techniques enables to test a Neurofeedback scenario in which the user can train and progressively learn how to control different mental states, such as “concentrated” versus “relaxed”. The results of a user study comparing a standard visualization used in Neurofeedback to our approach showed that the Mind-Mirror could be successfully used and that the participants have particularly appreciated its innovation and originality. We believe that, in addition to applications in Neurofeedback and Brain-Computer Interfaces, the Mind-Mirror could also be used as a novel visualization tool for education, training or entertainment applications.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115316356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Markus Tatzgern, R. Grasset, Denis Kalkofen, D. Schmalstieg
Augmented Reality (AR) applications require knowledge about the real-world environment in which they are used. This knowledge is often gathered while developing the AR application and stored for future uses of the application. Consequently, changes to the real world lead to a mismatch between the previously recorded data and the real world. New capturing techniques based on dense Simultaneous Localization and Mapping (SLAM) not only allow users to capture real-world scenes at run-time, but also enable them to capture changes in the world. However, instead of using previously recorded and prepared scenes, users must interact with an unprepared environment. In this paper, we present a set of new interaction techniques that support users in handling captured real-world environments. The techniques present virtual viewpoints of the scene based on a scene analysis and provide natural transitions between the AR view and the virtual viewpoints. We demonstrate our approach with a SLAM-based prototype that allows us to capture a real-world scene, and describe example applications of our system.
{"title":"Transitional Augmented Reality navigation for live captured scenes","authors":"Markus Tatzgern, R. Grasset, Denis Kalkofen, D. Schmalstieg","doi":"10.1109/VR.2014.6802045","DOIUrl":"https://doi.org/10.1109/VR.2014.6802045","url":null,"abstract":"Augmented Reality (AR) applications require knowledge about the real world environment in which they are used. This knowledge is often gathered while developing the AR application and stored for future uses of the application. Consequently, changes to the real world lead to a mismatch between the previously recorded data and the real world. New capturing techniques based on dense Simultaneous Localization and Mapping (SLAM) not only allow users to capture real world scenes at run-time, but also enables them to capture changes of the world. However, instead of using previously recorded and prepared scenes, users must interact with an unprepared environment. In this paper, we present a set of new interaction techniques that support users in handling captured real world environments. The techniques present virtual viewpoints of the scene based on a scene analysis and provide natural transitions between the AR view and virtual viewpoints. We demonstrate our approach with a SLAM based prototype that allows us to capture a real world scene and describe example applications of our system.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115068544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rozenn Bouville Berthelot, Thomas Lopez, Florian Nouviale, V. Gouranton, B. Arnaldi
Summary form only given. The CORVETTE project aims at producing significant innovations in the field of collaborative virtual training. For that purpose, CORVETTE combines various technologies to enhance effective collaboration between users and virtual humans performing a common task. First, CORVETTE proposes a model of collaborative interaction in virtual environments that allows actors to collaborate efficiently as a team, whether they are controlled by a user or by a virtual human [4]. Moreover, the environment is simulated in real time and at real scale, and uses physics as well as physically based humanoids to improve the realism of the training. Second, building on the interaction model, we defined a protocol for exchanging avatars [5, 3, 2]: an actor can dynamically exchange control of his/her avatar with one controlled by another user or by a virtual agent. To improve the exchange protocol, we also designed a new knowledge model embedded in each avatar, which allows users and virtual humans to retrieve knowledge previously gathered by an avatar following an exchange. This preservation of knowledge is especially crucial for teamwork. Finally, we handle verbal communication between users and virtual humans with speech recognition and synthesis: actors' knowledge is enhanced through dialogue and used for decision-making and conversation [1].
{"title":"CORVETTE: Collaborative environment for technical training and experiment","authors":"Rozenn Bouville Berthelot, Thomas Lopez, Florian Nouviale, V. Gouranton, B. Arnaldi","doi":"10.1109/VR.2014.6802093","DOIUrl":"https://doi.org/10.1109/VR.2014.6802093","url":null,"abstract":"Summary form only given. The CORVETTE project aims at producing significant innovations in the field of collaborative virtual training. For that purpose, CORVETTE combines various technologies to enhance the effective collaboration between users and virtual humans performing a common task. First, CORVETTE proposes a model of collaborative interaction in virtual environments allowing actors to efficiently collaborate as a team whether they are controlled by a user or by a virtual human [4]. Moreover, the environment is simulated in real-time, at real scale and is using physics as well as physically-based humanoids to improve the realism of the training. Second, thanks to the interaction model, we defined a protocol of exchange of avatars [5, 3, 2]. Thus, an actor can dynamically exchange the control of his/her avatar with the one controlled by another user or by a virtual agent. Moreover, to improve the exchange protocol, we designed a new knowledge model embedded in each avatar. It allows users and virtual humans to retrieve knowledge previously gathered by an avatar following an exchange. The preservation of knowledge is indeed especially crucial for teamwork. Finally, we handle verbal communication between users and virtual humans with speech recognition and synthesis. Actors' knowledge is enhanced through dialogue and used for decision-making and conversation [1].","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116858361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. In this virtual-reality art installation - a mathematical play space - the viewer-participant creates an immersive visual and sonic experience. It is based on the mathematical and physical simulation of over one million particles with momentum and elastic reflection in an environment with gravity. The final scene has a realistic rendering of water with reflections and lighting based on spherical harmonics. Sound components are triggered and modified by the user and particle interaction. The application was originally developed using a CUDA particle system running within Thumb, a virtual-reality framework developed by Robert Kooima. It is now being ported to CalVR, developed by researchers at the California Institute for Telecommunications and Information Technology (Calit2) Qualcomm Institute at University of California, San Diego.
{"title":"Particle dreams in spherical harmonics","authors":"D. Sandin, Robert Kooima, L. Spiegel, T. DeFanti","doi":"10.1109/VR.2014.6802098","DOIUrl":"https://doi.org/10.1109/VR.2014.6802098","url":null,"abstract":"Summary form only given. In this virtual-reality art installation - a mathematical play space - the viewer-participant creates an immersive visual and sonic experience. It is based on the mathematical and physical simulation of over one million particles with momentum and elastic reflection in an environment with gravity. The final scene has a realistic rendering of water with reflections and lighting based on spherical harmonics. Sound components are triggered and modified by the user and particle interaction. The application was originally developed using a CUDA particle system running within Thumb, a virtual-reality framework developed by Robert Kooima. It is now being ported to CalVR, developed by researchers at the California Institute for Telecommunications and Information Technology (Calit2) Qualcomm Institute at University of California, San Diego.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116510058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Juan Sebastián Casallas, J. Oliver, Jonathan W. Kelly, F. Mérienne, S. Garbaya
Selection of moving targets is a common, yet complex task in human-computer interaction (HCI) and virtual reality (VR). Predicting user intention may be beneficial to address the challenges inherent in interaction techniques for moving-target selection. This article extends previous models by integrating relative head-target and hand-target features to predict intended moving targets. The features are calculated in a time window ending at roughly two-thirds of the total target selection time and evaluated using decision trees. With two targets, this model is able to predict user choice with up to ~ 72% accuracy on general moving-target selection tasks and up to ~ 78% by also including task-related target properties.
{"title":"Using relative head and hand-target features to predict intention in 3D moving-target selection","authors":"Juan Sebastián Casallas, J. Oliver, Jonathan W. Kelly, F. Mérienne, S. Garbaya","doi":"10.1109/VR.2014.6802050","DOIUrl":"https://doi.org/10.1109/VR.2014.6802050","url":null,"abstract":"Selection of moving targets is a common, yet complex task in human-computer interaction (HCI) and virtual reality (VR). Predicting user intention may be beneficial to address the challenges inherent in interaction techniques for moving-target selection. This article extends previous models by integrating relative head-target and hand-target features to predict intended moving targets. The features are calculated in a time window ending at roughly two-thirds of the total target selection time and evaluated using decision trees. With two targets, this model is able to predict user choice with up to ~ 72% accuracy on general moving-target selection tasks and up to ~ 78% by also including task-related target properties.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114800682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alvaro Espitia-Contreras, Pedro Sanchez-Caiman, A. Uribe-Quevedo
Anthropometry is the science that studies human body dimensions; these measurements are acquired using specialized devices and techniques, and the results are analyzed statistically. Anthropometry plays an important role in the industrial design process in areas such as clothing, ergonomics and biomechanics, where statistical data about median body measurements help optimize product design. Recently, advances in image processing and hardware have enabled applications that let a user preview wardrobes, costumes, games or advergames, and even different types of environments, according to the user's measurements. This project proposes the development of a complementary tool for acquiring anthropometric data to characterize users at the Militar Nueva Granada University in Colombia, South America, using Microsoft's Kinect skeletal tracking, in order to develop and assess the design of workspaces in several areas such as laboratories.
{"title":"Development of a Kinect-based anthropometric measurement application","authors":"Alvaro Espitia-Contreras, Pedro Sanchez-Caiman, A. Uribe-Quevedo","doi":"10.1109/VR.2014.6802056","DOIUrl":"https://doi.org/10.1109/VR.2014.6802056","url":null,"abstract":"Anthropometry is known as the science that studies the human body dimensions, this measurements are acquire using special devices and techniques whose results are analyzed through statistics. Anthropometry plays an important role within the industrial design process in areas such as clothing, ergonomics, and biomechanics, where statistical data about body medians allow optimizing product design. Recently, image processing and hardware advances are allowing the development of applications that allow an user to preview wardrobe, costumes, games or advergames and even different types of environments according to the user measurements. This project proposes the development of a complimentary tool for acquiring user anthropometric data for characterizing users in the Mil. Nueva Granada University in Colombia, South America using Microsoft's Kinect skeletal tracking for developing and assess the design of workspaces in several areas such as laboratories.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124035875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Museum spaces are ideal settings for interactive experiences that combine entertainment, education and innovative technologies. LunAR Park is an augmented reality application designed for a planetarium setting that uses existing lunar exhibits to immerse the visitor in an enhanced world of interactive lunar exploration referencing amusement park experiences. The application was originally presented as part of Moon Lust, an exhibition at the Adler Planetarium and Astronomical Museum in Chicago that explored global interest in lunar exploration and habitation through interactive technologies. The content of LunAR Park was inspired by pre-space-age depictions of the lunar landscape at the original Luna Park in Coney Island, the advancement of lunar expeditions of the past century, and romantic notions of future colonization of the moon. LunAR Park transforms four lunar-themed exhibits into a virtual amusement park that brings the surface of the moon to life. Users interact with the augmented environment through iPads and navigate the virtual landscape by physically traversing the space around the four exhibits.
{"title":"LunAR Park: Augmented reality, retro-futurism & a ride to the moon","authors":"Alexander Betts, B. L. Silva, P. Oikonomou","doi":"10.1109/VR.2014.6802092","DOIUrl":"https://doi.org/10.1109/VR.2014.6802092","url":null,"abstract":"Museum spaces are ideal settings for interactive experiences that combine entertainment, education and innovative technologies. LunAR Park is an augmented reality application designed for a planetarium setting that utilizes existing lunar exhibits to immerse the visitor in an enhanced world of interactive lunar exploration referencing amusement park experiences. The application was originally presented as part of Moon Lust, an exhibition at the Adler Planetarium and Astronomical Museum in Chicago that explored global interests on lunar exploration and habitation through interactive technologies. The content of LunAR Park was inspired by pre-space age depictions of the lunar landscape at the original Luna Park in Coney Island, the advancement of lunar expeditions of the past century, and the romantic notions of future colonization of the moon. LunAR Park transforms four lunar themed exhibits into a virtual amusement park that brings the surface of the moon to life. The users interact with the augmented environment through iPads and navigate the virtual landscape by physically traversing the space around the four exhibits.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117311757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Summary form only given. Recently a number of affordable game controllers have been adopted by virtual reality (VR) researchers [1][4]. We present a video of a VR demo called TurboTuscany in which we employ such controllers; the demo combines a Kinect-controlled full-body avatar with the Oculus Rift head-mounted display [2]. We implemented three positional head-tracking schemes that use Kinect, Razer Hydra, and PlayStation (PS) Move controllers. In the demo, the Kinect-tracked avatar can be used to climb ladders, play with soccer balls, and otherwise move or interact with physically simulated objects. The PS Move or Razer Hydra controller is used to control locomotion and to select and manipulate objects. Our subjective experience is that the best head-tracking immersion is achieved by using Kinect together with PS Move, as the latter is more accurate and responsive while having a large tracking volume. We also noticed that the Oculus Rift's orientation tracking has less latency than any of the positional trackers we used, while Razer Hydra has less latency than PS Move, and Kinect has the largest latency. Besides positional tracking, our demo uses these three trackers to correct the yaw drift of the Oculus Rift. TurboTuscany was developed using our RUIS toolkit, a software platform for VR application development [3]. The demo and the RUIS toolkit can be downloaded online.
{"title":"Full body interaction in virtual reality with affordable hardware","authors":"Tuukka M. Takala, Mikael Matveinen","doi":"10.1109/VR.2014.6802099","DOIUrl":"https://doi.org/10.1109/VR.2014.6802099","url":null,"abstract":"Summary form only given. Recently a number of affordable game controllers have been adopted by virtual reality (VR) researchers [1][4]. We present a video1 of a VR demo called TurboTuscany, where we employ such controllers; our demo combines a Kinect controlled full body avatar with Oculus Rift head-mounted-display [2]. We implemented three positional head tracking schemes that use Kinect, Razer Hydra, and PlayStation (PS) Move controllers. In the demo the Kinect tracked avatar can be used to climb ladders, play with soccer balls, and otherwise move or interact with physically simulated objects. PS Move or Razer Hydra controller is used to control locomotion, and for selecting and manipulating objects. Our subjective experience is that the best head tracking immersion is achieved by using Kinect together with PS Move, as the latter is more accurate and responsive while having a large tracking volume. We also noticed that Oculus Rift's orientation tracking has less latency than any of the positional trackers that we used, while Razer Hydra has less latency than PS Move, and Kinect has the largest latency. Besides positional tracking, our demo uses these three trackers to correct the yaw drift of Oculus Rift. TurboTuscany was developed by using our RUIS toolkit, which is a software platform for VR application development [3]. The demo and RUIS toolkit can be downloaded online2.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129153985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Chapoulie, R. Guerchouche, Pierre-David Petit, G. Chaurasia, P. Robert, G. Drettakis
We present a novel VR solution for Reminiscence Therapy (RT), developed jointly by a group of memory clinicians and computer scientists. RT involves the discussion of past activities, events or experiences with others, often with the aid of tangible props which are familiar items from the past; it is a popular intervention in dementia care. We introduce an immersive VR system designed for RT, which allows easy presentation of familiar environments. In particular, our system supports highly-realistic Image-Based Rendering in an immersive setting. To evaluate the effectiveness and utility of our system for RT, we perform a study with healthy elderly participants to test if our VR system can help with the generation of autobiographical memories. We adapt a verbal Autobiographical Fluency protocol to our VR context, in which elderly participants are asked to generate memories based on images they are shown. We compare the use of our image-based system for an unknown and a familiar environment. The results of our study show that the number of memories generated for a familiar environment is higher than that for an unknown environment using our system. This indicates that IBR can convey familiarity of a given scene, which is an essential requirement for the use of VR in RT. Our results also show that our system is as effective as traditional RT protocols, while acceptability and motivation scores demonstrate that our system is well tolerated by elderly participants.
{"title":"Reminiscence Therapy using Image-Based Rendering in VR","authors":"E. Chapoulie, R. Guerchouche, Pierre-David Petit, G. Chaurasia, P. Robert, G. Drettakis","doi":"10.1109/VR.2014.6802049","DOIUrl":"https://doi.org/10.1109/VR.2014.6802049","url":null,"abstract":"We present a novel VR solution for Reminiscence Therapy (RT), developed jointly by a group of memory clinicians and computer scientists. RT involves the discussion of past activities, events or experiences with others, often with the aid of tangible props which are familiar items from the past; it is a popular intervention in dementia care. We introduce an immersive VR system designed for RT, which allows easy presentation of familiar environments. In particular, our system supports highly-realistic Image-Based Rendering in an immersive setting. To evaluate the effectiveness and utility of our system for RT, we perform a study with healthy elderly participants to test if our VR system can help with the generation of autobiographical memories. We adapt a verbal Autobiographical Fluency protocol to our VR context, in which elderly participants are asked to generate memories based on images they are shown. We compare the use of our image-based system for an unknown and a familiar environment. The results of our study show that the number of memories generated for a familiar environment is higher than that for an unknown environment using our system. This indicates that IBR can convey familiarity of a given scene, which is an essential requirement for the use of VR in RT. Our results also show that our system is as effective as traditional RT protocols, while acceptability and motivation scores demonstrate that our system is well tolerated by elderly participants.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122504262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Tredinnick, James Vanderheiden, Clayton Suplinski, J. Madsen
Neutrinos are nearly massless, weakly interacting particles that come from a variety of sources including the sun, radioactive decay and cosmic rays. Neutrinos are unique cosmic messengers that provide new ways to explore the Universe as well as opportunities to better understand the basic building blocks of matter. IceCube, the largest operating neutrino detector in the world, is located in the ice sheet at the South Pole. This paper describes an interactive VR application for visualizing IceCube's neutrino data within a C6 CAVE system. The dynamic display of data in a true-scale recreation of the light-sensor array allows events to be viewed from arbitrary locations, both forward and backward in time. Initial feedback from user experiences within the system has been positive, showing promise both for further insight when analyzing data and for physics and neutrino education.
{"title":"CAVE visualization of the IceCube neutrino detector","authors":"R. Tredinnick, James Vanderheiden, Clayton Suplinski, J. Madsen","doi":"10.1109/VR.2014.6802079","DOIUrl":"https://doi.org/10.1109/VR.2014.6802079","url":null,"abstract":"Neutrinos are nearly massless, weakly interacting particles that come from a variety of sources including the sun, radioactive decay and cosmic rays. Neutrinos are unique cosmic messengers that provide new ways to explore the Universe as well as opportunities to better understand the basic building blocks of matter. IceCube, the largest operating neutrino detector in the world, is located in the ice sheet at the South Pole. This paper describes an interactive VR application for visualization of the IceCube's neutrino data within a C6 CAVE system. The dynamic display of data in a true scale recreation of the light sensor system allows events to be viewed from arbitrary locations both forward and backward in time. Initial feedback from user experiences within the system have been positive, showing promise for both further insight into analyzing data as well as opportunities for physics and neutrino education.","PeriodicalId":408559,"journal":{"name":"2014 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130951302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}