
Latest publications from 2014 IEEE Virtual Reality (VR)

The Mind-Mirror: See your brain in action in your head using EEG and augmented reality
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802047
Jonathan Mercier-Ganady, F. Lotte, E. Loup-Escande, M. Marchal, A. Lécuyer
Imagine you are facing a mirror, seeing at the same time both your real body and a virtual display of your brain in activity, perfectly superimposed on your real image “inside your real skull”. In this paper, we introduce a novel augmented reality paradigm called “Mind-Mirror” which enables the experience of seeing “through your own head”, visualizing your brain “in action and in situ”. Our approach relies on a semi-transparent mirror positioned in front of a computer screen. A virtual brain is displayed on the screen and automatically follows the head movements using an optical face-tracking system. The brain activity is extracted and processed in real-time with the help of an electroencephalography (EEG) cap worn by the user. A rear view is also provided by an additional webcam recording the back of the user's head. The use of EEG classification techniques makes it possible to test a Neurofeedback scenario in which the user can train and progressively learn how to control different mental states, such as “concentrated” versus “relaxed”. The results of a user study comparing a standard Neurofeedback visualization to our approach showed that the Mind-Mirror could be used successfully and that participants particularly appreciated its innovation and originality. We believe that, in addition to applications in Neurofeedback and Brain-Computer Interfaces, the Mind-Mirror could also be used as a novel visualization tool for education, training or entertainment applications.
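The abstract does not detail the classification step, so the following is only a minimal sketch of a typical EEG Neurofeedback loop of this kind: band power features feed a linear classifier that labels each window “relaxed” or “concentrated”. The sampling rate, frequency bands, window length, and choice of classifier are all assumptions, not details from the paper.

```python
# Illustrative sketch (not the authors' code): classify "relaxed" vs
# "concentrated" from EEG band power, as in a basic Neurofeedback loop.
# Assumptions: 250 Hz sampling, alpha (8-12 Hz) and beta (13-30 Hz)
# band power as features, and a linear discriminant classifier.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)

def band_power(window, lo, hi):
    """Mean power of one EEG channel in the [lo, hi] Hz band."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def features(window):
    # Alpha power tends to rise when relaxed; beta when concentrating.
    return [band_power(window, 8, 12), band_power(window, 13, 30)]

# Training: 2-second calibration windows labeled 0 = relaxed, 1 = concentrated.
rng = np.random.default_rng(0)
train_windows = rng.standard_normal((40, FS * 2))  # placeholder calibration data
labels = np.repeat([0, 1], 20)
clf = LinearDiscriminantAnalysis().fit([features(w) for w in train_windows], labels)

# Online use: classify each new window to drive the brain visualization.
new_window = rng.standard_normal(FS * 2)
state = clf.predict([features(new_window)])[0]
print("concentrated" if state else "relaxed")
```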
Cited by: 57
Transitional Augmented Reality navigation for live captured scenes
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802045
Markus Tatzgern, R. Grasset, Denis Kalkofen, D. Schmalstieg
Augmented Reality (AR) applications require knowledge about the real-world environment in which they are used. This knowledge is often gathered while developing the AR application and stored for future uses of the application. Consequently, changes to the real world lead to a mismatch between the previously recorded data and the real world. New capturing techniques based on dense Simultaneous Localization and Mapping (SLAM) not only allow users to capture real-world scenes at run-time, but also enable them to capture changes in the world. However, instead of using previously recorded and prepared scenes, users must interact with an unprepared environment. In this paper, we present a set of new interaction techniques that support users in handling captured real-world environments. The techniques present virtual viewpoints of the scene based on a scene analysis and provide natural transitions between the AR view and virtual viewpoints. We demonstrate our approach with a SLAM-based prototype that allows us to capture a real-world scene, and describe example applications of our system.
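The key technical ingredient here is the transition between the live AR view and a virtual viewpoint. As a rough illustration of how such a transition can be implemented (an assumption on our part, not the paper's method), the camera pose can be blended by interpolating position linearly and orientation by quaternion slerp, with an ease-in/ease-out curve:

```python
# Minimal sketch (assumed, not the paper's implementation): blend the camera
# smoothly from the live AR pose to a virtual viewpoint.
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:              # take the short arc
        q1, dot = -q1, -dot
    if dot > 0.9995:           # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def transition_pose(ar_pos, ar_quat, vp_pos, vp_quat, t):
    """Camera pose at transition parameter t in [0, 1]."""
    t = 3 * t**2 - 2 * t**3   # ease-in/ease-out so the move feels natural
    pos = (1 - t) * ar_pos + t * vp_pos
    return pos, slerp(ar_quat, vp_quat, t)

# Example: halfway through a transition from head height to an overhead view.
pos, quat = transition_pose(
    np.array([0.0, 1.6, 0.0]), np.array([0.0, 0.0, 0.0, 1.0]),
    np.array([0.0, 5.0, 2.0]), np.array([0.7071, 0.0, 0.0, 0.7071]),
    0.5)
print(pos, quat)
```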
Cited by: 24
CORVETTE: Collaborative environment for technical training and experiment
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802093
Rozenn Bouville Berthelot, Thomas Lopez, Florian Nouviale, V. Gouranton, B. Arnaldi
Summary form only given. The CORVETTE project aims at producing significant innovations in the field of collaborative virtual training. For that purpose, CORVETTE combines various technologies to enhance effective collaboration between users and virtual humans performing a common task. First, CORVETTE proposes a model of collaborative interaction in virtual environments allowing actors to efficiently collaborate as a team whether they are controlled by a user or by a virtual human [4]. Moreover, the environment is simulated in real-time, at real scale, and uses physics as well as physically-based humanoids to improve the realism of the training. Second, thanks to the interaction model, we defined a protocol for exchanging avatars [5, 3, 2]. Thus, an actor can dynamically exchange control of his/her avatar with one controlled by another user or by a virtual agent. Moreover, to improve the exchange protocol, we designed a new knowledge model embedded in each avatar. It allows users and virtual humans to retrieve knowledge previously gathered by an avatar following an exchange. The preservation of knowledge is indeed especially crucial for teamwork. Finally, we handle verbal communication between users and virtual humans with speech recognition and synthesis. Actors' knowledge is enhanced through dialogue and used for decision-making and conversation [1].
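As a toy illustration of the avatar-exchange idea (the class and function names below are hypothetical, not the project's API): control of an avatar can pass between a user and a virtual agent, while the knowledge embedded in the avatar stays with the avatar.

```python
# Hypothetical sketch of the exchange protocol described above: who controls
# an avatar can change at run-time, but the knowledge the avatar has gathered
# is embedded in the avatar itself and so survives the exchange.
from dataclasses import dataclass, field

@dataclass
class Avatar:
    name: str
    controller: str                                 # "user" or "virtual agent"
    knowledge: dict = field(default_factory=dict)   # persists across exchanges

    def observe(self, fact: str, value) -> None:
        self.knowledge[fact] = value

def exchange_control(a: Avatar, b: Avatar) -> None:
    """Swap who controls each avatar; embedded knowledge does not move."""
    a.controller, b.controller = b.controller, a.controller

technician = Avatar("technician", controller="user")
assistant = Avatar("assistant", controller="virtual agent")
technician.observe("valve_3", "already closed")

exchange_control(technician, assistant)
# The virtual agent now drives "technician" and can query what the avatar
# learned while the user was controlling it.
print(technician.controller, technician.knowledge["valve_3"])
```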
Cited by: 1
Particle dreams in spherical harmonics
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802098
D. Sandin, Robert Kooima, L. Spiegel, T. DeFanti
Summary form only given. In this virtual-reality art installation - a mathematical play space - the viewer-participant creates an immersive visual and sonic experience. It is based on the mathematical and physical simulation of over one million particles with momentum and elastic reflection in an environment with gravity. The final scene has a realistic rendering of water with reflections and lighting based on spherical harmonics. Sound components are triggered and modified by the user and particle interaction. The application was originally developed using a CUDA particle system running within Thumb, a virtual-reality framework developed by Robert Kooima. It is now being ported to CalVR, developed by researchers at the California Institute for Telecommunications and Information Technology (Calit2) Qualcomm Institute at University of California, San Diego.
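The abstract notes the simulation originally ran as a CUDA particle system; the per-frame update it describes (gravity acting on particle momentum, elastic reflection at the environment's boundaries) can be sketched in a few lines. The box size, timestep, and initial conditions below are illustrative, not taken from the installation.

```python
# Sketch of the physics step described above: gravity integration plus
# elastic reflection of each particle at the walls of a bounding box.
import numpy as np

N, DT, G = 1_000_000, 1.0 / 60.0, np.array([0.0, -9.81, 0.0])
BOX_MIN, BOX_MAX = -10.0, 10.0

rng = np.random.default_rng(1)
pos = rng.uniform(BOX_MIN, BOX_MAX, (N, 3))
vel = rng.standard_normal((N, 3))

def step(pos, vel):
    vel += G * DT                        # gravity changes momentum
    pos += vel * DT                      # integrate position
    # Elastic reflection: flip the velocity component at any wall hit.
    hit = (pos < BOX_MIN) | (pos > BOX_MAX)
    vel[hit] *= -1.0
    pos[:] = np.clip(pos, BOX_MIN, BOX_MAX)
    return pos, vel

for _ in range(3):                       # a few simulation frames
    pos, vel = step(pos, vel)
print(pos[0], vel[0])
```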
Cited by: 0
Using relative head and hand-target features to predict intention in 3D moving-target selection
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802050
Juan Sebastián Casallas, J. Oliver, Jonathan W. Kelly, F. Mérienne, S. Garbaya
Selection of moving targets is a common, yet complex task in human-computer interaction (HCI) and virtual reality (VR). Predicting user intention may be beneficial to address the challenges inherent in interaction techniques for moving-target selection. This article extends previous models by integrating relative head-target and hand-target features to predict intended moving targets. The features are calculated in a time window ending at roughly two-thirds of the total target selection time and evaluated using decision trees. With two targets, this model is able to predict user choice with up to ~72% accuracy on general moving-target selection tasks and up to ~78% by also including task-related target properties.
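A minimal sketch of the prediction scheme as described: features relating head and hand to each candidate target, gathered over the early part of the movement, feed a decision tree that predicts which target the user intends. The specific feature set, labels, and training data below are placeholders, not the paper's.

```python
# Illustrative sketch (placeholder features and data, not the study's):
# decision-tree prediction of the intended target out of two moving targets.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

def trial_features():
    # Per trial: [head-target-0 angle, head-target-1 angle,
    #             hand-target-0 distance, hand-target-1 distance],
    # averaged over a window ending at ~2/3 of the selection time.
    return rng.uniform(0.0, 1.0, 4)

X = np.array([trial_features() for _ in range(200)])  # placeholder trials
y = (X[:, 2] > X[:, 3]).astype(int)  # placeholder rule: nearer hand wins

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("predicted target:", clf.predict([trial_features()])[0])
```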
Cited by: 4
Development of a Kinect-based anthropometric measurement application
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802056
Alvaro Espitia-Contreras, Pedro Sanchez-Caiman, A. Uribe-Quevedo
Anthropometry is the science that studies human body dimensions; these measurements are acquired using special devices and techniques, and the results are analyzed through statistics. Anthropometry plays an important role within the industrial design process in areas such as clothing, ergonomics, and biomechanics, where statistical data about median body dimensions allow optimizing product design. Recently, advances in image processing and hardware are enabling applications that let a user preview wardrobes, costumes, games or advergames, and even different types of environments according to the user's measurements. This project proposes the development of a complementary tool for acquiring anthropometric data to characterize users at the Mil. Nueva Granada University in Colombia, South America, using Microsoft's Kinect skeletal tracking to develop and assess the design of workspaces in several areas such as laboratories.
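Although the abstract gives no implementation details, the core measurement step follows directly from skeletal tracking: once Kinect reports 3D joint positions in meters, a body segment length is the Euclidean distance between its two joints. The joint names and coordinates below are illustrative.

```python
# Minimal sketch of the measurement idea (not the project's code): body
# segment lengths from one frame of Kinect skeletal tracking.
import numpy as np

# One tracked skeleton frame: joint name -> 3D position in meters.
joints = {
    "shoulder_left": np.array([-0.18, 1.42, 2.10]),
    "elbow_left":    np.array([-0.22, 1.15, 2.12]),
    "wrist_left":    np.array([-0.24, 0.90, 2.15]),
}

def segment_length(a: str, b: str) -> float:
    return float(np.linalg.norm(joints[a] - joints[b]))

upper_arm = segment_length("shoulder_left", "elbow_left")
forearm = segment_length("elbow_left", "wrist_left")
print(f"upper arm: {upper_arm * 100:.1f} cm, forearm: {forearm * 100:.1f} cm")

# In practice one would average over many frames to reduce Kinect jitter
# before feeding the measurements into the anthropometric statistics.
```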
Cited by: 20
LunAR Park: Augmented reality, retro-futurism & a ride to the moon
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802092
Alexander Betts, B. L. Silva, P. Oikonomou
Museum spaces are ideal settings for interactive experiences that combine entertainment, education and innovative technologies. LunAR Park is an augmented reality application designed for a planetarium setting that utilizes existing lunar exhibits to immerse the visitor in an enhanced world of interactive lunar exploration referencing amusement park experiences. The application was originally presented as part of Moon Lust, an exhibition at the Adler Planetarium and Astronomical Museum in Chicago that explored global interests on lunar exploration and habitation through interactive technologies. The content of LunAR Park was inspired by pre-space age depictions of the lunar landscape at the original Luna Park in Coney Island, the advancement of lunar expeditions of the past century, and the romantic notions of future colonization of the moon. LunAR Park transforms four lunar themed exhibits into a virtual amusement park that brings the surface of the moon to life. The users interact with the augmented environment through iPads and navigate the virtual landscape by physically traversing the space around the four exhibits.
Cited by: 0
Full body interaction in virtual reality with affordable hardware
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802099
Tuukka M. Takala, Mikael Matveinen
Summary form only given. Recently a number of affordable game controllers have been adopted by virtual reality (VR) researchers [1][4]. We present a video of a VR demo called TurboTuscany, where we employ such controllers; our demo combines a Kinect-controlled full body avatar with an Oculus Rift head-mounted display [2]. We implemented three positional head tracking schemes that use Kinect, Razer Hydra, and PlayStation (PS) Move controllers. In the demo the Kinect-tracked avatar can be used to climb ladders, play with soccer balls, and otherwise move or interact with physically simulated objects. A PS Move or Razer Hydra controller is used to control locomotion, and for selecting and manipulating objects. Our subjective experience is that the best head tracking immersion is achieved by using Kinect together with PS Move, as the latter is more accurate and responsive while having a large tracking volume. We also noticed that Oculus Rift's orientation tracking has less latency than any of the positional trackers that we used, while Razer Hydra has less latency than PS Move, and Kinect has the largest latency. Besides positional tracking, our demo uses these three trackers to correct the yaw drift of the Oculus Rift. TurboTuscany was developed using our RUIS toolkit, a software platform for VR application development [3]. The demo and RUIS toolkit can be downloaded online.
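The abstract states that the external trackers are used to correct the Rift's yaw drift but not how. One plausible mechanism (an assumption on our part, not necessarily what RUIS does) is a complementary filter that slowly pulls the headset yaw toward the yaw estimated from the positional tracker:

```python
# Sketch of one way to do the yaw correction mentioned above (assumed):
# a running offset nudges the Rift's drifting yaw toward the tracker's yaw.
import math

ALPHA = 0.02  # per-frame correction gain: small, so the fix stays unnoticeable

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def corrected_yaw(rift_yaw, tracker_yaw, yaw_offset):
    """Update the running offset and return the drift-corrected yaw."""
    error = wrap(tracker_yaw - (rift_yaw + yaw_offset))
    yaw_offset = wrap(yaw_offset + ALPHA * error)
    return wrap(rift_yaw + yaw_offset), yaw_offset

yaw_offset = 0.0
for frame in range(3):
    rift_yaw = 0.10 + 0.001 * frame      # drifting orientation from the Rift
    tracker_yaw = 0.05                   # yaw from PS Move / Kinect tracking
    yaw, yaw_offset = corrected_yaw(rift_yaw, tracker_yaw, yaw_offset)
    print(f"frame {frame}: corrected yaw = {yaw:.4f} rad")
```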
Cited by: 24
Reminiscence Therapy using Image-Based Rendering in VR
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802049
E. Chapoulie, R. Guerchouche, Pierre-David Petit, G. Chaurasia, P. Robert, G. Drettakis
We present a novel VR solution for Reminiscence Therapy (RT), developed jointly by a group of memory clinicians and computer scientists. RT involves the discussion of past activities, events or experiences with others, often with the aid of tangible props which are familiar items from the past; it is a popular intervention in dementia care. We introduce an immersive VR system designed for RT, which allows easy presentation of familiar environments. In particular, our system supports highly-realistic Image-Based Rendering in an immersive setting. To evaluate the effectiveness and utility of our system for RT, we perform a study with healthy elderly participants to test if our VR system can help with the generation of autobiographical memories. We adapt a verbal Autobiographical Fluency protocol to our VR context, in which elderly participants are asked to generate memories based on images they are shown. We compare the use of our image-based system for an unknown and a familiar environment. The results of our study show that the number of memories generated for a familiar environment is higher than that for an unknown environment using our system. This indicates that IBR can convey familiarity of a given scene, which is an essential requirement for the use of VR in RT. Our results also show that our system is as effective as traditional RT protocols, while acceptability and motivation scores demonstrate that our system is well tolerated by elderly participants.
Cited by: 34
CAVE visualization of the IceCube neutrino detector
Pub Date: 2014-04-24 | DOI: 10.1109/VR.2014.6802079
R. Tredinnick, James Vanderheiden, Clayton Suplinski, J. Madsen
Neutrinos are nearly massless, weakly interacting particles that come from a variety of sources including the sun, radioactive decay and cosmic rays. Neutrinos are unique cosmic messengers that provide new ways to explore the Universe as well as opportunities to better understand the basic building blocks of matter. IceCube, the largest operating neutrino detector in the world, is located in the ice sheet at the South Pole. This paper describes an interactive VR application for visualization of the IceCube's neutrino data within a C6 CAVE system. The dynamic display of data in a true scale recreation of the light sensor system allows events to be viewed from arbitrary locations both forward and backward in time. Initial feedback from user experiences within the system have been positive, showing promise for both further insight into analyzing data as well as opportunities for physics and neutrino education.
Cited by: 3