
Latest publications: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

Immersive Captioning: Developing a framework for evaluating user needs
Chris J. Hughes, Marta B. Zapata, Matthew Johnston, P. Orero
This article focuses on captioning for immersive environments, and the research aims to identify how captions should be displayed for an optimal viewing experience. This work began four years ago and produced some partial findings. The second stage of research, built on the lessons learnt, focuses on the cornerstone of the design requirements: prototyping. A tool has been developed for quick and realistic prototyping and testing. The framework integrates methods used in existing solutions. Given how easy this makes it to contrast and compare, the need to extend the first framework was obvious. A second, improved solution was developed, almost as a showcase of how ideas can quickly be implemented for user testing. After an overview of captions in immersive environments, the article describes the implementation, which is based on web technologies and is therefore open to any device with a web browser, including desktop computers, mobile devices and head-mounted displays. The article finishes with a description of the new caption modes and methods, in the hope of offering a useful tool for testing and standardisation.
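The implementation itself is web-based; as a language-agnostic illustration of the geometry behind two caption placements commonly compared in immersive-captioning work (head-locked versus anchored in the scene), here is a minimal Python sketch. The function names, offsets and field-of-view angle are illustrative assumptions, not taken from the authors' tool.

```python
import numpy as np

def head_locked_position(head_pos, head_forward, distance=1.5, drop=0.3):
    # Place the caption a fixed distance in front of the viewer and slightly
    # below eye level, so it follows every head movement.
    f = head_forward / np.linalg.norm(head_forward)
    return head_pos + distance * f + np.array([0.0, -drop, 0.0])

def anchor_visible(head_pos, head_forward, anchor_pos, half_angle_deg=45.0):
    # For captions anchored near a speaker in the scene, check whether the
    # anchor lies inside the current field of view, e.g. to decide when an
    # off-screen indicator should be shown instead.
    to_anchor = np.asarray(anchor_pos) - np.asarray(head_pos)
    cos_a = np.dot(to_anchor, head_forward) / (
        np.linalg.norm(to_anchor) * np.linalg.norm(head_forward))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= half_angle_deg

if __name__ == "__main__":
    head = np.array([0.0, 1.6, 0.0])             # viewer eye position (metres)
    forward = np.array([0.0, 0.0, -1.0])         # looking down the -z axis
    print(head_locked_position(head, forward))   # approx. [0, 1.3, -1.5]
    print(anchor_visible(head, forward, [1.0, 1.6, -2.0]))  # True, anchor ~27 degrees off axis
```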
{"title":"Immersive Captioning: Developing a framework for evaluating user needs","authors":"Chris J. Hughes, Marta B. Zapata, Matthew Johnston, P. Orero","doi":"10.1109/AIVR50618.2020.00063","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00063","url":null,"abstract":"This article focuses on captioning for immersive environments and the research aims to identify how to display them for an optimal viewing experience. This work began four years ago with some partial findings. This second stage of research, built from the lessons learnt, focuses on the design requirements cornerstone: prototyping. A tool has been developed towards quick and realistic prototyping and testing. The framework integrates methods used in existing solutions. Given how easy it is to contrast and compare, the need to further the first framework was obvious. A second improved solution was developed, almost as a showcase on how ideas can quickly be implemented for user testing. After an overview on captions in immersive environments, the article describes its implementation, based on web technologies opening for any device with a web browser. This includes desktop computers, mobile devices and head mounted displays. The article finishes with a description of the new caption modes and methods, hoping to be a useful tool towards testing and standardisation.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129583633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Investigating learners’ motivation towards a virtual reality learning environment: a pilot study in vehicle painting
Miriam Mulders
The HandleVR project develops a Virtual Reality (VR) training based on the 4C/ID model [1] to train vocational competencies in the field of vehicle painting. The paper presents the results of a pilot study in which fourteen aspiring vehicle painters tested two prototypical tasks in VR and evaluated their suitability, among other aspects with regard to learning motivation. The results indicate that the VR training is highly motivating and that some aspects (e.g., a virtual trainer) in particular promote motivation. Further research is needed to take advantage of these positive motivational effects to support meaningful learning.
{"title":"Investigating learners’ motivation towards a virtual reality learning environment: a pilot study in vehicle painting","authors":"Miriam Mulders","doi":"10.1109/AIVR50618.2020.00081","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00081","url":null,"abstract":"The HandleVR project develops a Virtual Reality (VR) training based on the 4C/ID model [1] to train vocational competencies in the field of vehicle painting. The paper presents the results of a pilot study with fourteen aspirant vehicle painters who tested two prototypical tasks in VR and evaluated its suitability, i.a. regarding their learning motivation. The results indicate that VR training is highly motivating and some aspects (e.g., a virtual trainer) in particular promote motivation. Further research is needed to take advantage of these positive motivational effects to support meaningful learning.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129922814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
SnapMove: Movement Projection Mapping in Virtual Reality
B. Cohn, A. Maselli, E. Ofek, Mar González-Franco
We present SnapMove, a technique to reproject reaching movements inside Virtual Reality. SnapMove can be used to reduce the need for large, fatiguing or difficult motions. We designed multiple reprojection techniques, linear or planar, uni-manual, bi-manual or head snap, that can be used for reaching, throwing and virtual tool manipulation. In a user study (n=21) we explore whether the self-avatar follower effect can be modulated depending on the cost of the motion introduced by remapping. SnapMove was successful in re-projecting the user’s hand position from, e.g., a lower area to a higher avatar-hand position, a mapping that can be ideal for limiting fatigue. It was also successful in preserving avatar embodiment and gradually bringing users to perform movements with higher energy cost, which are of most interest for rehabilitation scenarios. We implemented applications for menu interaction, climbing, rowing, and throwing darts. Overall, SnapMove can make interactions in virtual environments easier. We discuss the potential impact of SnapMove for applications in gaming, accessibility and therapy.
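A minimal sketch of what such a linear reprojection could look like, assuming a simple gain-and-offset model in which small, low reaches of the tracked hand drive larger, higher avatar-hand motions; the names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def linear_remap(real_hand, real_origin, virtual_origin, gain=1.8, lift=0.4):
    # Map the tracked hand into avatar-hand space: displacements from the
    # calibrated resting pose are amplified by `gain`, and the workspace is
    # shifted upward by `lift` metres, so low-effort motions reach high targets.
    displacement = np.asarray(real_hand, dtype=float) - np.asarray(real_origin, dtype=float)
    return (np.asarray(virtual_origin, dtype=float)
            + gain * displacement
            + np.array([0.0, lift, 0.0]))

if __name__ == "__main__":
    rest = [0.0, 0.9, 0.3]          # tracked hand resting at waist height
    avatar_rest = [0.0, 0.9, 0.3]   # avatar hand starts at the same pose
    reach = [0.1, 1.0, 0.4]         # a small, low-effort reach
    print(linear_remap(reach, rest, avatar_rest))   # approx. [0.18, 1.48, 0.48]
```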
{"title":"SnapMove: Movement Projection Mapping in Virtual Reality","authors":"B. Cohn, A. Maselli, E. Ofek, Mar González-Franco","doi":"10.1109/AIVR50618.2020.00024","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00024","url":null,"abstract":"We present SnapMove a technique to reproject reaching movements inside Virtual Reality. SnapMove can be used to reduce the need of large, fatiguing or difficult motions. We designed multiple reprojection techniques, linear or planar, uni-manual, bi-manual or head snap, that can be used for reaching, throwing and virtual tool manipulation. In a user study (n=21) we explore if the self-avatar follower effect can be modulated depending on the cost of the motion introduced by remapping. SnapMove was successful in re-projecting user’s hand position from e.g. a lower area, to a higher avatar-hand position–a mapping which can be ideal for limiting fatigue. It was also successful in preserving avatar embodiment and gradually bring users to perform movements with higher cost energies, which have most interest for rehabilitation scenarios. We implemented applications for menu interaction, climbing, rowing, and throwing darts. Overall, SnapMove can make interactions in virtual environments easier. We discuss the potential impact of SnapMove for application in gaming, accessibility and therapy.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123730562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Rainbow Learner: Lighting Environment Estimation from a Structural-color based AR Marker
Yuji Tsukagoshi, Yuuki Uranishi, J. Orlosky, Kiyomi Ito, H. Takemura
This paper proposes a method for estimating lighting environments from an AR marker coupled with the structural color patterns inherent to a compact disc (CD) form-factor. To achieve photometric consistency, these patterns are used as input to a Conditional Generative Adversarial Network (CGAN), which allows us to efficiently and quickly generate estimations of an environment map. We construct a dataset from pairs of images of the structural color pattern and environment map captured in multiple scenes, and the CGAN is then trained with this dataset. Experiments show that we can generate visually accurate reconstructions with this method for certain scenes, and that the environment map can be estimated in real time.
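As a rough illustration of training a conditional GAN on such image pairs, the following PyTorch sketch performs one pix2pix-style update on (marker image, environment map) batches. The toy networks, the L1 weight and all names are placeholders and do not reflect the authors' architecture.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator (marker image -> environment map) and the
# conditional discriminator (marker image + candidate map -> real/fake score).
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
discriminator = nn.Sequential(
    nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, padding=1))              # PatchGAN-style score map

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(marker, env_map, l1_weight=100.0):
    # One conditional-GAN update on a batch of (marker image, environment map) pairs.
    fake = generator(marker)

    # Discriminator: real pairs should score 1, generated pairs 0.
    d_real = discriminator(torch.cat([marker, env_map], dim=1))
    d_fake = discriminator(torch.cat([marker, fake.detach()], dim=1))
    loss_d = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the ground-truth map.
    d_fake = discriminator(torch.cat([marker, fake], dim=1))
    loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + l1_weight * l1_loss(fake, env_map)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    marker = torch.rand(2, 3, 64, 64)   # dummy structural-color crops
    env = torch.rand(2, 3, 64, 64)      # dummy environment maps
    print(train_step(marker, env))
```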
{"title":"Rainbow Learner: Lighting Environment Estimation from a Structural-color based AR Marker","authors":"Yuji Tsukagoshi, Yuuki Uranishi, J. Orlosky, Kiyomi Ito, H. Takemura","doi":"10.1109/AIVR50618.2020.00074","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00074","url":null,"abstract":"This paper proposes a method for estimating lighting environments from an AR marker coupled with the structural color patterns inherent to a compact disc (CD) form-factor. To achieve photometric consistency, these patterns are used as input to a Conditional Generative Adversarial Network (CGAN), which allows us to efficiently and quickly generate estimations of an environment map. We construct a dataset from pairs of images of the structural color pattern and environment map captured in multiple scenes, and the CGAN is then trained with this dataset. Experiments show that we can generate visually accurate reconstructions with this method for certain scenes, and that the environment map can be estimated in real time.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121262412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Immersive Visualization of Dengue Vector Breeding Sites Extracted from Street View Images
Mores Prachyabrued, P. Haddawy, Krittayoch Tengputtipong, Myat Su Yin, D. Bicout, Yongjua Laosiritaworn
Dengue is considered one of the most serious global health burdens. The primary vector of dengue is the Aedes aegypti mosquito, which has adapted to human habitats and breeds primarily in artificial containers that can contain water. Control of dengue relies on effective mosquito vector control, for which detection and mapping of potential breeding sites is essential. The two traditional approaches to this have been to use satellite images, which do not provide sufficient resolution to detect a large proportion of the breeding sites, and manual counting, which is too labor intensive to be used on a routine basis over large areas. Our recent work has addressed this problem by applying convolutional neural nets to detect outdoor containers representing potential breeding sites in Google street view images. The challenge is now not a paucity of data, but rather transforming the large volumes of data produced into meaningful information. In this paper, we present the design of an immersive visualization using a tiled-display wall that supports an early but crucial stage of dengue investigation, by enabling researchers to interactively explore and discover patterns in the datasets, which can help in forming hypotheses that can drive quantitative analyses. The tool is also useful in uncovering patterns that may be too sparse to be discovered by correlational analyses and in identifying outliers that may justify further study. We demonstrate the usefulness of our approach with two usage scenarios that lead to insights into the relationship between dengue incidence and container counts.
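To give a flavour of the container-detection step, the sketch below runs an off-the-shelf object detector over a single street view tile and counts container-like detections. The pre-trained torchvision model and the chosen COCO class IDs are generic stand-ins; the authors train their own network specifically for breeding-site containers.

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Off-the-shelf COCO detector as a generic stand-in for the paper's own CNN.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
CONTAINER_LIKE_COCO_IDS = {44, 47, 51, 86}   # bottle, cup, bowl, vase (illustrative only)

def count_container_detections(image, score_threshold=0.6):
    # Count detections of container-like classes in one street view tile (PIL image).
    with torch.no_grad():
        out = model([to_tensor(image.convert("RGB"))])[0]
    labels = out["labels"][out["scores"] >= score_threshold].tolist()
    return sum(1 for label in labels if label in CONTAINER_LIKE_COCO_IDS)

if __name__ == "__main__":
    tile = Image.open("street_view_tile.jpg")   # placeholder path to a saved tile
    print(count_container_detections(tile))
```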
{"title":"Immersive Visualization of Dengue Vector Breeding Sites Extracted from Street View Images","authors":"Mores Prachyabrued, P. Haddawy, Krittayoch Tengputtipong, Myat Su Yin, D. Bicout, Yongjua Laosiritaworn","doi":"10.1109/AIVR50618.2020.00016","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00016","url":null,"abstract":"Dengue is considered one of the most serious global health burdens. The primary vector of dengue is the Aedes aegypti mosquito, which has adapted to human habitats and breeds primarily in artificial containers that can contain water. Control of dengue relies on effective mosquito vector control, for which detection and mapping of potential breeding sites is essential. The two traditional approaches to this have been to use satellite images, which do not provide sufficient resolution to detect a large proportion of the breeding sites, and manual counting, which is too labor intensive to be used on a routine basis over large areas. Our recent work has addressed this problem by applying convolutional neural nets to detect outdoor containers representing potential breeding sites in Google street view images. The challenge is now not a paucity of data, but rather transforming the large volumes of data produced into meaningful information. In this paper, we present the design of an immersive visualization using a tiled-display wall that supports an early but crucial stage of dengue investigation, by enabling researchers to interactively explore and discover patterns in the datasets, which can help in forming hypotheses that can drive quantitative analyses. The tool is also useful in uncovering patterns that may be too sparse to be discovered by correlational analyses and in identifying outliers that may justify further study. We demonstrate the usefulness of our approach with two usage scenarios that lead to insights into the relationship between dengue incidence and container counts.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"38 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133478488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A Smartphone Thermal Temperature Analysis for Virtual and Augmented Reality
Xiaoyang Zhang, Harshit Vadodaria, Na Li, K. Kang, Yao Liu
Emerging virtual and augmented reality applications are envisioned to significantly enhance user experiences. An important issue related to user experience is thermal management in smartphones widely adopted for virtual and augmented reality applications. Although smartphone overheating has been reported many times, a systematic measurement and analysis of their thermal behaviors is relatively scarce, especially for virtual and augmented reality applications. To address the issue, we build a temperature measurement and analysis framework for virtual and augmented reality applications using a robot, infrared cameras, and smartphones. Using the framework, we analyze a comprehensive set of data including the battery power consumption, smartphone surface temperature, and temperature of key hardware components, such as the battery, CPU, GPU, and WiFi module. When a 360° virtual reality video is streamed to a smartphone, the phone surface temperature reaches nearly 39 °C. Also, the temperature of the phone surface and its main hardware components generally increases until the end of our 20-minute experiments despite thermal control undertaken by smartphones, such as CPU/GPU frequency scaling. Our thermal analysis results for a popular AR game are even more serious: the battery power consumption frequently exceeds the thermal design power by 20-80%, while the peak battery, CPU, GPU, and WiFi module temperatures exceed 45, 70, 70, and 65 °C, respectively.
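On the device side, one simple way to collect comparable temperature traces is to poll the Linux/Android thermal-zone interface, which exposes each on-device sensor in millidegrees Celsius. The sketch below is a generic logger written under that assumption; it does not reproduce the paper's full framework with its robot, infrared cameras and power instrumentation.

```python
import glob
import time

def read_thermal_zones():
    # Linux/Android expose on-device sensors under /sys/class/thermal; each
    # zone reports its temperature in millidegrees Celsius.
    readings = {}
    for zone in glob.glob("/sys/class/thermal/thermal_zone*"):
        try:
            with open(zone + "/type") as f_type, open(zone + "/temp") as f_temp:
                readings[f_type.read().strip()] = int(f_temp.read().strip()) / 1000.0
        except OSError:
            continue            # some zones are not readable without root
    return readings

def log_temperatures(duration_s=1200, interval_s=5):
    # Sample every few seconds over a 20-minute session, mirroring the
    # experiment duration reported in the abstract.
    start = time.time()
    while time.time() - start < duration_s:
        print(round(time.time() - start, 1), read_thermal_zones())
        time.sleep(interval_s)

if __name__ == "__main__":
    log_temperatures(duration_s=30, interval_s=5)   # short demo run
```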
{"title":"A Smartphone Thermal Temperature Analysis for Virtual and Augmented Reality","authors":"Xiaoyang Zhang, Harshit Vadodaria, Na Li, K. Kang, Yao Liu","doi":"10.1109/AIVR50618.2020.00061","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00061","url":null,"abstract":"Emerging virtual and augmented reality applications are envisioned to significantly enhance user experiences. An important issue related to user experience is thermal management in smartphones widely adopted for virtual and augmented reality applications. Although smartphone overheating has been reported many times, a systematic measurement and analysis of their thermal behaviors is relatively scarce, especially for virtual and augmented reality applications. To address the issue, we build a temperature measurement and analysis framework for virtual and augmented reality applications using a robot, infrared cameras, and smartphones. Using the framework, we analyze a comprehensive set of data including the battery power consumption, smartphone surface temperature, and temperature of key hardware components, such as the battery, CPU, GPU,and WiFi module. When a 360° virtual reality video is streamed to a smartphone, the phone surface temperature reaches near $39^{circ} mathrm{C}$. Also, the temperature of the phone surface and its main hardware components generally increases till the end of our 20 -minute experiments despite thermal control undertaken by smartphones, such as CPU/GPU frequency scaling. Our thermal analysis results of a popular AR game are even more serious: the battery power consumption frequently exceeds the thermal design power by 20-80 %, while the peak battery, CPU, GPU, and WiFi module temperature exceeds $45,70,70$, and $65^{circ} mathrm{C}$ respectively.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131858813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Under The (Plastic) Sea - Sensitizing People Toward Ecological Behavior Using Virtual Reality Controlled by Users’ Physical Activity
Carolin Straßmann, Alexander Arntz, S. Eimler
As environmental pollution continues to expand, new ways of raising awareness of its consequences need to be explored. Virtual reality has emerged as an effective tool for behavioral change. This paper investigates whether virtual reality applications controlled through physical activity can support an even stronger effect, since physical activity enhances attention and recall performance by stimulating working memory through motor functions. This was tested in an experimental study using a virtual reality head-mounted display in combination with the ICAROS fitness device, enabling participants to explore either a plastic-polluted or a non-polluted sea. Results indicated that using a regular controller elicits more presence and a more intense Flow experience than the ICAROS condition, which participants controlled via their physical activity. Moreover, the plastic-polluted stimulus was more effective in inducing attitude change than the non-polluted sea.
{"title":"Under The (Plastic) Sea - Sensitizing People Toward Ecological Behavior Using Virtual Reality Controlled by Users’ Physical Activity","authors":"Carolin Straßmann, Alexander Arntz, S. Eimler","doi":"10.1109/AIVR50618.2020.00036","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00036","url":null,"abstract":"As environmental pollution continues to expand, new ways for raising awareness for the consequences need to be explored. Virtual reality has emerged as an effective tool for behavioral change. This paper investigates if virtual reality applications controlled through physical activity can support an even stronger effect, because it enhances the attention and recall performance by stimulating the working memory through motor functions. This was tested in an experimental study using a virtual reality head-mounted display in combination with the ICAROS fitness device enabling participants to explore either a plastic-polluted or non-polluted sea. Results indicated that using a regular controller elicits more presence and a more intense Flow experience than the ICAROS condition, which people controlled via their physical activity. Moreover, the plastic-polluted stimulus was more effective in inducing attitude change than a nonpolluted sea.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114782337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Exploring the possibilities of Extended Reality in the world of firefighting
Janne Heirman, S. Selleri, Tom De Vleeschauwer, Charles Hamesse, Michel Bellemans, Evarest Schoofs, R. Haelterman
Firefighting is a crucial part of the Navy’s training program, as it must ensure safety on board. This training is dangerous, expensive and environmentally unfriendly. Therefore, the Navy is looking for a safer form of training that can enhance the current one. Extended Reality technology offers new ways of training, with the promise of alleviating issues related to training danger, costs and environmental pollution. In this work, we develop and evaluate a Virtual Reality simulator and a proof of concept of a Mixed Reality simulator, together with a firehose controller adapted to the needs of the Navy’s firefighting training program.
{"title":"Exploring the possibilities of Extended Reality in the world of firefighting","authors":"Janne Heirman, S. Selleri, Tom De Vleeschauwer, Charles Hamesse, Michel Bellemans, Evarest Schoofs, R. Haelterman","doi":"10.1109/AIVR50618.2020.00055","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00055","url":null,"abstract":"Firefighting is a crucial part of the Navy’s training program, as it must ensure the safety on board. This training is dangerous, expensive and environmentally unfriendly. Therefore, the Navy is looking for a safer form of training that can enhance the current one. Extended Reality technology offers new ways of training, with the promise to alleviate issues related to training danger, costs and environmental pollution. In this work, we develop and evaluate a Virtual Reality simulator and a proof of concept of a Mixed Reality simulator, together with a firehose controller adapted to the needs of the Navy’s firefighting training program.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"5 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120864127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
The Efficacy of a Virtual Reality-Based Mindfulness Intervention
Caglar Yildirim, Tara O'Grady
Mindfulness can be defined as increased awareness of and sustained attentiveness to the present moment. Recently, there has been a growing interest in the applications of mindfulness for empirical research in wellbeing and the use of virtual reality (VR) environments and 3D interfaces as a conduit for mindfulness training. Accordingly, the current experiment investigated whether a brief VR-based mindfulness intervention could induce a greater level of state mindfulness, when compared to an audio-based intervention and a control group. Results indicated that both mindfulness interventions, VR-based and audio-based, induced a greater state of mindfulness compared to the control group. Participants in the VR-based mindfulness intervention group reported a greater state of mindfulness than those in the guided audio group, indicating the immersive mindfulness intervention was more robust. Collectively, these results provide empirical support for the efficaciousness of a brief VR-based mindfulness intervention in inducing a robust state of mindfulness in laboratory settings.
{"title":"The Efficacy of a Virtual Reality-Based Mindfulness Intervention","authors":"Caglar Yildirim, Tara O'Grady","doi":"10.1109/AIVR50618.2020.00035","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00035","url":null,"abstract":"Mindfulness can be defined as increased awareness of and sustained attentiveness to the present moment. Recently, there has been a growing interest in the applications of mindfulness for empirical research in wellbeing and the use of virtual reality (VR) environments and 3D interfaces as a conduit for mindfulness training. Accordingly, the current experiment investigated whether a brief VR-based mindfulness intervention could induce a greater level of state mindfulness, when compared to an audio-based intervention and control group. Results indicated two mindfulness interventions, VRbased and audio-based, induced a greater state of mindfulness, compared to the control group. Participants in the VR-based mindfulness intervention group reported a greater state of mindfulness than those in the guided audio group, indicating the immersive mindfulness intervention was more robust. Collectively, these results provide empirical support for the efficaciousness of a brief VR-based mindfulness intervention in inducing a robust state of mindfulness in laboratory settings.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126684446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
CrowdAR Table: An AR system for Real-time Interactive Crowd Simulation
Noud Savenije, Roland Geraerts, Wolfgang Hürst
Spatial augmented reality, where virtual information is projected into a user’s real environment, provides tremendous opportunities for immersive analytics. In this demonstration, we focus on real-time interactive crowd simulation, that is, the illustration of how crowds move under certain circumstances. Our augmented reality system, called CrowdAR, allows users to study a crowd’s motion behavior by projecting the output of our simulation software onto an augmented reality table and objects on this table. Our prototype system is currently being revised and extended to serve as a museum exhibit. Using real-time interaction, it can teach scientific principles about simulations and illustrate how these, in combination with augmented reality, can be used for crowd behavior analysis.
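As a toy stand-in for the simulation output that such a table projects, the sketch below advances a minimal goal-seeking crowd model with pairwise repulsion between agents; the names and constants are illustrative, and real crowd engines (including the one behind CrowdAR) use considerably richer models.

```python
import numpy as np

def step_crowd(positions, goals, dt=0.05, max_speed=1.4,
               repulsion_radius=0.6, repulsion_gain=2.0):
    # One update of a toy microscopic crowd model: each agent steers toward its
    # goal and is pushed away from neighbours closer than `repulsion_radius` metres.
    positions = np.asarray(positions, dtype=float)
    goals = np.asarray(goals, dtype=float)

    to_goal = goals - positions
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
    desired = np.where(dist > 1e-6, to_goal / np.maximum(dist, 1e-6) * max_speed, 0.0)

    # Pairwise repulsion between agents.
    push = np.zeros_like(positions)
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            d = positions[i] - positions[j]
            r = np.linalg.norm(d)
            if 1e-6 < r < repulsion_radius:
                push[i] += repulsion_gain * (repulsion_radius - r) * d / r

    return positions + dt * (desired + push)

if __name__ == "__main__":
    pos = [[0.0, 0.0], [0.5, 0.1], [4.0, 0.0]]
    goal = [[5.0, 0.0], [5.0, 0.5], [0.0, 0.0]]
    for _ in range(3):
        pos = step_crowd(pos, goal)
    print(np.round(pos, 2))
```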
{"title":"CrowdAR Table An AR system for Real-time Interactive Crowd Simulation","authors":"Noud Savenije, Roland Geraerts, Wolfgang Hürst","doi":"10.1109/AIVR50618.2020.00021","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00021","url":null,"abstract":"Spatial augmented reality, where virtual information is projected into a user’s real environment, provides tremendous opportunities for immersive analytics. In this demonstration, we focus on real-time interactive crowd simulation, that is, the illustration of how crowds move under certain circumstances. Our augmented reality system, called CrowdAR, allows users to study a crowd’s motion behavior by projecting the output of our simulation software onto an augmented reality table and objects on this table. Our prototype system is currently being revised and extended to serve as a museum exhibit. Using real-time interaction, it can teach scientific principles about simulations and illustrate how these, in combination with augmented reality, can be used for crowd behavior analysis.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123778225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4