
Latest Publications: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

An AR Work Instructions Authoring Tool for Human-Operated Industrial Assembly Lines
T. Lavric, Emmanuel Bricard, M. Preda, T. Zaharia
AR technology has started replacing classical training procedures and is increasingly adopted in the industrial environment as a training tool. A key challenge that has been underestimated is the effort required to author AR instructions. This research investigates the context of human-operated assembly lines in manufacturing factories. The main objective is to identify and implement a way of authoring step-by-step AR instruction procedures that satisfies the industrial requirements identified in our case study and in the literature. Our proposal focuses in particular on speed, simplicity and flexibility. As a result, the proposed authoring tool makes it possible to author AR instructions in a very short time, does not require technical skills, and is easy for untrained workers to operate. Compared to existing solutions, our proposal does not rely on a preparation stage: the entire authoring procedure is performed directly, and only, inside an AR headset.
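To make the notion of a step-by-step AR instruction procedure concrete, here is a minimal, hypothetical TypeScript data model for such a procedure. The paper does not publish its schema; every field name below is an illustrative assumption.

```ts
// Hypothetical data model for a step-by-step AR instruction procedure.
// Field names are illustrative; the paper does not specify its schema.
interface ARAsset {
  kind: "text" | "photo" | "video" | "anchor3d";
  uri?: string;  // media captured in-headset during authoring
  text?: string; // dictated or typed annotation
}

interface ARInstructionStep {
  id: number;
  // Pose of the step's annotation in the workstation's coordinate frame,
  // captured directly from the headset while authoring.
  position: [number, number, number];
  rotation: [number, number, number, number]; // quaternion (x, y, z, w)
  assets: ARAsset[];
}

interface ARProcedure {
  assemblyLine: string;
  steps: ARInstructionStep[]; // played back in order during training
}
```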
Citations: 1
FaceAUG: A Cross-Platform Application for Real-Time Face Augmentation in Web Browser
T. Sun
This paper presents FaceAUG, a cross-platform application for real-time face augmentation in a web browser. Human faces are detected and tracked in real time from the video stream of the embedded or separate webcam of the user device. The application then overlays different 2D or 3D augmented reality (AR) filters and effects over the region of the detected face(s) to achieve a mixed virtual and AR effect. A 2D effect can be a photo frame or a 2D face mask using an image from the local repository. A 3D effect is a 3D face model with a colored material, an image texture, or a video texture. The application uses TensorFlow.js to load the pre-trained Face Mesh model for predicting the regions and landmarks of the faces that appear in the video stream. Three.js is used to create the face geometries and render them using the material and texture selected by the user. FaceAUG can be used on any device, as long as an internal or external camera and a state-of-the-art web browser are accessible on the device. The application is implemented using front-end techniques and is therefore functional without any server-side support at the back end. Experimental results on different platforms verified the effectiveness of the proposed approach.
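The abstract names the two libraries in the pipeline, so a minimal sketch of the detect-then-render loop can be written against their public APIs. This assumes the @tensorflow-models/facemesh package (468 3D landmarks per face) and renders the landmarks as a Three.js point cloud; FaceAUG itself builds full face geometries with user-selected materials, which this sketch does not attempt.

```ts
import * as facemesh from '@tensorflow-models/facemesh';
import * as THREE from 'three';

// Minimal sketch of the FaceAUG-style pipeline: TensorFlow.js Face Mesh
// for landmarks, Three.js for the overlay. Details of the real app differ.
async function trackAndOverlay(video: HTMLVideoElement, scene: THREE.Scene) {
  const model = await facemesh.load(); // pre-trained Face Mesh model
  const geometry = new THREE.BufferGeometry();
  const points = new THREE.Points(
    geometry,
    new THREE.PointsMaterial({ color: 0x00ff00, size: 2 })
  );
  scene.add(points);

  const update = async () => {
    const faces = await model.estimateFaces(video);
    if (faces.length > 0) {
      // scaledMesh: 468 landmarks in video pixel coordinates
      const flat = (faces[0].scaledMesh as number[][]).flat();
      geometry.setAttribute(
        'position',
        new THREE.Float32BufferAttribute(flat, 3)
      );
    }
    requestAnimationFrame(update); // keep tracking frame by frame
  };
  update();
}
```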
Citations: 2
Lane Line Map Estimation for Visual Alignment
Minjung Son, Hyun Sung Chang
Lane detection is important for visualization tasks as well as autonomous driving. However, recent approaches have focused principally on the latter, employing sophisticated sensors. This paper presents a novel lane line map estimation method from single images, applicable to visualization tasks such as augmented reality (AR) navigation. Our learning-based approach is designed for sparse lane data under perspective view. It works reliably even in difficult situations, such as those involving irregular data forms, sensor variations, dynamic environments, and obstacles. We also propose the visual alignment concept, which defines visual matching between the estimated lane line map and the corresponding external map, thereby recasting various visualization-related applications as score maximization. Experimental results demonstrate that the proposed method can not only be used directly for lane-based 2D data augmentation but can also be extended to 3D localization for viewpoint pose estimation, which is essential for various AR scenarios.
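A minimal sketch of the "visual alignment as score maximization" idea follows, under the assumption that the estimated lane line map is a per-pixel likelihood image and that a projection function maps external-map lanes into image coordinates for a candidate pose. All names here are illustrative, not the authors' API.

```ts
type Pose = { x: number; y: number; heading: number };

// lineMap[v][u] in [0, 1]: likelihood that pixel (u, v) lies on a lane line.
function alignmentScore(
  lineMap: number[][],
  projectLanes: (pose: Pose) => Array<[number, number]>, // map -> pixels
  pose: Pose
): number {
  let score = 0;
  for (const [u, v] of projectLanes(pose)) {
    const p = lineMap[Math.round(v)]?.[Math.round(u)];
    if (p !== undefined) score += p; // reward lanes landing on line pixels
  }
  return score;
}

// The aligned pose maximizes the score; a real system would use a proper
// optimizer rather than brute-force enumeration of candidates.
function bestPose(
  lineMap: number[][],
  projectLanes: (pose: Pose) => Array<[number, number]>,
  candidates: Pose[]
): Pose {
  let best = candidates[0];
  let bestScore = -Infinity;
  for (const p of candidates) {
    const s = alignmentScore(lineMap, projectLanes, p);
    if (s > bestScore) { bestScore = s; best = p; }
  }
  return best;
}
```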
Citations: 0
High-quality First-person Rendering Mixed Reality Gaming System for In Home Setting
Yu-Yen Chung, Hung-Jui Guo, H. G. Kumar, B. Prabhakaran
With the advent of low-cost RGB-D cameras, mixed reality serious games using ‘live’ 3D human avatars have become popular. Here, RGB-D cameras are used for capturing and transferring users' motion and texture onto the 3D human avatar in virtual environments. A system with a single camera is more suitable for such mixed reality games deployed in homes, considering the ease of setting up the system. In these mixed reality games, users can have either a third-person perspective or a first-person perspective of the virtual environments used in the games. Since a first-person perspective provides a better Sense of Embodiment (SoE), in this paper we explore the problem of providing a first-person perspective for mixed reality serious games played in homes. We propose a real-time textured humanoid-avatar framework to provide a first-person perspective and address the challenges involved in setting up such a gaming system in homes. Our approach comprises: (a) SMPL humanoid model optimization for capturing the user's movements continuously; (b) a real-time texture transferring and merging OpenGL pipeline to build a global texture atlas across multiple video frames. We target the proposed approach towards a serious game for amputees, called Mr.MAPP (Mixed Reality-based framework for Managing Phantom Pain), where the amputee's intact limb is mirrored in real time in the virtual environment. For this purpose, our framework also introduces a mirroring method to generate a textured phantom limb in the virtual environment. We carried out a series of visual and metrics-based studies to evaluate the effectiveness of the proposed approaches for skeletal pose fitting and texture transfer to SMPL humanoid models, as well as the mirroring and texturing of the missing limb (for future amputee-based studies).
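The core of step (b), merging per-frame observations into a global texture atlas, can be sketched as a weighted running average per texel. The paper implements this inside an OpenGL pipeline and does not publish its blending rule, so the per-texel confidence weighting below is an assumption.

```ts
// Illustrative CPU-side sketch of merging per-frame textures into a
// global atlas as a weighted running average per texel.
interface Atlas {
  rgba: Float32Array;   // accumulated color, 4 floats per texel
  weight: Float32Array; // accumulated confidence per texel
}

// frameRgba: colors sampled for atlas texels visible in this frame;
// frameWeight: per-texel confidence (e.g., from viewing angle), 0 if unseen.
function mergeFrame(
  atlas: Atlas,
  frameRgba: Float32Array,
  frameWeight: Float32Array
): void {
  const n = atlas.weight.length;
  for (let i = 0; i < n; i++) {
    const w = frameWeight[i];
    if (w <= 0) continue; // texel not observed in this frame
    const total = atlas.weight[i] + w;
    for (let c = 0; c < 4; c++) {
      const k = 4 * i + c;
      // running weighted average keeps the atlas stable across frames
      atlas.rgba[k] = (atlas.rgba[k] * atlas.weight[i] + frameRgba[k] * w) / total;
    }
    atlas.weight[i] = total;
  }
}
```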
Citations: 2
Exploring the feasibility of mitigating VR-HMD-induced cybersickness using cathodal transcranial direct current stimulation
Gang Li, Francisco Macía Varela, Abdullah Habib, Qi Zhang, Mark Mcgill, S. Brewster, F. Pollick
Many head-mounted virtual reality display (VR-HMD) applications that involve moving visual environments (e.g., virtual rollercoasters, car and airplane driving) trigger cybersickness (CS). Previous research by Arshad et al. (2015) explored the inhibitory effect of cathodal transcranial direct current stimulation (tDCS) on vestibular cortical excitability as applied to traditional motion sickness (MS); however, its applicability to CS, as typically experienced in immersive VR, remains unknown. The presented double-blinded 2x2x3 mixed-design experiment (independent variables: stimulation condition [cathodal/anodal]; timing of VR stimulus exposure [before/after tDCS]; sickness scenario [slight symptom onset/moderate symptom onset/recovery]) investigates whether the tDCS protocol adapted from Arshad et al. (2015) is effective at delaying the onset of CS symptoms and/or accelerating recovery from them in healthy participants. Quantitative analysis revealed that cathodal tDCS indeed delayed the onset of slight symptoms compared to the anodal condition. However, there were no significant differences between the two stimulation types in delaying the onset of moderate symptoms or in shortening time to recovery. Possible reasons for the present findings are discussed and suggestions for future studies are proposed.
Citations: 3
Algorithm-Aware Neural Network Based Image Compression for High-Speed Imaging
Reid Pinkham, Tanner Schmidt, A. Berkovich
In wearable AR/VR systems, data transmission between cameras and central processors can account for a significant portion of total system power, particularly in high-framerate applications. It therefore becomes necessary to compress video streams to reduce the cost of data transmission. In this paper we present a CNN-based compression scheme for such vision systems. We demonstrate that, unlike conventional compression techniques, our method can be tuned for specific machine vision applications, enabling increased compression for a given application performance target. We present results for Detectron2 Keypoint Detection and compare the performance and computational complexity of our method to existing compression schemes such as H.264. We also created a new high-framerate dataset that represents common scenarios for wearable AR/VR devices.
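To illustrate the general shape of a learned, task-tunable compressor, here is a tiny convolutional autoencoder in TensorFlow.js. This is not the authors' architecture; it only shows how a downsampling bottleneck can stand in for the codec, with task-awareness entering through a loss weighted toward regions that matter for the downstream detector (assumed here, not specified by the paper).

```ts
import * as tf from '@tensorflow/tfjs';

// Minimal sketch of a learned compressor: encoder bottleneck + decoder.
function buildAutoencoder(h: number, w: number): tf.Sequential {
  const model = tf.sequential();
  // Encoder: downsample to a narrow latent representation (the "code").
  model.add(tf.layers.conv2d({
    filters: 16, kernelSize: 3, strides: 2, padding: 'same',
    activation: 'relu', inputShape: [h, w, 1],
  }));
  model.add(tf.layers.conv2d({
    filters: 4, kernelSize: 3, strides: 2, padding: 'same',
    activation: 'relu',
  }));
  // Decoder: reconstruct the frame from the latent code.
  model.add(tf.layers.conv2dTranspose({
    filters: 16, kernelSize: 3, strides: 2, padding: 'same',
    activation: 'relu',
  }));
  model.add(tf.layers.conv2dTranspose({
    filters: 1, kernelSize: 3, strides: 2, padding: 'same',
    activation: 'sigmoid',
  }));
  // Task-awareness could be injected via a pixel- or sample-weighted loss
  // emphasizing reconstruction quality where the detector needs it; plain
  // MSE is used here only as a placeholder.
  model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });
  return model;
}
```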
Citations: 8
Virtual Reality Lifelog Explorer: A Prototype for Immersive Lifelog Analytics
Aaron Duane, B. Jónsson, C. Gurrin
The Virtual Reality Lifelog Explorer is a prototype for immersive personal data analytics, intended as an exploratory effort to produce more sophisticated virtual or augmented reality analysis prototypes in the future. An earlier version of this prototype competed in, and won, the first Lifelog Search Challenge (LSC) held at ACM ICMR in 2018.
Citations: 0
Workload, Presence and Task Performance of Virtual Object Manipulation on WebVR
Wenxin Sun, Mengjie Huang, Rui Yang, Jingjing Zhang, Liu Wang, Ji Han, Yong Yue
WebVR technology is widely used as a visualization approach to display virtual objects on 2D webpages. Much of the current literature on virtual object manipulation on the 2D screen pays particular attention to task performance, but few studies focus on users' psychological feedback, and little work examines its relationship with task performance. This paper compares manipulation modes with different degrees of freedom (DoF) in translation and rotation on WebVR, exploring users' workload and presence through self-reported data, and task performance by measuring completion time and error rate. The experimental results show that an increase in DoF is associated with lower perceived workload, while people may feel a higher level of presence during tasks. Additionally, this study finds only a positive correlation between workload and efficiency and a negative correlation between presence and efficiency, meaning that when feeling less workload or more presence, people tend to spend less time completing translation and rotation tasks on WebVR.
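A manipulation mode with a restricted DoF count can be expressed as a per-axis mask applied to raw drag deltas. The sketch below uses Three.js object transforms; the mask encoding is illustrative and is an assumption, not the study's implementation.

```ts
import * as THREE from 'three';

// Which translation/rotation axes a manipulation mode exposes.
interface DofMask {
  translate: [boolean, boolean, boolean]; // x, y, z
  rotate: [boolean, boolean, boolean];    // about x, y, z
}

// Apply a raw drag delta to an object, zeroing out disabled axes.
function applyConstrained(
  obj: THREE.Object3D,
  dTranslate: THREE.Vector3,
  dRotate: THREE.Euler,
  mask: DofMask
): void {
  obj.position.x += mask.translate[0] ? dTranslate.x : 0;
  obj.position.y += mask.translate[1] ? dTranslate.y : 0;
  obj.position.z += mask.translate[2] ? dTranslate.z : 0;
  obj.rotation.x += mask.rotate[0] ? dRotate.x : 0;
  obj.rotation.y += mask.rotate[1] ? dRotate.y : 0;
  obj.rotation.z += mask.rotate[2] ? dRotate.z : 0;
}

// Example: a 1-DoF rotation condition (yaw only).
const yawOnly: DofMask = {
  translate: [false, false, false],
  rotate: [false, true, false],
};
```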
Citations: 6
From Virtual Reality to Neuroscience and Back: a Use Case on Peripersonal Hand Space Plasticity
Agata Marta Soccini, F. Ferroni, M. Ardizzi
The human brain does not represent space homogeneously; rather, it constructs multiple representations of space depending on the source of sensory stimulation and the nature of the interaction between the body and the environment. The peripersonal space is defined as an imaginary area coded as a separate sector of space, as if there were a boundary between what the body might or might not interact with. We present an experimental paradigm that combines virtual reality (VR) and functional magnetic resonance imaging (fMRI) to investigate human behavior and its neural basis when training the plasticity of the peripersonal space around the hand. The expected results may provide knowledge about a phenomenon of interest for behavioral neuroscience as well as for the interaction of embodied self-avatars in virtual environments.
Citations: 4