
Latest publications: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

Supporting interaction in augmented reality assisted industrial processes using a CNN-based semantic layer
Juan Izquierdo-Domenech, Jordi Linares-Pellicer, Jorge Orta-Lopez
Even though Augmented Reality (AR) is far from maturity, we already have solutions and devices that provide an effective technological framework in different industrial environments. Widely used mobile devices, such as tablets, or more specific ones, such as the AR glasses currently available, are enough to offer solutions that improve many industrial processes; repair, maintenance, plant control, and product-line reconfiguration are some examples of these tasks. Many industrial applications already utilise AR to solve those problems. In this work, we aim to go beyond current possibilities, which focus only on providing visual guidance. Our main goal is to add a semantic layer to existing AR-based applications that visually validates workers’ actions based on the visual interpretation of switches, potentiometers, analog needles, or buttons, among others. This semantic layer enables a new level of interaction by automatically interpreting the context that affects the operator’s actions and by displaying information of interest in the AR system. We propose and explain the architecture and training of the Convolutional Neural Networks (CNNs) used for the semantic layer and their integration into the AR technology.
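The abstract describes CNNs that visually interpret the state of panel controls. As a minimal sketch of that idea, assuming crops of individual controls are already extracted (the architecture and state classes below are illustrative assumptions, not the authors' actual network):

```python
import torch
import torch.nn as nn

class ControlStateCNN(nn.Module):
    """Classify a cropped image of one control (switch, button, needle) into a state."""
    def __init__(self, num_states: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> 64-dim descriptor
        )
        self.classifier = nn.Linear(64, num_states)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Hypothetical states: 0=off, 1=on, 2=intermediate, 3=unknown.
model = ControlStateCNN(num_states=4)
logits = model(torch.randn(8, 3, 64, 64))   # batch of 64x64 control crops
predicted_states = logits.argmax(dim=1)
```

The predicted state can then be compared against the state expected by the current work instruction to validate the operator's action.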
{"title":"Supporting interaction in augmented reality assisted industrial processes using a CNN-based semantic layer","authors":"Juan Izquierdo-Domenech, Jordi Linares-Pellicer, Jorge Orta-Lopez","doi":"10.1109/AIVR50618.2020.00014","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00014","url":null,"abstract":"Even though Augmented Reality (AR) is far from its maturity, we already have solutions and devices that give us an efficient technological frame in different industrial environments. Widely used mobile devices, such as tablets, or more specific ones, such as the current AR glasses available, are enough to offer solutions that improve many industrial processes; repairing, maintenance, plant control, product line reconfiguration are some examples of these tasks. Many industrial applications already utilise AR-based applications to solve those problems. In this work, we aim to go a little bit further beyond current possibilities that only focus on providing visual guidance. Our main goal is to add a semantic layer for existing AR-based applications, that visually validate worker’s actions based on visual interpretation of switches, potentiometers, analog needles or buttons, among others. This semantic layer allows a new level of interaction by adding automatic interpretation of the context that affects the actions of the operator and the display of information of interest in the AR system. We propose and explain the architecture and training of the Convolutional Neural Networks (CNN) used for the semantic layer and its integration in the AR technology.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123307028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
Immersive Virtual Reality Training for Inspecting Flagger Work zones
Daeyeol Chang, James Hopfenblatt, P. Edara, Bimal Balakrishnan
Construction and maintenance work on roads poses safety risks to both drivers and workers. The responsible agencies regularly inspect work zones for compliance with traffic control and signage standards. The current training practice is to review documents related to temporary traffic control and reports from previous inspections, typically PowerPoint files with pictures. It would be beneficial to develop a new training mechanism that is as effective as field visits but without the time and effort required to visit multiple field sites. This study developed an immersive training module for transportation agency staff who inspect flagger operations in road construction and maintenance work zones. Human flaggers are commonly used to control traffic at work zones on two-lane highways (one lane in each direction). The main objective of the proposed training is to deliver a realistic experience to trainees in an immersive virtual environment using current traffic control protocols and standards. The module creation consisted of three steps. First, the roadway geometrics, work zone signage, traffic control devices, and the natural environment were created. Second, motion-capture technology was used to replicate the actual movements of a human flagger directing traffic in a work zone. Third, the environment and flagger avatar created in the first two steps were integrated into a simulation. The module was demonstrated to inspection staff at one state department of transportation (DOT) and revised based on their feedback. The state DOT staff were highly receptive to the use of virtual reality for training and commented on the benefits of the immersive experience, which is lacking in their current training practices.
{"title":"Immersive Virtual Reality Training for Inspecting Flagger Work zones","authors":"Daeyeol Chang, James Hopfenblatt, P. Edara, Bimal Balakrishnan","doi":"10.1109/AIVR50618.2020.00066","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00066","url":null,"abstract":"Construction and maintenance work on roads pose safety risks to both drivers and workers. The responsible agencies regularly inspect work zones for compliance with traffic control and signage standards. The current training practice is to review documents related to temporary traffic control and reports from previous inspections, typically Power Point files with pictures. It would be beneficial if a new mechanism for training could be developed that is as effective as field visits but without the amount of time and effort required to visit multiple field sites. This study developed an immersive training module for transportation agency staff that inspect flagger operations in road construction and maintenance work zones. Human flaggers are commonly used to control traffic at work zones on two lane highways (one lane in each direction). The main objective of the proposed training is to deliver a realistic experience to trainees in an immersive virtual environment using the current traffic control protocols and standards. The module creation consisted of three steps. First, the roadway geometrics, work zone signage, traffic control devices, and the natural environment was created. Second, motion capture technology was used to replicate the actual movement of a human flagger directing traffic in a work zone. The environment and flagger avatar created in the first two steps were integrated and implemented in a simulation in the third step. The module was demonstrated to inspection staff at one state department of transportation (DOT) and revised based on their feedback. The state DOT staff were highly receptive to the use of virtual reality for training and commented on the benefits of the immersive experience that is lacking in their current training practices.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131278134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
A Virtual Reality Framework for Human-Virtual Crowd Interaction Studies
Michael G. Nelson, Christos Mousas
In this paper, we develop a generic framework for authoring virtual crowds with minimal effort. Our intention is to provide the virtual reality community with a framework that makes it easy to author virtual crowd scenarios for human-crowd interaction studies. From previous studies we have conducted, we recognized the need for such a framework, as it facilitates quicker setup and testing and standardizes both the measurements and the interaction with virtual crowds. The framework includes assets with realistic human models and configurations for composing crowd behavior.
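To make the notion of "configurations for composing crowd behavior" concrete, here is a hypothetical sketch of how a scenario might be declared in such a framework; all field names are assumptions for illustration, not the paper's actual API:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AgentGroup:
    size: int                                       # number of virtual humans
    spawn_area: Tuple[float, float, float, float]   # x_min, z_min, x_max, z_max
    goal: Tuple[float, float]                       # target position on the ground plane
    walk_speed: float = 1.4                         # m/s, roughly average pedestrian pace

@dataclass
class CrowdScenario:
    name: str
    groups: List[AgentGroup] = field(default_factory=list)
    log_user_proximity: bool = True  # record distance between the user and agents

scenario = CrowdScenario(
    name="corridor_crossing",
    groups=[AgentGroup(size=20, spawn_area=(0, 0, 2, 10), goal=(25, 5))],
)
```

Declaring scenarios as data like this is what allows experiments to be set up quickly and measured in a standardized way.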
{"title":"A Virtual Reality Framework for Human-Virtual Crowd Interaction Studies","authors":"Michael G. Nelson, Christos Mousas","doi":"10.1109/AIVR50618.2020.00043","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00043","url":null,"abstract":"In this paper we developed a generic framework for authoring virtual crowds with minimal effort. Our intention is to providing to the virtual reality community a framework that allows easy to author virtual crowd scenarios that can be used for human-crowd interaction studies. From previous studies we have conducted, we realized the need of such a framework as it facilitates quicker setup and testing as well as standardizes the measurements and the interaction with virtual crowds. The framework includes assets with realistic human models, and configurations for crowd behavior composition.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133887822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Annotation Tool for Precise Emotion Ground Truth Label Acquisition while Watching 360° VR Videos
Tong Xue, Abdallah El Ali, Ding Gangyi, Pablo Santiago César Garcia
We demonstrate an HMD-based annotation tool for collecting precise emotion ground truth labels while users are watching 360° videos in Virtual Reality (VR). Our tool uses an HTC VIVE Pro Eye HMD for displaying 360° videos, a Joy-Con controller for inputting emotion annotations, and an Empatica E4 wristband for capturing physiological signals. Timestamps of these devices are synchronized via an NTP server. Following dimensional emotion models, users can report their emotion in terms of valence and arousal as they watch a video in VR. Annotation feedback is provided through two peripheral visualization techniques: HaloLight and DotSize. Our annotation tool provides a starting point for researchers to design momentary and continuous self-reports in virtual environments to enable fine-grained emotion recognition.
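As an illustration of the timestamp-synchronization step, the sketch below logs valence/arousal samples against an NTP-corrected clock. It is a minimal analogue of the described setup (using the public pool.ntp.org server as a stand-in), not the authors' implementation:

```python
import csv
import time
import ntplib  # pip install ntplib

client = ntplib.NTPClient()
# Offset (in seconds) between the local clock and the NTP server's clock.
offset = client.request("pool.ntp.org", version=3).offset

def log_annotation(writer, valence, arousal):
    """Write one valence/arousal sample with an NTP-corrected timestamp."""
    writer.writerow([f"{time.time() + offset:.3f}", valence, arousal])

with open("annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "valence", "arousal"])
    log_annotation(writer, valence=0.6, arousal=-0.2)  # values in [-1, 1]
```

Because the wristband's physiological signals are aligned to the same reference clock, annotation and physiology streams can later be joined on the corrected timestamps.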
{"title":"Annotation Tool for Precise Emotion Ground Truth Label Acquisition while Watching 360° VR Videos","authors":"Tong Xue, Abdallah El Ali, Ding Gangyi, Pablo Santiago César Garcia","doi":"10.1109/AIVR50618.2020.00076","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00076","url":null,"abstract":"We demonstrate an HMD-based annotation tool for collecting precise emotion ground truth labels while users are watching 360° videos in Virtual Reality (VR). Our tool uses an HTC VIVE Pro Eye HMD for displaying 360° videos, a Joy-Con controller for inputting emotion annotations, and an Empatica E4 wristband for capturing physiological signals. Timestamps of these devices are synchronized via an NTP server. Following dimensional emotion models, users can report their emotion in terms of valence and arousal as they watch a video in VR. Annotation feedback is provided through two peripheral visualization techniques: HaloLight and DotSize. Our annotation tool provides a starting point for researchers to design momentary and continuous self-reports in virtual environments to enable fine-grained emotion recognition.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132935860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
Using Semantic Segmentation to Assist the Creation of Interactive VR Applications
Marcel Tiator, Anna Maria Kerkmann, C. Geiger, P. Grimm
The creation of interactive VR applications from 3D-scanned content usually involves a lot of manual and repetitive work. Our research aim is to develop a real-world, cross-domain, automatic semantic segmentation system that enhances the creation of interactive VR applications. We trained segmentation agents in a superpoint-growing environment that we extended with an expert function. This expert function solves the sparse-reward-signal problem of previous approaches and makes it possible to use a variant of imitation learning and deep reinforcement learning with dense feedback. Additionally, the function allows calculating a performance metric for the degree of imitation across different segmentations. We trained our agents on 1182 scenes of the ScanNet data set. More specifically, we trained different neural network architectures on 1170 scenes and tested their performance on 12 scenes. Our intermediate results are promising: our segmentation system may soon be able to assist the development of VR applications from 3D-scanned content.
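One way to read the "dense feedback" idea: score every segmentation step by how much it changes agreement with the expert labels. The sketch below uses an IoU delta over superpoint masks as that per-step reward; it is an illustrative stand-in, not the paper's exact expert function:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two boolean masks over superpoints."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0

def dense_reward(prev_mask, new_mask, expert_mask) -> float:
    """Reward = immediate change in imitation quality after one grow step."""
    return iou(new_mask, expert_mask) - iou(prev_mask, expert_mask)

# One grow step adds superpoint 2 to the segment; the expert segment is {0, 1, 2}.
expert = np.array([True, True, True, False])
prev = np.array([True, True, False, False])
new = np.array([True, True, True, False])
print(dense_reward(prev, new, expert))  # positive: the step imitated the expert
```

A per-step signal like this gives the agent feedback after every action instead of only at episode end, which is what distinguishes dense from sparse rewards.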
{"title":"Using Semantic Segmentation to Assist the Creation of Interactive VR Applications","authors":"Marcel Tiator, Anna Maria Kerkmann, C. Geiger, P. Grimm","doi":"10.1109/AIVR50618.2020.00011","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00011","url":null,"abstract":"The creation of interactive VR applications from 3D scanned content usually includes a lot of manual and repetitive work. Our research aim is to develop a real-world, cross-domain, automatic, semantic segmentation system that enhances the creation of interactive VR applications. We trained segmentation agents in a superpoint growing environment that we extended with an expert function. This expert function solves the sparse reward signal problem of the previous approaches and enables to use a variant of imitation learning and deep reinforcement learning with dense feedback. Additionally, the function allows to calculate a performance metric for the degree of imitation for different segmentations. We trained our agents with 1182 scenes of the ScanNet data set. More specifically, we trained different neural network architectures with 1170 scenes and tested their performance with 12 scenes. Our intermediate results are promising such that our segmentation system might be able to assist the VR application development from 3D scanned content in near future.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123363288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
FaceAUG: A Cross-Platform Application for Real-Time Face Augmentation in Web Browser
T. Sun
This paper presents FaceAUG, a cross-platform application for real-time face augmentation in a web browser. Human faces are detected and tracked in real time from the video stream of the embedded or external webcam of the user device. The application then overlays different 2D or 3D augmented reality (AR) filters and effects on the region of the detected face(s) to achieve a mixed virtual and AR effect. A 2D effect can be a photo frame or a 2D face mask using an image from the local repository. A 3D effect is a 3D face model with a colored material, an image texture, or a video texture. The application uses TensorFlow.js to load the pre-trained Face Mesh model for predicting the regions and landmarks of the faces that appear in the video stream. Three.js is used to create the face geometries and render them with the material and texture selected by the user. FaceAUG can be used on any device, as long as an internal or external camera and a state-of-the-art web browser are available on the device. The application is implemented using front-end techniques only and is therefore functional without any server-side support at the back end. Experimental results on different platforms verified the effectiveness of the proposed approach.
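FaceAUG itself runs in the browser on TensorFlow.js's Face Mesh model with Three.js rendering. As a rough desktop analogue of the detection-and-tracking stage, this sketch uses MediaPipe's Python Face Mesh (a swapped-in equivalent, not the paper's stack) to draw the same kind of landmarks on a webcam stream:

```python
import cv2              # pip install opencv-python
import mediapipe as mp  # pip install mediapipe

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False, max_num_faces=2, min_detection_confidence=0.5
)
capture = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = capture.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for face in results.multi_face_landmarks or []:
        h, w = frame.shape[:2]
        for lm in face.landmark:  # 468 normalized 3D face landmarks
            cv2.circle(frame, (int(lm.x * w), int(lm.y * h)), 1, (0, 255, 0), -1)
    cv2.imshow("face mesh", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

capture.release()
cv2.destroyAllWindows()
```

In FaceAUG, the equivalent landmark set drives the placement of the 2D masks and the deformation of the 3D face geometry rendered by Three.js.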
{"title":"FaceAUG: A Cross-Platform Application for Real-Time Face Augmentation in Web Browser","authors":"T. Sun","doi":"10.1109/AIVR50618.2020.00058","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00058","url":null,"abstract":"This paper presents FaceAUG, a cross-platform application for real-time face augmentation in a web browser. Human faces are detected and tracked in real-time from the video stream of the embedded or separated webcam of the user device. Then, the application overlays different 2D or 3D augmented reality (AR) filters and effects over the region of the detected face(s) to achieve a mixed virtual and AR effect. A 2D effect can be a photo frame or a 2D face mask using an image from the local repository. A 3D effect is a 3D face model with a colored material, an image texture, or a video texture. The application uses TensorFlow.js to load the pre-trained Face Mesh model for predicting the regions and landmarks of the faces that appear in the video stream. Three.js is used to create the face geometries and render them using the material and texture selected by the user. FaceAUG can be used on any device, as long as an internal or external camera and a state-of-the-art web browser are accessible on the device. The application is implemented using front-end techniques and is therefore functional without any server-side supports at back-end. Experimental results on different platforms verified the effectiveness of the proposed approach.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129042776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
Virtual Reality Integrated Multi-Depth-Camera-System for Real-Time Telepresence and Telemanipulation in Caregiving
Conrad Fifelski-von Böhlen, Anna Brinkmann, Stephan Mävers, S. Hellmers, A. Hein
Telepresence and telemanipulation robotics are suitable solutions for relieving humans of direct health risks and repetitive or unhealthy work. With demographic change in Western countries and the COVID-19 pandemic, such relief is also being considered for healthcare workers, especially caregivers, as the demands on them rise. What is required are intuitively usable telerobotic and telepresence systems for remote assistance that cut the high physical strain of manual patient transfers and reduce contact with infected patients. To achieve this, key technologies like 3D imaging and perception systems are essential. In this work, we present a novel, lightweight telepresence and telemanipulation system specialized for caregiving. It allows an operator wearing a virtual reality headset to immerse into a scene captured by a sensor system at a distant location in real time, with a latency as low as 250 ms and a refresh rate of up to 30 fps. Extensive measurements show that 97.1% of the relevant point-cloud data has an error below 1 cm and 99.5% below 1.6 cm, making the system suitable for the application.
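The core of a multi-depth-camera system like this is registering each camera's point cloud into a common world frame before rendering it to the headset. A minimal sketch of that fusion step with Open3D, using identity extrinsics as placeholders for calibrated ones:

```python
import numpy as np
import open3d as o3d  # pip install open3d

def fuse(clouds, extrinsics, voxel=0.01):
    """Merge per-camera clouds into one world-frame cloud, downsampled to ~1 cm."""
    merged = o3d.geometry.PointCloud()
    for cloud, T in zip(clouds, extrinsics):
        cloud.transform(T)  # in place: camera frame -> world frame (4x4 extrinsic)
        merged += cloud
    return merged.voxel_down_sample(voxel_size=voxel)

# Dummy clouds; in the real system these arrive from the depth cameras each frame.
c1 = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.random.rand(1000, 3)))
c2 = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.random.rand(1000, 3)))
world = fuse([c1, c2], [np.eye(4), np.eye(4)])
```

The ~1 cm voxel size here is chosen to match the error scale reported in the abstract; a real pipeline would run a step like this every frame to sustain the stated refresh rate.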
{"title":"Virtual Reality Integrated Multi-Depth-Camera-System for Real-Time Telepresence and Telemanipulation in Caregiving","authors":"Conrad Fifelski-von Böhlen, Anna Brinkmann, Stephan Mävers, S. Hellmers, A. Hein","doi":"10.1109/AIVR50618.2020.00059","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00059","url":null,"abstract":"Telepresence and telemanipulation robotics are suitable solutions to relieve humans from direct health risks and repetitive or unhealthy work. Through demographic changes in western countries and the COVID-19 pandemic, this relief is also considered for healthcare workers, especially caregivers, as the demands for them rises. The requirements are intuitively usable telerobotic and telepresence systems for remote assistance, to cut the high physical strain in manual patient transfers and the reduction of contact with infected patients. To ensure this, key technologies like 3D imaging and perception systems are essential. In this work, we present a novel, lightweight telepresence and telemanipulation system, specialized for caregiving. It allows an operator, wearing a virtual reality headset, to immerse into a sensor system captured scene on a distant location in real-time, with low latency of 250 ms and up to 30 fps refresh rate. Extensive measurement shows that 97.1% of the relevant point cloud data is below 1 cm error and 99.5 % is below 1.6 cm, making the system suitable for the application.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129542915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
Virtual Reality to Study Pedestrian Wayfinding: Motivations and an Experiment on Usability
G. Vizzari
Despite the significance of pedestrian simulation in the design of the built environment and in the definition of crowd management procedures, we still lack data about several relevant human decision-making processes. To tackle this issue, we propose a novel Virtual Reality system that allows the design of controlled experiments in virtual settings by tracking multiple users wearing Head-Mounted Displays (HMDs). We employed the system to experimentally evaluate the signage of a building of the University of Milano–Bicocca and its implications for wayfinding decisions. We present here the results of a preliminary test of the system, aiming to evaluate its usability and the sickness induced by the VR itself, as well as preliminary results on human wayfinding decisions.
{"title":"Virtual Reality to Study Pedestrian Wayfinding: Motivations and an Experiment on Usability","authors":"G. Vizzari","doi":"10.1109/AIVR50618.2020.00042","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00042","url":null,"abstract":"Despite the significance of pedestrian simulation in the design of the built environment and in the definition of crowd management procedures, we still lack data about several relevant human decision making processes. To tackle this issue, we propose a novel Virtual Reality system which allows to design controlled experiments in virtual settings, by tracking multiple users that wear Head Mounted Displays (HMD). We employed the system to experimentally evaluate the signage of a building of the University of of Milano–Bicocca and its implications on wayfinding decisions. We present here the results of a preliminary test made with the system, aiming at evaluating its usability and feeling of sickness due to the VR itself, as well as preliminary results on human wayfinding decisions.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122960085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Artificial Intelligence Assisted Virtual Reality Warfighter Training System
Aaron Gluck, Jessica Chen, Ratnadeep Paul
Present military forces need to prepare for increasingly varied and complex situations and maintain the readiness of their warfighters for several different scenarios. There is a constant challenge of working with limited resources: people, time, and funding. Military leaders are seeking new training technologies that can meet their goals better, faster, and cheaper. Virtual Reality (VR) and Augmented Reality (AR) have emerged as extremely useful technologies for developing new training tools, since they allow for easy creation and maintenance of multiple scenarios and environments. Using AR/VR for training lowers the travel and living costs associated with bringing trainees to a central training facility. Another advantage over traditional tools is that trainees are completely immersed in the environment, which makes consuming the training material much more natural. At GE Research, we are exploring technologies that incorporate Artificial Intelligence (AI) methodologies into an immersive VR-based training environment for warfighters. We have developed an AI-assisted VR system for ground soldier training. In this VR training environment, the soldier (Blue Team/Blue Force) is teamed up with an AI assistant that helps them navigate an urban setting and reach their goal in stealth mode while escaping the gaze of enemy soldiers (Red Team/Red Force). We have used AI-enabled virtual humans for the enemy soldiers. In addition, we have created an AI-enabled VR drone simulation to help the dismounted soldier move through the environment undetected.
{"title":"Artificial Intelligence Assisted Virtual Reality Warfighter Training System","authors":"Aaron Gluck, Jessica Chen, Ratnadeep Paul","doi":"10.1109/AIVR50618.2020.00080","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00080","url":null,"abstract":"Present military forces need to prepare for increasingly varied and complex situations and maintain the readiness of their warfighters for several different scenarios. There is a constant challenge of working with limited resources people, time, and funding. Military leaders are seeking new training technologies that can meet their goals better, faster, and cheaper. Virtual Reality (VR) and Augmented Reality (AR) have emerged as extremely useful technologies for developing new training tools since they allow for easy creation and maintenance of multiple scenarios and environments. Using AR/VR for training lowers the travel and living costs associated with bringing the trainees to a central training facility, while still completely immersing the trainees in the environment. Another advantage of using AR/VR compared to traditional tools is that by using AR/VR, trainees are completely immersed in the environment, which makes it much more natural for them to consume the training material. At GE Research, we are exploring technologies to incorporate Artificial Intelligence (AI) methodologies in an immersive VR based training environment for warfighters. We have developed an AI assisted VR system for ground soldier training. In this VR training environment, the soldier (Blue Team/Blue Force) is teamed up with an AI assistant that will help them navigate an urban setting and successfully reach their goal in a stealth mode while escaping the gaze of enemy soldiers (Red Team/Red Force). We have used AI enabled virtual humans for the enemy soldiers. In addition, we have created an AI enabled, VR drone simulation to assist the dismounted soldier to move undetected through the environment.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116524879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 5
An AR Work Instructions Authoring Tool for Human-Operated Industrial Assembly Lines
T. Lavric, Emmanuel Bricard, M. Preda, T. Zaharia
AR technology has started replacing classical training procedures and is increasingly adopted in the industrial environment as a training tool. A key challenge that has been underestimated is the effort required to author AR instructions. This research investigates the context of human-operated assembly lines in manufacturing factories. The main objective is to identify and implement a way of authoring step-by-step AR instruction procedures that satisfies the industrial requirements identified in our case study and in the literature. Our proposal focuses in particular on speed, simplicity, and flexibility. As a result, the proposed authoring tool makes it possible to author AR instructions in a very short time, requires no technical skills, and is easy for untrained workers to operate. Compared to existing solutions, our proposal does not rely on a preparation stage: the entire authoring procedure is performed directly and entirely inside an AR headset.
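To illustrate what a step-by-step AR instruction procedure must capture (instruction text, a spatial anchor, optional media recorded in the headset), here is a hypothetical data model; the field names are assumptions for illustration, not the tool's actual format:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class InstructionStep:
    text: str                                     # short instruction shown to the worker
    anchor_position: Tuple[float, float, float]   # where the hint is pinned, workstation frame
    media_path: Optional[str] = None              # optional photo/video captured in-headset

@dataclass
class AssemblyProcedure:
    station: str
    steps: List[InstructionStep] = field(default_factory=list)

procedure = AssemblyProcedure(
    station="line-3/station-7",
    steps=[InstructionStep("Insert connector A into slot B", (0.42, 1.10, 0.35))],
)
```

Authoring directly in the headset means each step's anchor can be placed by pointing at the real workstation rather than by entering coordinates at a desktop.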
{"title":"An AR Work Instructions Authoring Tool for Human-Operated Industrial Assembly Lines","authors":"T. Lavric, Emmanuel Bricard, M. Preda, T. Zaharia","doi":"10.1109/AIVR50618.2020.00037","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00037","url":null,"abstract":"AR technology has started replacing classical training procedures and is increasingly adopted in the industrial environment as training tool. The key challenge that has been underestimated is the required effort of authoring AR instructions. This research investigates the context of humanoperated assembly lines in manufacturing factories. The main objective is to identify and implement a way of authoring step-bystep AR instruction procedures, in a manner that satisfies industrial requirements identified in our case study and in the literature. Our proposal focuses in particular on speed, simplicity and flexibility. As a result, the proposed authoring tool makes it possible to author AR instructions in a very short time, does not require technical skills and is easy to operate by untrained workers. Compared to existing solutions, our proposal does not rely on a preparation stage. The entire authoring procedure is performed directly and only inside an AR headset.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"358 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127580117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1