
Latest publications: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

Supporting interaction in augmented reality assisted industrial processes using a CNN-based semantic layer
Juan Izquierdo-Domenech, Jordi Linares-Pellicer, Jorge Orta-Lopez
Even though Augmented Reality (AR) is far from maturity, we already have solutions and devices that provide an efficient technological frame in different industrial environments. Widely used mobile devices, such as tablets, or more specific ones, such as the AR glasses currently available, are enough to offer solutions that improve many industrial processes; repair, maintenance, plant control, and production line reconfiguration are some examples of these tasks. Many industrial settings already rely on AR-based applications to solve those problems. In this work, we aim to go beyond current possibilities that focus only on providing visual guidance. Our main goal is to add a semantic layer to existing AR-based applications that visually validates workers' actions based on the visual interpretation of switches, potentiometers, analog needles, or buttons, among others. This semantic layer allows a new level of interaction by adding automatic interpretation of the context that affects the operator's actions and the display of information of interest in the AR system. We propose and explain the architecture and training of the Convolutional Neural Networks (CNN) used for the semantic layer and their integration into the AR technology.
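The semantic layer hinges on a CNN that classifies the visual state of a control element (e.g. switch up/down, dial position). The paper does not publish its architecture, so as an illustrative sketch only — every kernel, weight, and toy image below is invented — here is a minimal CNN-style forward pass (convolution, ReLU, global average pooling, linear head, softmax) in plain NumPy:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_control(img, kernels, w, b):
    """Tiny forward pass: conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    return softmax(feats @ w + b)

# Hypothetical toy data: an 8x8 "switch" image with a vertical lever stripe.
rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[:, 3] = 1.0                                   # bright vertical stripe
kernels = [np.ones((3, 3)) / 9, np.eye(3) / 3]    # two fixed 3x3 filters
w = rng.normal(size=(2, 2))                       # untrained linear head
b = np.zeros(2)
probs = classify_control(img, kernels, w, b)      # class probabilities (on/off)
assert probs.shape == (2,) and abs(probs.sum() - 1.0) < 1e-9
```

A real system would learn the kernels and head by backpropagation on labeled control-element crops; the shape of the computation, however, is the same.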
Citations: 2
Immersive Virtual Reality Training for Inspecting Flagger Work zones
Daeyeol Chang, James Hopfenblatt, P. Edara, Bimal Balakrishnan
Construction and maintenance work on roads poses safety risks to both drivers and workers. The responsible agencies regularly inspect work zones for compliance with traffic control and signage standards. The current training practice is to review documents related to temporary traffic control and reports from previous inspections, typically PowerPoint files with pictures. It would be beneficial if a new mechanism for training could be developed that is as effective as field visits but without the time and effort required to visit multiple field sites. This study developed an immersive training module for transportation agency staff who inspect flagger operations in road construction and maintenance work zones. Human flaggers are commonly used to control traffic at work zones on two-lane highways (one lane in each direction). The main objective of the proposed training is to deliver a realistic experience to trainees in an immersive virtual environment using the current traffic control protocols and standards. The module creation consisted of three steps. First, the roadway geometry, work zone signage, traffic control devices, and the natural environment were created. Second, motion capture technology was used to replicate the actual movements of a human flagger directing traffic in a work zone. Third, the environment and flagger avatar created in the first two steps were integrated and implemented in a simulation. The module was demonstrated to inspection staff at one state department of transportation (DOT) and revised based on their feedback. The state DOT staff were highly receptive to the use of virtual reality for training and commented on the benefits of the immersive experience that is lacking in their current training practices.
Citations: 3
Annotation Tool for Precise Emotion Ground Truth Label Acquisition while Watching 360° VR Videos
Tong Xue, Abdallah El Ali, Ding Gangyi, Pablo Santiago César Garcia
We demonstrate an HMD-based annotation tool for collecting precise emotion ground truth labels while users are watching 360° videos in Virtual Reality (VR). Our tool uses an HTC VIVE Pro Eye HMD for displaying 360° videos, a Joy-Con controller for inputting emotion annotations, and an Empatica E4 wristband for capturing physiological signals. Timestamps of these devices are synchronized via an NTP server. Following dimensional emotion models, users can report their emotion in terms of valence and arousal as they watch a video in VR. Annotation feedback is provided through two peripheral visualization techniques: HaloLight and DotSize. Our annotation tool provides a starting point for researchers to design momentary and continuous self-reports in virtual environments to enable fine-grained emotion recognition.
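Aligning reports from the HMD, the Joy-Con, and the E4 wristband on one timeline amounts to stamping each valence/arousal sample with an offset-corrected clock. As a minimal sketch of that bookkeeping — class and field names are invented here, and the NTP offset is assumed to have been measured elsewhere:

```python
import time
from dataclasses import dataclass, field

@dataclass
class EmotionAnnotator:
    """Collects momentary valence/arousal self-reports on a shared timeline.

    clock_offset models the NTP-derived offset between this device's clock
    and the reference server, so samples from several devices can be merged.
    """
    clock_offset: float = 0.0
    samples: list = field(default_factory=list)

    def annotate(self, valence, arousal, t=None):
        # Dimensional emotion models bound both axes to [-1, 1].
        if not (-1.0 <= valence <= 1.0 and -1.0 <= arousal <= 1.0):
            raise ValueError("valence/arousal must lie in [-1, 1]")
        t = time.time() if t is None else t
        self.samples.append((t + self.clock_offset, valence, arousal))

# Device whose clock runs 0.25 s behind the NTP reference.
ann = EmotionAnnotator(clock_offset=0.25)
ann.annotate(0.6, -0.2, t=100.0)
assert ann.samples == [(100.25, 0.6, -0.2)]
```

Physiological samples from the wristband would be stamped the same way, making post-hoc alignment with the annotation stream a simple merge by timestamp.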
Citations: 2
A Virtual Reality Framework for Human-Virtual Crowd Interaction Studies
Michael G. Nelson, Christos Mousas
In this paper we developed a generic framework for authoring virtual crowds with minimal effort. Our intention is to provide the virtual reality community with a framework that makes it easy to author virtual crowd scenarios that can be used for human-crowd interaction studies. From previous studies we have conducted, we realized the need for such a framework, as it facilitates quicker setup and testing and standardizes both the measurements and the interaction with virtual crowds. The framework includes assets with realistic human models, and configurations for crowd behavior composition.
Citations: 1
Artificial Intelligence Assisted Virtual Reality Warfighter Training System
Aaron Gluck, Jessica Chen, Ratnadeep Paul
Present military forces need to prepare for increasingly varied and complex situations and maintain the readiness of their warfighters for several different scenarios. There is a constant challenge of working with limited resources: people, time, and funding. Military leaders are seeking new training technologies that can meet their goals better, faster, and cheaper. Virtual Reality (VR) and Augmented Reality (AR) have emerged as extremely useful technologies for developing new training tools, since they allow easy creation and maintenance of multiple scenarios and environments. Using AR/VR for training lowers the travel and living costs associated with bringing trainees to a central training facility. Another advantage over traditional tools is that trainees are completely immersed in the environment, which makes it much more natural for them to consume the training material. At GE Research, we are exploring technologies to incorporate Artificial Intelligence (AI) methodologies in an immersive VR-based training environment for warfighters. We have developed an AI-assisted VR system for ground soldier training. In this VR training environment, the soldier (Blue Team/Blue Force) is teamed up with an AI assistant that helps them navigate an urban setting and successfully reach their goal in stealth mode while escaping the gaze of enemy soldiers (Red Team/Red Force). We have used AI-enabled virtual humans for the enemy soldiers. In addition, we have created an AI-enabled VR drone simulation to assist the dismounted soldier in moving undetected through the environment.
Citations: 5
Virtual Reality Integrated Multi-Depth-Camera-System for Real-Time Telepresence and Telemanipulation in Caregiving
Conrad Fifelski-von Böhlen, Anna Brinkmann, Stephan Mävers, S. Hellmers, A. Hein
Telepresence and telemanipulation robotics are suitable solutions to relieve humans from direct health risks and repetitive or unhealthy work. Owing to demographic changes in western countries and the COVID-19 pandemic, this relief is also being considered for healthcare workers, especially caregivers, as the demands on them rise. What is required are intuitively usable telerobotic and telepresence systems for remote assistance that cut the high physical strain of manual patient transfers and reduce contact with infected patients. To ensure this, key technologies like 3D imaging and perception systems are essential. In this work, we present a novel, lightweight telepresence and telemanipulation system specialized for caregiving. It allows an operator wearing a virtual reality headset to immerse, in real time, into a scene captured by a sensor system at a distant location, with a low latency of 250 ms and up to a 30 fps refresh rate. Extensive measurement shows that 97.1% of the relevant point cloud data is below 1 cm error and 99.5% is below 1.6 cm, making the system suitable for the application.
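The accuracy figures above (97.1% of points under 1 cm error, 99.5% under 1.6 cm) are threshold-coverage statistics over per-point errors. A small, hedged sketch of how such coverage can be computed — the error data here is synthetic, standing in for the authors' real depth-camera evaluation:

```python
import numpy as np

def error_coverage(errors_m, thresholds_m):
    """Fraction of per-point errors (in metres) falling below each threshold."""
    errors = np.asarray(errors_m, dtype=float)
    return {t: float((errors < t).mean()) for t in thresholds_m}

# Synthetic per-point errors: half-normal with ~5 mm standard deviation.
rng = np.random.default_rng(1)
errors = np.abs(rng.normal(0.0, 0.005, size=10_000))
cov = error_coverage(errors, [0.01, 0.016])
# Coverage is monotone in the threshold and bounded by [0, 1].
assert 0.0 <= cov[0.01] <= cov[0.016] <= 1.0
```

On the synthetic data, coverage at 1 cm lands near 95% (two standard deviations of a half-normal); the paper's measured 97.1% corresponds to a somewhat tighter error distribution.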
Citations: 3
Training with a world champion: augmented reality applications in sport Design-led research
S. Palmieri, Alessio Righi, M. Bisson, A. Ianniello
Recent and continuous innovations in the field of extended reality, and in particular augmented reality, are able to revolutionize different aspects of the reference market sectors. At the same time, the constant evolution of artificial intelligence, machine learning, and deep learning, combined with the aforementioned innovations, makes it possible to conceive solutions that shape new ways to inform, to improve skills, and to spend time. The ability to simulate contexts, environments, actions, and emotions, together with the possibility of using the data generated by the simulations in a disruptive way, permits imagining and creating learning and strengthening paths. This developing research has been carried out within the Interdepartmental Laboratory EDME (Environmental Design Multisensory Experience), which belongs to the Design Department of Politecnico di Milano. It was conducted by investigating the state of the art of augmented reality and artificial intelligence technologies, highlighting interesting and highly innovative case studies; from this first phase we moved on to analyze the sport sector, in which an important potential for future development was recognized. The last part of the first phase of this research project consisted in the elaboration of a concept for an enabling technological system and a business model with a high innovation coefficient, whose realization is hypothesized for the year 2030. It is intended to demonstrate how a design operation that started from emerging technologies and a sector of high interest, and that assumed a scenario of use over ten years, is not only extremely interesting but also, above all, useful to consciously predict and accompany the aforementioned technological development.
Citations: 1
Virtual Reality to Study Pedestrian Wayfinding: Motivations and an Experiment on Usability
G. Vizzari
Despite the significance of pedestrian simulation in the design of the built environment and in the definition of crowd management procedures, we still lack data about several relevant human decision-making processes. To tackle this issue, we propose a novel Virtual Reality system which allows the design of controlled experiments in virtual settings by tracking multiple users wearing Head Mounted Displays (HMDs). We employed the system to experimentally evaluate the signage of a building of the University of Milano–Bicocca and its implications for wayfinding decisions. We present here the results of a preliminary test made with the system, aiming at evaluating its usability and the feeling of sickness due to the VR itself, as well as preliminary results on human wayfinding decisions.
Citations: 1
Using Semantic Segmentation to Assist the Creation of Interactive VR Applications
Marcel Tiator, Anna Maria Kerkmann, C. Geiger, P. Grimm
The creation of interactive VR applications from 3D-scanned content usually involves a lot of manual and repetitive work. Our research aim is to develop a real-world, cross-domain, automatic semantic segmentation system that supports the creation of interactive VR applications. We trained segmentation agents in a superpoint growing environment that we extended with an expert function. This expert function solves the sparse reward signal problem of previous approaches and enables the use of a variant of imitation learning and deep reinforcement learning with dense feedback. Additionally, the function allows us to calculate a performance metric for the degree of imitation of different segmentations. We trained our agents with 1182 scenes of the ScanNet data set. More specifically, we trained different neural network architectures with 1170 scenes and tested their performance on 12 scenes. Our intermediate results are promising, such that our segmentation system might be able to assist VR application development from 3D-scanned content in the near future.
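The expert function above scores how closely an agent's segmentation imitates a reference one. The abstract does not specify the metric, so as an illustrative stand-in only, here is a Rand-index-style pairwise agreement between two point labelings, which is invariant to label renaming:

```python
import numpy as np
from itertools import combinations

def rand_agreement(pred, expert):
    """Share of point pairs on which both segmentations agree about
    'same segment or not' (a Rand-index-style imitation score)."""
    pred, expert = np.asarray(pred), np.asarray(expert)
    agree = total = 0
    for i, j in combinations(range(len(pred)), 2):
        total += 1
        agree += (pred[i] == pred[j]) == (expert[i] == expert[j])
    return agree / total

# Identical segmentations (up to relabeling) agree perfectly...
assert rand_agreement([0, 0, 1, 1], [5, 5, 7, 7]) == 1.0
# ...while a mismatched grouping scores below 1.
assert rand_agreement([0, 1, 0, 1], [0, 0, 1, 1]) < 1.0
```

A dense reward could then be derived from the change in such a score after each superpoint-growing step, which is one common way to turn an imitation metric into per-step feedback.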
doi: 10.1109/AIVR50618.2020.00011
Cited by: 0
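The expert function described above turns a sparse reward into dense feedback by scoring every growing step against a reference segmentation. The paper does not publish its formula; one plausible shaping, shown here purely as an illustration (all names hypothetical), rewards a step by how much it changes the segment's overlap with the expert's segment:

```python
def iou(pred, target):
    """Intersection-over-union of two sets of point indices."""
    pred, target = set(pred), set(target)
    union = pred | target
    return len(pred & target) / len(union) if union else 1.0

def expert_reward(segment_before, segment_after, expert_segment):
    """Dense reward for one superpoint-growing step: positive if the step
    moved the segment toward the expert segmentation, negative otherwise."""
    return iou(segment_after, expert_segment) - iou(segment_before, expert_segment)

expert = [0, 1, 2, 3]
print(expert_reward([0, 1], [0, 1, 2], expert))       # correct point added: positive
print(expert_reward([0, 1, 2], [0, 1, 2, 9], expert)) # wrong point added: negative
```

Because every merge decision receives a signed score, the agent gets feedback at each step rather than only at episode end, which is the sparse-reward problem the abstract refers to.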
Breast3D: An Augmented Reality System for Breast CT and MRI
Benjamin Allison, Xujiong Ye, Faraz Janan
Adoption of Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) devices, known collectively as Extended Reality (XR), has been increasing rapidly in recent years. However, XR research has shown a lack of diversity in its solutions to problems in medicine, being predominantly focused on augmenting surgical procedures. Whilst important, XR applied to aiding medical diagnosis and surgical planning is relatively unexplored. In this paper we present a fully functional mammographic image analysis system, Breast3D, that can reconstruct MRI and CT scan data in XR. Designed with the breast cancer Breast Imaging-Reporting and Data System (BI-RADS) risk lexicon, early detection, and clinical workflows such as Multi-Disciplinary Team (MDT) meetings in mind, our new mammography visualization system reconstructs CT and MRI volumes in a real 3D space. Breast3D builds on past literature and is inspired by research on diagnosis and surgical planning. In addition to visualising the models in MR using the Microsoft HoloLens, Breast3D is versatile and portable to other XR head-mounted displays such as the HTC Vive. Breast3D demonstrates the early potential for XR within diagnostics of 3D mammographic modalities, an application that has been proposed but until now has not been implemented.
doi: 10.1109/AIVR50618.2020.00052
Cited by: 5
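Before CT volumes like those Breast3D reconstructs can be rendered, raw Hounsfield-unit intensities are usually mapped to display values with a center/width window. The abstract does not describe Breast3D's preprocessing; the sketch below shows this standard windowing step in general (function name and default soft-tissue window are illustrative, not taken from the paper):

```python
import numpy as np

def window_ct(volume_hu, center=40.0, width=400.0):
    """Map CT intensities (Hounsfield units) into [0, 1] display values
    using a center/width window; defaults approximate a soft-tissue window."""
    lo, hi = center - width / 2, center + width / 2
    return np.clip((volume_hu - lo) / (hi - lo), 0.0, 1.0)

# A tiny synthetic slice: air (-1000 HU), soft tissue (40 HU), bone (700 HU).
slice_hu = np.array([[-1000.0, 40.0, 700.0]])
print(window_ct(slice_hu))  # air clips to 0, tissue maps to 0.5, bone clips to 1
```

The windowed array can then feed a texture or isosurface stage of an XR rendering pipeline; changing `center`/`width` highlights different tissue types without touching the source volume.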