
Latest publications from the 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

Binarization Using Morphological Decomposition Followed by cGAN
Cheng-Pan Hsieh, Shih-Kai Lee, Ya-Yi Liao, R. Huang, Jung-Hua Wang
This paper presents a novel binarization scheme for stained decipherable patterns. First, the input image is downsized, which not only saves computation time but also preserves the key features necessary for successful decoding. Then, high- and low-contrast areas are separated by applying morphological operators to the downsized grayscale image and subtracting the two resulting output images from each other. If necessary, these areas are further decomposed to obtain a finer separation of regions. After this preprocessing, binarization can be performed either by using a Gaussian mixture model (GMM) to estimate a binarization threshold for each region, or by treating binarization as an image-translation task and training a conditional generative adversarial network (cGAN) with the high- and low-contrast areas as conditional inputs.
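As a rough illustration of the morphological decomposition step, the sketch below (numpy only; the 3x3 structuring element and the closing-minus-opening difference are illustrative assumptions, not the authors' exact operators) separates high-contrast areas of a grayscale image:

```python
import numpy as np

def erode(img, k=3):
    """Grayscale erosion: each pixel becomes the minimum of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    """Grayscale dilation: each pixel becomes the maximum of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def contrast_decompose(gray, k=3):
    """Subtract morphological opening from closing; the difference is large
    where thin, high-contrast structure (e.g. pattern strokes) is present."""
    closing = erode(dilate(gray, k), k)
    opening = dilate(erode(gray, k), k)
    return closing - opening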
DOI: 10.1109/AIVR46125.2019.00044
Citations: 0
Civil War Battlefield Experience: Historical Event Simulation using Augmented Reality Technology
V. Nguyen, Kwanghee Jung, Seung-Chul Yoo, Seungman Kim, Sohyun Park, M. Currie
In recent years, with the development of modern technology, Virtual Reality (VR) has proven to be an effective means of entertaining users and encouraging learning. Users immerse themselves in a 3D environment to experience situations that are very difficult or impossible to encounter in real life, such as volcanoes, ancient buildings, or events on a battlefield. Augmented Reality (AR), on the other hand, takes a different approach by allowing users to remain in their physical world while virtual objects are overlaid on physical ones. In education and tourism, VR and AR are becoming platforms for student learning and for tourist attractions. Although several studies have been conducted to promote cultural preservation, they mostly focus on VR for visualizing historical buildings. The use of AR to simulate an event is relatively uncommon, especially a battlefield simulation. This paper presents a work in progress, specifically a web-based AR application that enables both students and tourists to witness a series of battlefield events from the Battle of Palmito Ranch, which took place near Brownsville, Texas. With markers embedded directly into a printed map, users can experience the last battle of the US Civil War.
DOI: 10.1109/AIVR46125.2019.00068
Citations: 9
Combining Pairwise Feature Matches from Device Trajectories for Biometric Authentication in Virtual Reality Environments
A. Ajit, N. Banerjee, Sean Banerjee
In this paper, we provide an approach for seamless, continual biometric authentication of users in virtual reality (VR) environments by combining position and orientation features from the headset, right-hand controller, and left-hand controller of a VR system. The rapid growth of VR in mission-critical applications in military training, flight simulation, therapy, manufacturing, and education necessitates authenticating users based on their actions within the VR space, as opposed to traditional PIN- and password-based approaches. To mimic goal-oriented interactions as they may occur in VR environments, we capture a VR dataset of trajectories from 33 users throwing a ball at a virtual target, with 10 samples per user captured on a training day and 10 samples on a test day. Because the number of training samples per user is sparse, as is typical of realistic interactions, we perform authentication using pairwise relationships between trajectories. Our approach uses a perceptron classifier to learn weights on the matches between position and orientation features of two trajectories from the headset and hand controllers, such that a low classifier score is obtained for trajectories belonging to the same user and a high score otherwise. We also extensively evaluate how the choice of position and orientation features, the combination of devices, and the choice of match metrics and trajectory alignment method affect accuracy, and demonstrate a maximum accuracy of 93.03% for matching 10 test actions per user using orientation from the right-hand controller and headset.
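The pairwise scheme can be sketched as follows (a minimal, hypothetical illustration: `pair_features`, the single-channel synthetic trajectories, and the plain perceptron update are stand-ins for the paper's actual position/orientation features and alignment method):

```python
import numpy as np

def pair_features(traj_a, traj_b):
    """Per-channel mean absolute difference between two equal-length
    trajectories (rows = time samples, columns = tracked channels,
    e.g. headset or controller position/orientation components)."""
    return np.abs(traj_a - traj_b).mean(axis=0)

def train_pair_perceptron(pairs, labels, lr=0.1, epochs=50):
    """labels: 0 = same user (genuine pair), 1 = different users (impostor).
    Learns weights so the weighted distance score is low for genuine
    pairs and high for impostor pairs."""
    dim = pairs[0][0].shape[1] + 1          # +1 for a bias term
    w = np.zeros(dim)
    for _ in range(epochs):
        for (a, b), y in zip(pairs, labels):
            x = np.append(pair_features(a, b), 1.0)
            pred = 1 if w @ x > 0 else 0
            w += lr * (y - pred) * x        # classic perceptron update
    return w

def match_score(w, traj_a, traj_b):
    """Low score -> the two trajectories likely belong to the same user."""
    return float(w @ np.append(pair_features(traj_a, traj_b), 1.0))
```

Authentication then reduces to thresholding `match_score` between a probe trajectory and the user's enrolled trajectories.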
DOI: 10.1109/AIVR46125.2019.00012
Citations: 28
Ubiquitous Virtual Humans: A Multi-platform Framework for Embodied AI Agents in XR
Arno Hartholt, Edward Fast, Adam Reilly, W. Whitcup, Matt Liewer, S. Mozgai
We present an architecture and framework for the development of virtual humans for a range of computing platforms, including mobile, web, Virtual Reality (VR) and Augmented Reality (AR). The framework uses a mix of in-house and commodity technologies to support audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation and realization, text-to-speech generation, and rendering. This work builds on the Virtual Human Toolkit, which has been extended to support computing platforms beyond Windows. The resulting framework maintains the modularity of the underlying architecture, allows re-use of both logic and content through cloud services, and is extensible by porting lightweight clients. We present the current state of the framework, discuss how we model and animate our characters, and offer lessons learned through several use cases, including expressive character animation in seated VR, shared space and navigation in room-scale VR, autonomous AI in mobile AR, and real-time user performance feedback based on mobile sensors in headset AR.
DOI: 10.1109/AIVR46125.2019.00072
Citations: 16
Room Style Estimation for Style-Aware Recommendation
E. Cansizoglu, Hantian Liu, Tomer Weiss, Archi Mitra, Dhaval Dholakia, Jae-Woo Choi, D. Wulin
Interior design is a complex task, as evidenced by the multitude of professionals, websites, and books offering design advice. Moreover, such advice is highly subjective in nature, since different experts may hold different interior design opinions. Our goal is to offer data-driven recommendations for an interior design task that reflect an individual's room style preferences. We present a style-based image suggestion framework that searches for room ideas and relevant products for a given query image. We train a deep neural network classifier using a VGG architecture, focusing on high-volume classes with high-agreement samples. The resulting model shows promising results and paves the way for style-aware product recommendation in virtual reality platforms for 3D room design.
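The "high-volume classes with high-agreement samples" filtering could be sketched like this (function name, thresholds, and data layout are illustrative assumptions, not the paper's pipeline):

```python
from collections import Counter

def select_high_agreement(samples, min_agreement=0.8, min_class_size=50):
    """samples: list of (sample_id, [style labels from multiple annotators]).
    Keep samples whose majority label reaches the agreement threshold,
    then keep only classes that retain enough samples (high volume)."""
    kept = []
    for sid, votes in samples:
        label, count = Counter(votes).most_common(1)[0]
        if count / len(votes) >= min_agreement:
            kept.append((sid, label))
    class_sizes = Counter(label for _, label in kept)
    return [(sid, lab) for sid, lab in kept if class_sizes[lab] >= min_class_size]
```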
DOI: 10.1109/AIVR46125.2019.00062
Citations: 8
Open-Source Physiological Computing Framework using Heart Rate Variability in Mobile Virtual Reality Applications
Luis Quintero, P. Papapetrou, J. Muñoz
Electronic and mobile health technologies are positioned as tools that can promote self-care and extend coverage to bridge the gap in access to mental care services between low- and high-income communities. However, current technology-based mental health interventions rely on systems that are cumbersome, expensive, or require specialized knowledge to operate. This paper describes PARE-VR, an open-source framework that provides heart rate variability (HRV) analysis to mobile virtual reality (VR) applications. It further outlines the advantages of the presented architecture: as an initial step toward more scalable mental health therapies than current technical setups allow, and as an approach capable of merging physiological data with artificial intelligence agents to provide computing systems with user understanding and adaptive functionality. Furthermore, PARE-VR is evaluated in a feasibility study using a specific relaxation exercise with slow-paced breathing. The aim of the study is to gain insight into system performance and its capability to detect HRV metrics in real time, as well as to identify changes between normal and slow-paced breathing using the HRV data. Preliminary results of the study, in which eleven volunteers participated, showed high user engagement with the VR activity and demonstrated the framework's technical potential for creating physiological computing systems that use mobile VR and wearable smartwatches for scalable health interventions. The study yielded several insights and recommendations for enhancing real-time HRV analysis and for conducting similar studies in the future.
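For context, two standard time-domain HRV metrics of the kind such a framework computes from beat-to-beat (RR) intervals can be written in a few lines (a generic sketch, not the PARE-VR API):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms);
    a common short-term HRV metric sensitive to vagal activity."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def sdnn(rr_ms):
    """Standard deviation of RR intervals (ms); an overall-variability metric."""
    return float(np.std(np.asarray(rr_ms, dtype=float)))
```

Slow-paced breathing typically increases such metrics relative to normal breathing, which is the kind of change the feasibility study aims to detect.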
DOI: 10.1109/AIVR46125.2019.00027
Citations: 2
Situation-Adaptive Object Grasping Recognition in VR Environment
Koki Hirota, T. Komuro
In this paper, we propose a method for recognizing the grasping of virtual objects in a VR environment. The proposed method exploits the fact that the position and shape of the virtual object to be grasped are known. A camera acquires an image of the user grasping a virtual object, and the posture of the hand is extracted from that image. The extracted hand posture is then used to classify whether or not it represents a grasping action. To evaluate the proposed method, we created a new dataset specialized for grasping virtual objects with a bare hand; the dataset contains virtual objects of three shapes at three positions. The recognition rate of the classifier trained on the subset of the dataset with a specific virtual-object shape was 93.18%, while that of the classifier trained on all shapes was 87.71%. This result shows that the recognition rate is improved by training the classifier on a shape-dependent dataset.
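A minimal stand-in for the grasp/no-grasp decision over extracted hand-posture features might look like this (a nearest-centroid toy classifier; the paper does not specify its classifier or feature set, so everything here is an illustrative assumption):

```python
import numpy as np

class PostureClassifier:
    """Nearest-centroid classifier over hand-posture feature vectors
    (e.g. joint angles); label 1 = grasping, 0 = not grasping."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # One centroid per class: the mean posture of that class.
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Assign each posture to the class of the nearest centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]
```

A shape-dependent setup, as in the paper's better-performing configuration, would fit one such classifier per known object shape.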
DOI: 10.1109/AIVR46125.2019.00035
Citations: 1
The Design Process for Enhancing Visual Expressive Qualities of Characters from Performance Capture into Virtual Reality
Victoria Campbell
In designing performances for virtual reality, one must consider the unique qualities of the VR medium in order to deliver expressive character performances. This means that the design requirements for participant engagement and immersion must evolve to address these new possibilities. To address the need for evolving an expressive character performance for VR, a five-step production framework is proposed that covers directing, performance capture, the cyclical stages of retargeting and animation refinement, and the translation of movement to avatars in VR.
DOI: 10.1109/AIVR46125.2019.00067
Citations: 0
Extended Abstract: Augmented Reality for Human-Robot Cooperation in Aircraft Assembly
A. Luxenburger, Jonas Mohr, Torsten Spieldenner, Dieter Merkel, Fabio Andres Espinosa Valcarcel, Tim Schwartz, Florian Reinicke, Julian Ahlers, Markus Stoyke
This extended abstract and the accompanying demonstration video show how Augmented Reality (AR) can be used in an industrial setting to coordinate a hybrid team consisting of a human worker and two robots in order to rivet stringers and ties to an aircraft hull.
DOI: 10.1109/AIVR46125.2019.00052
Citations: 1
Deep Learning on VR-Induced Attention
Gang Li, Muhammad Adeel Khan
Some evidence suggests that virtual reality (VR) approaches may lead to greater attentional focus than experiencing the same scenarios on computer monitors. The aim of this study is to differentiate attention levels captured during a perceptual discrimination task presented on two different viewing platforms, a standard personal computer (PC) monitor and a head-mounted display (HMD) for VR, using a well-described electroencephalography (EEG)-based measure (parietal P3b latency) and a deep learning-based measure (EEG features extracted by a compact convolutional neural network, EEGNet, and visualized with a gradient-based relevance attribution method, DeepLIFT). Twenty healthy young adults participated in the perceptual discrimination task, in which they were required to discriminate, according to a spatial cue, between "Target" and "Distractor" stimuli on the screen of each viewing platform. Experimental results show that the EEGNet-based classification accuracies are highly correlated with the p values of the statistical analysis of P3b latency. Moreover, the visualized EEG features are neurophysiologically interpretable. This study provides the first visualized deep learning-based EEG features captured during an HMD-VR-based attentional task.
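Measuring a parietal P3b latency from an averaged ERP waveform can be sketched as follows (the function name and the 300-600 ms search window are illustrative assumptions; the study's exact measurement procedure may differ):

```python
import numpy as np

def p3b_latency(erp, times, window=(0.3, 0.6)):
    """Latency (s) of the most positive deflection within the P3b
    search window of a single-channel averaged ERP waveform.
    erp and times are equal-length 1-D arrays."""
    mask = (times >= window[0]) & (times <= window[1])
    peak = np.argmax(erp[mask])          # index of the maximum amplitude
    return float(times[mask][peak])
```

Per-condition latencies computed this way are what the statistical analysis (the P3b p values mentioned above) would compare across viewing platforms.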
{"title":"Deep Learning on VR-Induced Attention","authors":"Gang Li, Muhammad Adeel Khan","doi":"10.1109/AIVR46125.2019.00033","DOIUrl":"https://doi.org/10.1109/AIVR46125.2019.00033","url":null,"abstract":"Some evidence suggests that virtual reality (VR) approaches may lead to a greater attentional focus than experiencing the same scenarios presented on computer monitors. The aim of this study is to differentiate attention levels captured during a perceptual discrimination task presented on two different viewing platforms, standard personal computer (PC) monitor and head-mounted-display (HMD)-VR, using a well-described electroencephalography (EEG)-based measure (parietal P3b latency) and deep learning-based measure (that is EEG features extracted by a compact convolutional neural network-EEGNet and visualized by a gradient-based relevance attribution method-DeepLIFT). Twenty healthy young adults participated in this perceptual discrimination task in which according to a spatial cue they were required to discriminate either a \"Target\" or \"Distractor\" stimuli on the screen of viewing platforms. Experimental results show that the EEGNet-based classification accuracies are highly correlated with the p values of statistical analysis of P3b. Also, the visualized EEG features are neurophysiologically interpretable. 
This study provides the first visualized deep learning-based EEG features captured during an HMD-VR-based attentional task.","PeriodicalId":274566,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132495093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5