Latest Publications: 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

A Virtual Reality Based Simulator for Training Surgical Skills in Procedure of Catheter Ablation
Haoyu Wang, Sheng Jiang, Jianhuang Wu
We present a VR-based simulator for training surgical skills in the procedure of catheter ablation. Based on multi-body dynamics, we propose a novel method to simulate the interactive behavior of the surgical devices and the human vascular system. An estimation-based optimization technique and a track-based motion control strategy are proposed to make the simulation efficient enough to achieve high performance. The beating of the human heart is also simulated in real time within the position-based dynamics framework. Results demonstrate that our simulator provides a realistic, effective, and stable environment for trainees to acquire essential surgical skills.
Citations: 6
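The abstract names the position-based dynamics (PBD) framework for the real-time heartbeat simulation. As background, here is a minimal sketch of a PBD step with distance constraints over a simple particle model; it illustrates the framework in general, not the authors' solver, and all names and parameters are illustrative.

```python
import numpy as np

def pbd_step(x, v, inv_mass, constraints, dt, iterations=10,
             gravity=np.array([0.0, -9.81, 0.0])):
    """One position-based dynamics step over a particle system.

    x           : (N, 3) particle positions
    v           : (N, 3) particle velocities
    inv_mass    : (N,) inverse masses (0 pins a particle in place)
    constraints : list of (i, j, rest_length) distance constraints
    """
    # Predict positions from external forces (gravity only, for brevity).
    v = v + dt * gravity * (inv_mass > 0)[:, None]
    p = x + dt * v

    # Gauss-Seidel projection of the distance constraints.
    for _ in range(iterations):
        for i, j, rest in constraints:
            d = p[i] - p[j]
            dist = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w == 0.0:
                continue
            corr = (dist - rest) / (dist * w) * d
            p[i] -= inv_mass[i] * corr
            p[j] += inv_mass[j] * corr

    # Velocities follow from the corrected positions.
    v = (p - x) / dt
    return p, v
```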
Supporting the Sense of Unity between Remote Audiences in VR-Based Remote Live Music Support System KSA2
Tatsuyoshi Kaneko, H. Tarumi, Keiya Kataoka, Yuki Kubochi, Daiki Yamashita, Tomoki Nakai, Ryota Yamaguchi
We are developing a system to support remote audiences of live music shows. At live shows of rock or popular music, audiences take actions in response to the music; this is a style of nonverbal communication between audience members for sharing emotion. The "sense of unity", often mentioned by musicians and audiences, is key to a successful live performance. This research aims to enable remote audiences to exchange nonverbal, body-action communication with one another in a VR environment. We have developed a prototype system and conducted evaluation sessions. Most of the participants in the sessions felt the sense of unity.
Citations: 15
A Method to Build Multi-Scene Datasets for CNN for Camera Pose Regression
Yuhao Ma, Hao Guo, Hong Chen, Mengxiao Tian, Xin Huo, Chengjiang Long, Shiye Tang, Xiaoyu Song, Qing Wang
Convolutional neural networks (CNNs) have been shown to be useful for camera pose regression, and they are robust to challenging scenarios such as lighting changes, motion blur, and scenes with many textureless surfaces. Additionally, PoseNet shows that a deep learning system can interpolate the camera pose in the space between training images. In this paper, we explore how different strategies for processing datasets affect pose regression and propose a method for building multi-scene datasets for training such neural networks. We demonstrate that the locations of several scenes can be remembered using only one neural network. By combining multiple scenes, we found that the position errors of the neural network do not decrease significantly as the distance between the cameras increases, which means that we do not need to train separate models as the number of scenes increases. We also explore the factors that influence the accuracy of models for multi-scene camera pose regression, which can help merge several scenes into one dataset more effectively. We have released our code and datasets to the public to facilitate further research.
Citations: 1
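For context on the PoseNet-style regression that this work builds on, the sketch below shows a CNN pose regressor with the position-plus-quaternion loss used in that line of work. The ResNet-18 backbone and the weighting factor beta are assumptions chosen for illustration, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PoseRegressor(nn.Module):
    """PoseNet-style regressor: CNN backbone plus a 7-DoF pose head
    (3-D position and a unit quaternion). Illustrative only."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()        # drop the ImageNet classifier
        self.backbone = backbone
        self.fc_xyz = nn.Linear(512, 3)    # translation head
        self.fc_quat = nn.Linear(512, 4)   # orientation head

    def forward(self, img):
        feat = self.backbone(img)
        xyz = self.fc_xyz(feat)
        quat = self.fc_quat(feat)
        return xyz, quat / quat.norm(dim=1, keepdim=True)

def pose_loss(xyz_pred, quat_pred, xyz_gt, quat_gt, beta=250.0):
    # beta trades metres against quaternion units, as in the PoseNet papers.
    return ((xyz_pred - xyz_gt).norm(dim=1)
            + beta * (quat_pred - quat_gt).norm(dim=1)).mean()
```

One way to train such a model on a multi-scene dataset is to mix images from all scenes in a single loader; how that merging affects accuracy is what the paper studies.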
Motorcycle Riding Safety Education with Virtual Reality
Chun-Chia Hsu, Y. Chen, Wen Ching Chou, Shih-Hsuan Huang, Kai-Kuo Chang
Young novice drivers are the group of drivers most likely to crash, and they tend to misidentify potential hazards in the traffic environment. A number of factors contribute to the high crash risk these drivers face; age and lack of driving experience are the main factors behind young drivers' increased risk of being involved in a road traffic collision. In Taiwan, when novice drivers pass the driving test, they must attend classroom instruction to learn safe driving from fatal crash cases, but classroom instruction has limited direct benefit for the safety of new drivers. In this paper, the researchers use the Unity game engine to develop an Android mobile game that enhances learning in driving classroom instruction. The game is played at three levels: learning, examination, and free driving. The researchers also develop a VR game and simulator based on the mobile game, which can be used to enhance learning in driving education.
Citations: 4
Augmented Reality Simulation of Cardiac Circulation Using APPLearn (Heart)
R. Ba, Yiyu Cai, Yunqing Guan
Cardiac circulation has traditionally been difficult for both high school and college biology students to learn due to its complexity and dynamic processes. Cadavers can be a good tool for students to learn the cardiovascular system. Unfortunately, this approach comes with major drawbacks: the anatomic structures are collapsed and there is no blood flow. Cadavers are also normally available only in medical schools. Human subjects, especially cardiac patients, are ideal for students to learn human circulation from the perspectives of both structure and function; this, however, is infeasible due to ethical and other constraints. As such, most students learn cardiac circulation by reading textbooks, attending lectures, viewing images, and perhaps manipulating heart models. In this paper, we present our efforts in developing augmented reality (AR) technology to enhance the learning of cardiac circulation. More specifically, a book-based AR app, APPLearn (Heart), is designed to let each and every student learn cardiac structure and function through interactive play.
Citations: 7
IEEE AIVR 2018 Organizing Committee
{"title":"IEEE AIVR 2018 Organizing Committee","authors":"","doi":"10.1109/aivr.2018.00007","DOIUrl":"https://doi.org/10.1109/aivr.2018.00007","url":null,"abstract":"","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"356 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132894236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FoodChangeLens: CNN-Based Food Transformation on HoloLens
Shu Naritomi, Ryosuke Tanno, Takumi Ege, Keiji Yanai
In this demonstration, we implement food category transformation in mixed reality using both image generation and HoloLens. Our system overlays transformed food images onto food objects in the AR space, so the transformation can take the food's real shape into account. This system has the potential to make meals more enjoyable. In this work, we use a Conditional CycleGAN, trained on large-scale food image data collected from the Twitter stream, for food category transformation; it can transform mutually among ten kinds of food while keeping the shape of the given food. We show a virtual meal experience of food category transformation among ten typical Japanese foods: ramen noodle, curry rice, fried rice, beef rice bowl, chilled noodle, spaghetti with meat sauce, white rice, eel bowl, and fried noodle. Additional results, including demo videos, can be seen at https://negi111111.github.io/FoodChangeLensProjectHP/
Citations: 8
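The Conditional CycleGAN itself is not reproduced here, but the core idea of conditioning a single generator on a target food category can be sketched as below. The architecture, channel counts, and one-hot class broadcast are simplified assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal conditional image-to-image generator: the target food
    category is broadcast as extra input channels so that one network
    can map among all ten classes. Illustrative sketch only."""

    def __init__(self, num_classes=10, ch=64):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_classes, ch, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, kernel_size=7, padding=3),
            nn.Tanh(),  # image output in [-1, 1]
        )

    def forward(self, img, target_class):
        b, _, h, w = img.shape
        # One-hot encode the target class and tile it over the image plane.
        onehot = torch.zeros(b, self.num_classes, device=img.device)
        onehot[torch.arange(b), target_class] = 1.0
        cond = onehot[:, :, None, None].expand(b, self.num_classes, h, w)
        return self.net(torch.cat([img, cond], dim=1))

# Hypothetical usage: transform a batch of ramen images into curry rice,
# where CURRY_RICE is an assumed class index.
# fake = gen(ramen_batch, torch.full((ramen_batch.size(0),), CURRY_RICE))
```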
Evaluating the Effects of a Cartoon-Like Character with Emotions on Users' Behaviour within Virtual Reality Environments
D. Monteiro, Hai-Ning Liang, Jialin Wang, Luhan Wang, Xian Wang, Yong Yue
In this research we explore the effect of a virtual avatar that is non-human-like and can express basic, distinguishable emotions on users' level of engagement and interest. Virtual reality (VR) environments are able to render realistic representations. However, not all virtual environments require life-like representations of their characters; in our research, a 'life-like' human character means one that resembles an actual person in real life very closely. It is very common for games to use simple non-human characters, and cartoon-like characters can actually have a greater impact on users' affinity towards these games. The aim of this research is to examine whether interactions with a cartoon-like character that can express simple but common emotional expressions are sufficient to bring about a change in users' behavior and level of engagement with the character. This research seeks to find out whether adding simple emotions to virtual characters is beneficial for increasing users' interest. To explore these questions, we conducted a study with a human-like cartoon character in a VR environment that can express simple, basic human emotions based on users' input. The results of our experiment show that a cartoon-like character can benefit from displaying emotional traits or responses when interacting with humans in a VR environment.
Citations: 10
Gesture and Action Discovery for Evaluating Virtual Environments with Semi-Supervised Segmentation of Telemetry Records
A. Batch, Kyungjun Lee, H. Maddali, N. Elmqvist
In this paper, we propose a novel pipeline for semi-supervised behavioral coding of videos of users testing a device or interface, with an eye toward human-computer interaction evaluation for virtual reality. Our system applies existing statistical techniques for time-series classification, including e-divisive change point detection and "Symbolic Aggregate approXimation" (SAX) with agglomerative hierarchical clustering, to 3D pose telemetry data. These techniques create classes of short segments of single-person video data: short actions of potential interest called "micro-gestures." A long short-term memory (LSTM) layer then learns these micro-gestures from pose features generated purely from video via a pre-trained OpenPose convolutional neural network (CNN), in order to predict their occurrence in unlabeled test videos. We present and discuss the results of testing our system on the single-user pose videos of the CMU Panoptic Dataset.
Citations: 2
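Of the techniques named in the abstract, SAX is the most self-contained; below is a minimal sketch of SAX symbolization of a single telemetry channel. Segment and alphabet sizes are illustrative, and the paper's full pipeline additionally applies e-divisive change-point detection, agglomerative clustering, and an LSTM, none of which are shown.

```python
import numpy as np
from scipy.stats import norm

def sax_symbolize(series, n_segments=16, alphabet_size=5):
    """Symbolic Aggregate approXimation of one time series:
    z-normalize, reduce with piecewise aggregate approximation (PAA),
    then map segment means to letters via Gaussian breakpoints."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-9)      # z-normalize

    # PAA: the mean of each (roughly) equal-width segment.
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])

    # Breakpoints that cut N(0, 1) into equiprobable regions.
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])

    # searchsorted assigns each segment mean to a region, i.e. a letter.
    symbols = np.searchsorted(breakpoints, paa)
    return "".join(chr(ord("a") + int(s)) for s in symbols)

# e.g. sax_symbolize(np.sin(np.linspace(0, 4 * np.pi, 200)))
# yields a 16-letter word; recurring substrings mark similar motions.
```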
Understanding Head-Mounted Display FOV in Maritime Search and Rescue Object Detection
Susannah Soon, A. Lugmayr, A. Woods, T. Tan
Object detection when viewing head-mounted display (HMD) imagery for maritime search and rescue (SAR) detection tasks poses many challenges; for example, objects are difficult to distinguish due to low contrast or low observability. We survey existing artificial intelligence (AI) image processing algorithms that improve object detection performance. We also examine central and peripheral vision in the human visual system (HVS) and their relation to field of view (FOV) when viewing such images on HMDs. We present results from our user study, which simulates different maritime scenes used in object detection tasks. Users are tested on viewing sample images with different visual features over different FOVs, to inform the development of an AI algorithm for object detection.
Citations: 3