
Latest publications: 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

ATVR: An Attention Training System using Multitasking and Neurofeedback on Virtual Reality Platform
Menghe Zhang, Junsong Zhang, Dong Zhang
We present an attention training system based on the principles of multitasking training and neurofeedback, which targets both PC and VR platforms. The training system is a video game built around multitasking training and designed for all ages. It uses a non-invasive electroencephalography (EEG) device, the Emotiv EPOC+, to collect EEG signals, applies the wavelet packet transform (WPT) to extract specific components of those signals, and then builds a multi-class support vector machine (SVM) to classify different attention levels. The system is implemented in the Unity game engine and can target both desktops and Oculus VR headsets. We also conducted a preliminary experiment to evaluate the system's effectiveness; the results show that it can generally improve users' multitasking ability and attention level.
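The WPT-energy features plus a multi-class SVM form a standard EEG classification pipeline. The sketch below illustrates that pipeline on synthetic sine-burst "epochs" standing in for real Emotiv EPOC+ recordings; the wavelet choice (`db4`, level 3), band frequencies, and epoch parameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wpt_band_energies(signal, wavelet="db4", level=3):
    # Decompose the signal with a wavelet packet transform and return
    # the energy of each frequency-ordered leaf node as a feature vector.
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    leaves = wp.get_level(level, order="freq")
    return np.array([np.sum(node.data ** 2) for node in leaves])

# Synthetic "EEG" epochs: three attention levels encoded as sine bursts
# at different (hypothetical) band centres plus Gaussian noise.
rng = np.random.default_rng(0)
fs, n_samples = 128, 256
t = np.arange(n_samples) / fs
X, y = [], []
for label, freq in enumerate([6, 12, 24]):
    for _ in range(30):
        epoch = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n_samples)
        X.append(wpt_band_energies(epoch))
        y.append(label)
X, y = np.array(X), np.array(y)

# SVC handles the multi-class case via one-vs-one classifiers internally.
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

Since each synthetic class concentrates its energy in a different WPT band, the classifier separates the three "attention levels" almost perfectly on this toy data.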
DOI: 10.1109/AIVR46125.2019.00032 · Published 2019-10-22
Citations: 5
Immersive Analytics of Large Dynamic Networks via Overview and Detail Navigation
J. Sorger, Manuela Waldner, Wolfgang Knecht, Alessio Arleo
Analysis of large dynamic networks is a thriving research field, typically relying on 2D graph representations. The advent of affordable head-mounted displays has sparked new interest in the potential of 3D visualization for immersive network analytics. Nevertheless, most solutions do not scale well with the number of nodes and edges, and rely on conventional fly- or walk-through navigation. In this paper, we present a novel approach for the exploration of large dynamic graphs in virtual reality that interweaves two navigation metaphors: overview exploration and immersive detail analysis. We thereby use the potential of state-of-the-art VR headsets, coupled with a web-based 3D rendering engine that supports heterogeneous input modalities, to enable ad-hoc immersive network analytics. We validate our approach through a performance evaluation and a case study with experts analyzing medical data.
DOI: 10.1109/AIVR46125.2019.00030 · Published 2019-10-15
Citations: 18
Implementing Position-Based Real-Time Simulation of Large Crowds
Tomer Weiss, Alan Litteneker, Chenfanfu Jiang, Demetri Terzopoulos
Various methods have been proposed for simulating crowds of agents in recent years. Regrettably, not all remain computationally scalable as the number of simulated agents grows. Such scalability is particularly important for virtual production, gaming, and immersive reality platforms. In this work, we provide an open-source implementation of the recently proposed position-based dynamics approach to crowd simulation. Position-based crowd simulation has been shown to run in real time and scale to crowds of up to 100k agents while retaining dynamic agent and group behaviors. We provide both a serial and a GPU-based implementation. Our implementation is demonstrated on several scenarios, including examples from the original work. We observe interactive run times as well as visually realistic collective behavior.
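The core loop of a position-based crowd step — predict positions from preferred velocities, then iteratively project pairwise non-penetration constraints — can be sketched as below. This is a minimal illustration of the idea, not the paper's actual implementation (which is GPU-parallel and handles far richer behaviors); all parameter values are assumptions.

```python
import numpy as np

def pbd_crowd_step(pos, goals, radius=0.5, dt=0.1, iters=3, max_speed=1.3):
    """One simplified position-based crowd step for 2D agents."""
    # Preferred velocity: move at max_speed toward each agent's goal.
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
    pref = np.where(dist > 1e-9, to_goal / np.maximum(dist, 1e-9) * max_speed, 0.0)
    pred = pos + pref * dt                      # predicted positions

    n = len(pos)
    for _ in range(iters):                      # iterative constraint projection
        for i in range(n):
            for j in range(i + 1, n):
                d = pred[i] - pred[j]
                l = np.linalg.norm(d)
                overlap = 2 * radius - l
                if overlap > 0 and l > 1e-9:
                    corr = 0.5 * overlap * d / l
                    pred[i] += corr             # push the pair apart symmetrically
                    pred[j] -= corr
    new_vel = (pred - pos) / dt                 # velocity implied by the correction
    return pred, new_vel
```

For example, two agents walking toward each other's positions end the step non-overlapping, because the constraint projection separates them to at least two radii apart. The O(n²) pair loop is where a real implementation substitutes spatial hashing and GPU parallelism.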
DOI: 10.1109/AIVR46125.2019.00071 · Published 2019-02-01
Citations: 19
Unsupervised Learning of Depth and Ego-Motion From Cylindrical Panoramic Video
Alisha Sharma, Jonathan Ventura
We introduce a convolutional neural network model for unsupervised learning of depth and ego-motion from cylindrical panoramic video. Panoramic depth estimation is an important technology for applications such as virtual reality, 3D modeling, and autonomous robotic navigation. In contrast to previous approaches for applying convolutional neural networks to panoramic imagery, we use the cylindrical panoramic projection, which allows the use of traditional CNN layers such as convolutional filters and max pooling without modification. Our evaluation on synthetic and real data shows that unsupervised learning of depth and ego-motion on cylindrical panoramic images can produce high-quality depth maps, and that an increased field of view improves ego-motion estimation accuracy. We also introduce Headcam, a novel dataset of panoramic video collected from a helmet-mounted camera while biking in an urban setting.
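A practical detail that makes standard convolutions behave correctly on a cylindrical projection is padding: the left and right image edges are adjacent on the cylinder, so horizontal padding should wrap around rather than zero-fill. The helper below is a hedged sketch of that idea (a common technique for cylindrical panoramas, not necessarily the authors' exact implementation):

```python
import numpy as np

def cylindrical_pad(img, pad):
    """Pad a 2D cylindrical panorama before convolution:
    wrap columns (the horizontal seam is continuous around the cylinder),
    replicate rows (the top and bottom are true image boundaries)."""
    # Horizontal: take `pad` columns from each side and wrap them around.
    img = np.concatenate([img[:, -pad:], img, img[:, :pad]], axis=1)
    # Vertical: replicate the first and last rows `pad` times.
    img = np.concatenate(
        [img[:1].repeat(pad, axis=0), img, img[-1:].repeat(pad, axis=0)], axis=0
    )
    return img
```

After this padding, an unmodified "valid" convolution sees a seamless cylinder horizontally, which is what lets conventional CNN layers be applied to the projection as-is.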
DOI: 10.1109/AIVR46125.2019.00018 · Published 2019-01-04
Citations: 1