
Latest publications: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

Eye Tracking Data Collection Protocol for VR for Remotely Located Subjects using Blockchain and Smart Contracts
Efe Bozkir, Shahram Eivazi, Mete Akgün, Enkelejda Kasneci
Eye tracking data collection in the virtual reality context is typically carried out in laboratory settings, which usually limits the number of participants or consumes at least several months of research time. In addition, under laboratory settings, subjects may not behave naturally due to being recorded in an uncomfortable environment. In this work, we propose a proof-of-concept eye tracking data collection protocol and its implementation to collect eye tracking data from remotely located subjects, particularly for virtual reality, using the Ethereum blockchain and smart contracts. With the proposed protocol, data collectors can collect high-quality eye tracking data from a large number of human subjects with heterogeneous socio-demographic characteristics. The quality and the amount of data can be helpful for various tasks in data-driven human-computer interaction and artificial intelligence.
DOI: 10.1109/AIVR50618.2020.00083 · Published 2020-10-23
Citations: 2
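As a rough illustration of the kind of protocol the abstract describes, the sketch below simulates an escrow-and-commitment flow in plain Python. The paper itself deploys this on Ethereum; the class, method names, and payment logic here are hypothetical stand-ins for discussion, not the authors' implementation.

```python
import hashlib


class EyeTrackingDataContract:
    """Toy in-memory stand-in for an on-chain contract: a data collector
    escrows a per-submission reward; remote subjects commit a hash of
    their eye-tracking recording; the collector checks the delivered
    file against the commitment before the reward is released."""

    def __init__(self, collector, reward_wei, budget_wei):
        assert budget_wei >= reward_wei, "budget must cover at least one reward"
        self.collector = collector
        self.reward = reward_wei
        self.balance = budget_wei      # escrowed funds
        self.commitments = {}          # subject address -> data hash
        self.payouts = {}              # subject address -> amount paid

    def submit_commitment(self, subject, data_hash):
        # A subject commits to their recording without putting it on-chain.
        if self.balance < self.reward:
            raise RuntimeError("budget exhausted")
        self.commitments[subject] = data_hash

    def release_payment(self, subject, delivered_bytes):
        # The delivered off-chain file must match the earlier commitment.
        digest = hashlib.sha256(delivered_bytes).hexdigest()
        if self.commitments.get(subject) != digest:
            raise ValueError("delivered data does not match commitment")
        self.balance -= self.reward
        self.payouts[subject] = self.reward


# Example run with one remote subject.
recording = b"gaze samples ..."
c = EyeTrackingDataContract("0xCollector", reward_wei=10, budget_wei=30)
c.submit_commitment("0xSubject1", hashlib.sha256(recording).hexdigest())
c.release_payment("0xSubject1", recording)
```

The commit-then-deliver split is what lets the payment logic live on-chain while the (large, privacy-sensitive) eye-tracking data stays off-chain.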
Comfort of Aircraft Seats for Customers of Size using Digital Human Model in Virtual Reality
Sara Panicker, T. Huysmans
This paper looks at how Digital Human models can be used with Virtual Reality to understand seat comfort in airplane economy-class seats from the perspective of obese passengers. Participants were placed in a virtual environment similar to an economy-class cabin and were asked to rate their perception of the space and other comfort parameters. The results showed that the participants experienced a space crunch when they saw through the perspective of an obese person. This paper is a step towards ergonomics analyses using Digital Human Modeling and Virtual Reality.
DOI: 10.1109/AIVR50618.2020.00045 · Published 2020-10-16
Citations: 1
Style-transfer GANs for bridging the domain gap in synthetic pose estimator training
Pavel Rojtberg, Thomas Pollabauer, Arjan Kuijper
Given the dependency of current CNN architectures on a large training set, the possibility of using synthetic data is alluring as it allows generating a virtually infinite amount of labeled training data. However, producing such data is a nontrivial task as current CNN architectures are sensitive to the domain gap between real and synthetic data. We propose to adopt general-purpose GAN models for pixel-level image translation, allowing to formulate the domain gap itself as a learning problem. The obtained models are then used either during training or inference to bridge the domain gap. Here, we focus on training the single-stage YOLO6D [20] object pose estimator on synthetic CAD geometry only, where not even approximate surface information is available. When employing paired GAN models, we use an edge-based intermediate domain and introduce different mappings to represent the unknown surface properties. Our evaluation shows a considerable improvement in model performance when compared to a model trained with the same degree of domain randomization, while requiring only very little additional effort.
DOI: 10.1109/AIVR50618.2020.00039 · Published 2020-04-28
Citations: 14
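The idea of an edge-based intermediate domain can be illustrated with a minimal sketch: both a synthetic CAD render and a real photo are mapped to a binary edge image, a representation in which the two domains look far more alike. The Sobel-based function below is a hypothetical illustration of such a mapping under that assumption, not the paper's actual pipeline.

```python
import numpy as np


def to_edge_domain(img, thresh=0.2):
    """Map a grayscale image (H x W, values in [0, 1]) into a binary
    edge image -- a shared intermediate domain in which synthetic CAD
    renders and real photos of the same object look alike."""
    # Sobel kernels for horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Accumulate the 3x3 correlation via shifted views (valid region only).
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # Normalize and threshold to get a binary edge map.
    return (mag / (mag.max() + 1e-8)) > thresh


# A flat synthetic "render": a bright square on a dark background.
render = np.zeros((16, 16))
render[4:12, 4:12] = 1.0
edges = to_edge_domain(render)
```

Note that the flat interior of the square vanishes in the edge domain; only the silhouette survives, which is exactly the information a textureless CAD render can supply.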
Shooting Labels: 3D Semantic Labeling by Virtual Reality
Pierluigi Zama Ramirez, Claudio Paternesi, Daniele De Gregorio, L. D. Stefano
Availability of a few, large-size, annotated datasets, like ImageNet, Pascal VOC and COCO, has led deep learning to revolutionize computer vision research by achieving astonishing results in several vision tasks. We argue that new tools to facilitate generation of annotated datasets may help spread data-driven AI throughout applications and domains. In this work we propose Shooting Labels, the first 3D labeling tool for dense 3D semantic segmentation which exploits Virtual Reality to render the labeling task as easy and fun as playing a video game. Our tool allows for semantically labeling large-scale environments very expeditiously, whatever the nature of the 3D data at hand (e.g. point clouds, meshes). Furthermore, Shooting Labels efficiently integrates multi-user annotations to improve the labeling accuracy automatically and compute a label uncertainty map. Besides, within our framework the 3D annotations can be projected into 2D images, thereby also speeding up a notoriously slow and expensive task such as pixel-wise semantic labeling. We demonstrate the accuracy and efficiency of our tool in two different scenarios: an indoor workspace provided by Matterport3D and a large-scale outdoor environment reconstructed from 1000+ KITTI images.
DOI: 10.1109/AIVR50618.2020.00027 · Published 2019-10-11
Citations: 12
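One plausible way to fuse multi-user annotations into a single label map plus an uncertainty map is majority voting with normalized vote entropy per point (0 = full agreement, 1 = votes spread evenly over all classes). The sketch below is an illustrative guess at such a mechanism, not the tool's actual implementation.

```python
import math
from collections import Counter


def fuse_annotations(votes_per_point, num_classes):
    """Fuse per-point class votes from several annotators into a label
    map and a per-point uncertainty score (normalized vote entropy)."""
    labels, uncertainty = [], []
    for votes in votes_per_point:
        counts = Counter(votes)
        # Majority vote decides the fused label.
        labels.append(counts.most_common(1)[0][0])
        # Entropy of the vote distribution, normalized so that a uniform
        # spread over num_classes classes maps to 1.0.
        n = len(votes)
        ent = -sum((c / n) * math.log(c / n) for c in counts.values())
        uncertainty.append(ent / math.log(num_classes))
    return labels, uncertainty


# Three annotators labeling three 3D points with classes {0, 1, 2}.
votes = [[1, 1, 1],   # full agreement
         [1, 2, 2],   # partial agreement
         [0, 1, 2]]   # full disagreement
labels, unc = fuse_annotations(votes, num_classes=3)
```

A map of `unc` over the scene would highlight exactly the regions worth sending back to annotators, which matches the purpose of the uncertainty map described in the abstract.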