Towards a Symbiotic Human-Machine Depth Sensor: Exploring 3D Gaze for Object Reconstruction

Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, A. Bulling, E. Rukzio
DOI: 10.1145/3266037.3266119 (https://doi.org/10.1145/3266037.3266119)
Published in: Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, 2018-10-11
Citations: 5

Abstract

Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs), given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently obtained using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera, fusing depth data with 3D gaze information. We present a first proof of concept, exploring to what extent we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50 × 50 × 50 cm volume. We show first 3D gaze plots of "gazed-at" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.
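The paper does not publish its implementation, but the core of a vergence-based 3D gaze estimate is geometric: each eye contributes a gaze ray, and the 3D gaze point is taken where the two rays converge — in practice, the midpoint of their closest approach, since measured rays rarely intersect exactly. The following is a minimal illustrative sketch of that idea (function name and coordinate conventions are our own assumptions, not the authors' code):

```python
def _dot(u, v):
    # Dot product of two 3-vectors given as tuples.
    return sum(a * b for a, b in zip(u, v))

def vergence_gaze_point(o_l, d_l, o_r, d_r):
    """Estimate a 3D gaze point from two gaze rays (hypothetical sketch).

    o_l, o_r: 3D origins of the left/right eye rays.
    d_l, d_r: 3D direction vectors of the left/right gaze rays.
    Returns the midpoint of the rays' closest approach, or None if the
    rays are (near-)parallel, i.e. there is no usable vergence signal.
    """
    w0 = tuple(a - b for a, b in zip(o_l, o_r))
    a, b, c = _dot(d_l, d_l), _dot(d_l, d_r), _dot(d_r, d_r)
    d, e = _dot(d_l, w0), _dot(d_r, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:  # parallel gaze rays: depth is undefined
        return None
    # Closed-form parameters of the closest points on each ray.
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    p_l = tuple(o + t_l * s for o, s in zip(o_l, d_l))
    p_r = tuple(o + t_r * s for o, s in zip(o_r, d_r))
    return tuple((u + v) / 2.0 for u, v in zip(p_l, p_r))

# Example: eyes 6 cm apart (metres), both fixating a point 50 cm ahead.
gaze = vergence_gaze_point((-0.03, 0, 0), (0.03, 0, 0.5),
                           (0.03, 0, 0), (-0.03, 0, 0.5))
```

In a real pipeline the per-eye rays would come from the Pupil Labs tracker and be transformed into a common world frame via the OptiTrack pose of the headset; accumulating these midpoints over time yields the kind of 3D gaze plots the abstract describes.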