Remote and head-motion-free gaze tracking for real environments with automated head-eye model calibrations

H. Yamazoe, A. Utsumi, Tomoko Yonezawa, Shinji Abe
{"title":"Remote and head-motion-free gaze tracking for real environments with automated head-eye model calibrations","authors":"H. Yamazoe, A. Utsumi, Tomoko Yonezawa, Shinji Abe","doi":"10.1109/CVPRW.2008.4563184","DOIUrl":null,"url":null,"abstract":"We propose a gaze estimation method that substantially relaxes the practical constraints possessed by most conventional methods. Gaze estimation research has a long history, and many systems including some commercial schemes have been proposed. However, the application domain of gaze estimation is still limited (e.g, measurement devices for HCI issues, input devices for VDT works) due to the limitations of such systems. First, users must be close to the system (or must wear it) since most systems employ IR illumination and/or stereo cameras. Second, users are required to perform manual calibrations to get geometrically meaningful data. These limitations prevent applications of the system that capture and utilize useful human gaze information in daily situations. In our method, inspired by a bundled adjustment framework, the parameters of the 3D head-eye model are robustly estimated by minimizing pixel-wise re-projection errors between single-camera input images and eye model projections for multiple frames with adjacently estimated head poses. Since this process runs automatically, users does not need to be aware of it. Using the estimated parameters, 3D head poses and gaze directions for newly observed images can be directly determined with the same error minimization manner. This mechanism enables robust gaze estimation with single-camera-based low resolution images without user-aware preparation tasks (i.e., calibration). Experimental results show the proposed method achieves 6deg accuracy with QVGA (320 times 240) images. The proposed algorithm is free from observation distances. We confirmed that our system works with long-distance observations (10 meters).","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"189 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW.2008.4563184","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 27

Abstract

We propose a gaze estimation method that substantially relaxes the practical constraints imposed by most conventional methods. Gaze estimation research has a long history, and many systems, including some commercial schemes, have been proposed. However, the application domain of gaze estimation is still limited (e.g., measurement devices for HCI studies, input devices for VDT work) due to the limitations of such systems. First, users must be close to the system (or must wear it), since most systems employ IR illumination and/or stereo cameras. Second, users are required to perform manual calibration to obtain geometrically meaningful data. These limitations prevent applications that capture and utilize useful human gaze information in daily situations. In our method, inspired by a bundle adjustment framework, the parameters of the 3D head-eye model are robustly estimated by minimizing pixel-wise re-projection errors between single-camera input images and eye model projections over multiple frames with adjacently estimated head poses. Since this process runs automatically, users do not need to be aware of it. Using the estimated parameters, 3D head poses and gaze directions for newly observed images can be determined directly in the same error-minimization manner. This mechanism enables robust gaze estimation from single-camera, low-resolution images without user-aware preparation tasks (i.e., calibration). Experimental results show that the proposed method achieves 6° accuracy with QVGA (320 × 240) images. The proposed algorithm is independent of observation distance; we confirmed that our system works with long-distance observations (10 meters).
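To make the re-projection-error minimization concrete, the following is a minimal sketch of the idea, not the authors' implementation: it assumes a simplified spherical eye model, a pinhole camera with known intrinsics, and per-frame head poses supplied by a separate tracker. All names and parameters here (t_eye, EYE_RADIUS, K, the yaw/pitch gaze parameterization) are illustrative assumptions; the paper's actual head-eye model and optimizer may differ.

import numpy as np
from scipy.optimize import least_squares

EYE_RADIUS = 0.012               # assumed eyeball radius in metres
K = np.array([[300., 0., 160.],  # assumed pinhole intrinsics for a QVGA camera
              [0., 300., 120.],
              [0., 0., 1.]])

def project(p_cam):
    """Pinhole projection of a 3D point (camera frame) to pixel coordinates."""
    q = K @ p_cam
    return q[:2] / q[2]

def iris_residuals(params, head_R, head_t, iris_obs):
    """Stack pixel residuals over all calibration frames.

    params = [t_eye (3), yaw_0, pitch_0, yaw_1, pitch_1, ...]
    head_R, head_t: per-frame head rotation/translation (head -> camera),
    assumed to come from a separate head-pose tracker.
    iris_obs: per-frame detected 2D iris centres.
    """
    t_eye = params[:3]                     # eyeball centre in the head frame
    angles = params[3:].reshape(-1, 2)     # per-frame gaze yaw/pitch
    res = []
    for R, t, obs, (yaw, pitch) in zip(head_R, head_t, iris_obs, angles):
        # Gaze direction in the head frame from yaw/pitch angles.
        g = np.array([np.cos(pitch) * np.sin(yaw),
                      np.sin(pitch),
                      np.cos(pitch) * np.cos(yaw)])
        # Iris centre = eyeball centre + radius * gaze, mapped to camera frame.
        p_cam = R @ (t_eye + EYE_RADIUS * g) + t
        res.append(project(p_cam) - obs)
    return np.concatenate(res)

# Usage (hypothetical data): x0 packs an initial eyeball offset plus zero gaze
# angles for each frame; head_R, head_t, iris_obs come from the tracker and
# iris detector over multiple frames.
# fit = least_squares(iris_residuals, x0, args=(head_R, head_t, iris_obs))

Jointly optimizing one shared t_eye with per-frame gaze angles over many frames is what makes this bundle-adjustment-like; at runtime the same residual, with t_eye frozen, can be minimized over the gaze angles of a newly observed image.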