{"title":"Using Deep Learning to Increase Eye-Tracking Robustness, Accuracy, and Precision in Virtual Reality.","authors":"Kevin Barkevich, Reynold Bailey, Gabriel J Diaz","doi":"10.1145/3654705","DOIUrl":null,"url":null,"abstract":"<p><p>Algorithms for the estimation of gaze direction from mobile and video-based eye trackers typically involve tracking a feature of the eye that moves through the eye camera image in a way that covaries with the shifting gaze direction, such as the center or boundaries of the pupil. Tracking these features using traditional computer vision techniques can be difficult due to partial occlusion and environmental reflections. Although recent efforts to use machine learning (ML) for pupil tracking have demonstrated superior results when evaluated using standard measures of segmentation performance, little is known of how these networks may affect the quality of the final gaze estimate. This work provides an objective assessment of the impact of several contemporary ML-based methods for eye feature tracking when the subsequent gaze estimate is produced using either feature-based or model-based methods. Metrics include the accuracy and precision of the gaze estimate, as well as drop-out rate.</p>","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":"7 2","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11308822/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ACM on computer graphics and interactive techniques","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3654705","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/5/17 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Abstract
Algorithms for the estimation of gaze direction from mobile and video-based eye trackers typically involve tracking a feature of the eye that moves through the eye camera image in a way that covaries with the shifting gaze direction, such as the center or boundaries of the pupil. Tracking these features using traditional computer vision techniques can be difficult due to partial occlusion and environmental reflections. Although recent efforts to use machine learning (ML) for pupil tracking have demonstrated superior results when evaluated using standard measures of segmentation performance, little is known about how these networks may affect the quality of the final gaze estimate. This work provides an objective assessment of the impact of several contemporary ML-based methods for eye feature tracking when the subsequent gaze estimate is produced using either feature-based or model-based methods. Metrics include the accuracy and precision of the gaze estimate, as well as the drop-out rate.
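To make the reported metrics concrete, the sketch below shows one common way to compute them from per-frame gaze estimates: accuracy as mean angular error against ground-truth gaze directions, precision as the RMS of sample-to-sample angular differences, and drop-out rate as the fraction of frames with no valid estimate. This is a minimal illustration assuming unit gaze vectors and these conventional definitions; the function and variable names are hypothetical, and the paper's exact metric definitions may differ.

```python
import numpy as np

def angular_error_deg(a, b):
    """Angular difference in degrees between corresponding (N, 3) gaze vectors."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    cos_sim = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_sim))

def gaze_quality_metrics(est_gaze, true_gaze, valid_mask):
    """
    est_gaze, true_gaze: (N, 3) per-frame gaze direction vectors.
    valid_mask: (N,) boolean, True where the pipeline produced an estimate.
    Returns (accuracy_deg, precision_deg, dropout_rate).
    """
    # Drop-out rate: fraction of frames without a usable gaze estimate.
    dropout_rate = 1.0 - valid_mask.mean()

    est, true = est_gaze[valid_mask], true_gaze[valid_mask]

    # Accuracy: mean angular error relative to ground truth, in degrees.
    accuracy = angular_error_deg(est, true).mean()

    # Precision: RMS angular distance between consecutive valid samples.
    inter_sample = angular_error_deg(est[1:], est[:-1])
    precision = np.sqrt(np.mean(inter_sample ** 2))

    return accuracy, precision, dropout_rate
```

Comparing feature-based and model-based gaze pipelines then amounts to running each pipeline on the same eye videos and comparing these three numbers.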