Image registration for text-gaze alignment

Pascual Martínez-Gómez, Chen Chen, T. Hara, Yoshinobu Kano, Akiko Aizawa
{"title":"文本凝视对齐的图像配准","authors":"Pascual Martínez-Gómez, Chen Chen, T. Hara, Yoshinobu Kano, Akiko Aizawa","doi":"10.1145/2166966.2167012","DOIUrl":null,"url":null,"abstract":"Applications using eye-tracking devices need a higher accuracy in recognition when the task reaches a certain complexity. Thus, more sophisticated methods to correct eye-tracking measurement errors are necessary to lower the penetration barrier of eye-trackers in unconstrained tasks. We propose to take advantage of the content or the structure of textual information displayed on the screen to build informed error-correction algorithms that generalize well. The idea is to use feature-based image registration techniques to perform a linear transformation of gaze coordinates to find a good alignment with text printed on the screen. In order to estimate the parameters of the linear transformation, three optimization strategies are proposed to avoid the problem of local minima, namely Monte Carlo, multi-resolution and multi-blur optimization. Experimental results show that a more precise alignment of gaze data with words on the screen can be achieved by using these methods, allowing a more reliable use of eye-trackers in complex and unconstrained tasks.","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2012-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Image registration for text-gaze alignment\",\"authors\":\"Pascual Martínez-Gómez, Chen Chen, T. Hara, Yoshinobu Kano, Akiko Aizawa\",\"doi\":\"10.1145/2166966.2167012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Applications using eye-tracking devices need a higher accuracy in recognition when the task reaches a certain complexity. Thus, more sophisticated methods to correct eye-tracking measurement errors are necessary to lower the penetration barrier of eye-trackers in unconstrained tasks. We propose to take advantage of the content or the structure of textual information displayed on the screen to build informed error-correction algorithms that generalize well. The idea is to use feature-based image registration techniques to perform a linear transformation of gaze coordinates to find a good alignment with text printed on the screen. In order to estimate the parameters of the linear transformation, three optimization strategies are proposed to avoid the problem of local minima, namely Monte Carlo, multi-resolution and multi-blur optimization. Experimental results show that a more precise alignment of gaze data with words on the screen can be achieved by using these methods, allowing a more reliable use of eye-trackers in complex and unconstrained tasks.\",\"PeriodicalId\":87287,\"journal\":{\"name\":\"IUI. International Conference on Intelligent User Interfaces\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-02-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IUI. 
International Conference on Intelligent User Interfaces\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2166966.2167012\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IUI. International Conference on Intelligent User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2166966.2167012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

Applications using eye-tracking devices need a higher accuracy in recognition when the task reaches a certain complexity. Thus, more sophisticated methods to correct eye-tracking measurement errors are necessary to lower the penetration barrier of eye-trackers in unconstrained tasks. We propose to take advantage of the content or the structure of textual information displayed on the screen to build informed error-correction algorithms that generalize well. The idea is to use feature-based image registration techniques to perform a linear transformation of gaze coordinates to find a good alignment with text printed on the screen. In order to estimate the parameters of the linear transformation, three optimization strategies are proposed to avoid the problem of local minima, namely Monte Carlo, multi-resolution and multi-blur optimization. Experimental results show that a more precise alignment of gaze data with words on the screen can be achieved by using these methods, allowing a more reliable use of eye-trackers in complex and unconstrained tasks.
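The abstract describes the approach only at a high level: gaze coordinates are aligned to on-screen text through a linear transformation whose parameters are estimated by Monte Carlo, multi-resolution, or multi-blur optimization. The sketch below is a minimal, hypothetical illustration of the Monte Carlo idea alone, under assumptions of our own (a scale-plus-translation transform, word bounding-box centers as alignment targets, and invented names such as monte_carlo_align); it is not the authors' implementation.

# Minimal, hypothetical sketch: align gaze points to word centers with a
# linear (scale + translation) transform whose parameters are found by
# Monte Carlo random sampling. Not the authors' implementation; all names
# and parameter values are invented for illustration.
import numpy as np

def cost(params, gaze, word_centers):
    # Sum over gaze points of the distance to the nearest word center
    # after applying the candidate transform.
    sx, sy, tx, ty = params
    transformed = gaze * np.array([sx, sy]) + np.array([tx, ty])
    d = np.linalg.norm(transformed[:, None, :] - word_centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

def monte_carlo_align(gaze, word_centers, n_samples=5000, seed=0):
    # Randomly sample transform parameters around the identity and keep
    # the lowest-cost candidate (a crude way to sidestep local minima).
    rng = np.random.default_rng(seed)
    identity = np.array([1.0, 1.0, 0.0, 0.0])
    best_params, best_cost = identity, cost(identity, gaze, word_centers)
    for _ in range(n_samples):
        candidate = identity + rng.normal(scale=[0.05, 0.05, 20.0, 20.0])
        c = cost(candidate, gaze, word_centers)
        if c < best_cost:
            best_params, best_cost = candidate, c
    return best_params, best_cost

# Toy usage: synthetic gaze data with a systematic offset from word centers.
word_centers = np.array([[100.0, 50.0], [220.0, 50.0], [340.0, 50.0]])
gaze = word_centers + np.array([15.0, -8.0])
params, residual = monte_carlo_align(gaze, word_centers)
print("estimated transform:", params, "residual cost:", residual)

In a real setup, word_centers would come from the rendered layout of the text on screen, and the cost could be weighted by fixation duration; those details, as well as the multi-resolution and multi-blur strategies, are not specified in this abstract.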