Quantifying Dwell Time With Location-based Augmented Reality: Dynamic AOI Analysis on Mobile Eye Tracking Data With Vision Transformer

Journal of Eye Movement Research · IF 1.3 · JCR Q3 (Ophthalmology) · CAS Tier 4 (Psychology) · Pub Date: 2024-04-29 · eCollection Date: 2024-01-01 · DOI: 10.16910/jemr.17.3.3
Julien Mercier, Olivier Ertz, Erwan Bocher
{"title":"利用基于位置的增强现实技术量化停留时间:利用 Vision Transformer 对移动眼球跟踪数据进行动态 AOI 分析。","authors":"Julien Mercier, Olivier Ertz, Erwan Bocher","doi":"10.16910/jemr.17.3.3","DOIUrl":null,"url":null,"abstract":"<p><p>Mobile eye tracking captures egocentric vision and is well-suited for naturalistic studies. However, its data is noisy, especially when acquired outdoor with multiple participants over several sessions. Area of interest analysis on moving targets is difficult because A) camera and objects move nonlinearly and may disappear/reappear from the scene; and B) off-the-shelf analysis tools are limited to linearly moving objects. As a result, researchers resort to time-consuming manual annotation, which limits the use of mobile eye tracking in naturalistic studies. We introduce a method based on a fine-tuned Vision Transformer (ViT) model for classifying frames with overlaying gaze markers. After fine-tuning a model on a manually labelled training set made of 1.98% (=7845 frames) of our entire data for three epochs, our model reached 99.34% accuracy as evaluated on hold-out data. We used the method to quantify participants' dwell time on a tablet during the outdoor user test of a mobile augmented reality application for biodiversity education. We discuss the benefits and limitations of our approach and its potential to be applied to other contexts.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11165940/pdf/","citationCount":"0","resultStr":"{\"title\":\"Quantifying Dwell Time With Location-based Augmented Reality: Dynamic AOI Analysis on Mobile Eye Tracking Data With Vision Transformer.\",\"authors\":\"Julien Mercier, Olivier Ertz, Erwan Bocher\",\"doi\":\"10.16910/jemr.17.3.3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Mobile eye tracking captures egocentric vision and is well-suited for naturalistic studies. However, its data is noisy, especially when acquired outdoor with multiple participants over several sessions. Area of interest analysis on moving targets is difficult because A) camera and objects move nonlinearly and may disappear/reappear from the scene; and B) off-the-shelf analysis tools are limited to linearly moving objects. As a result, researchers resort to time-consuming manual annotation, which limits the use of mobile eye tracking in naturalistic studies. We introduce a method based on a fine-tuned Vision Transformer (ViT) model for classifying frames with overlaying gaze markers. After fine-tuning a model on a manually labelled training set made of 1.98% (=7845 frames) of our entire data for three epochs, our model reached 99.34% accuracy as evaluated on hold-out data. We used the method to quantify participants' dwell time on a tablet during the outdoor user test of a mobile augmented reality application for biodiversity education. 
We discuss the benefits and limitations of our approach and its potential to be applied to other contexts.</p>\",\"PeriodicalId\":15813,\"journal\":{\"name\":\"Journal of Eye Movement Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2024-04-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11165940/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Eye Movement Research\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.16910/jemr.17.3.3\",\"RegionNum\":4,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q3\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Eye Movement Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.16910/jemr.17.3.3","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Mobile eye tracking captures egocentric vision and is well-suited for naturalistic studies. However, its data is noisy, especially when acquired outdoors with multiple participants over several sessions. Area-of-interest analysis on moving targets is difficult because (a) the camera and objects move nonlinearly and may disappear from and reappear in the scene, and (b) off-the-shelf analysis tools are limited to linearly moving objects. As a result, researchers resort to time-consuming manual annotation, which limits the use of mobile eye tracking in naturalistic studies. We introduce a method based on a fine-tuned Vision Transformer (ViT) model for classifying frames with overlaid gaze markers. After fine-tuning the model for three epochs on a manually labelled training set comprising 1.98% of our entire data (7,845 frames), it reached 99.34% accuracy on held-out data. We used the method to quantify participants' dwell time on a tablet during the outdoor user test of a mobile augmented reality application for biodiversity education. We discuss the benefits and limitations of our approach and its potential to be applied in other contexts.
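The abstract describes the pipeline only at a high level: render the gaze marker into each scene-camera frame, classify each frame with the fine-tuned ViT (gaze on the AOI vs. off it), then aggregate consecutive on-AOI frames into dwell time. As a rough illustration, here is a minimal Python sketch of the inference and aggregation steps, assuming a binary classifier already fine-tuned with the standard Hugging Face image-classification recipe. The checkpoint name `vit-gaze-on-tablet`, the label order, the 30 fps frame rate, the file layout, and the `min_frames` noise threshold are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): classify scene-camera frames that
# already have the gaze marker drawn in, then sum on-AOI runs into dwell time.
from pathlib import Path

import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

MODEL_DIR = "vit-gaze-on-tablet"  # hypothetical fine-tuned ViT checkpoint
ON_AOI = 1                        # assumed label index for "gaze on tablet"

processor = ViTImageProcessor.from_pretrained(MODEL_DIR)
model = ViTForImageClassification.from_pretrained(MODEL_DIR).eval()

@torch.no_grad()
def classify_frames(frame_paths, batch_size=32):
    """Return one predicted label index per frame."""
    preds = []
    for i in range(0, len(frame_paths), batch_size):
        batch = [Image.open(p).convert("RGB") for p in frame_paths[i:i + batch_size]]
        inputs = processor(images=batch, return_tensors="pt")
        preds.extend(model(**inputs).logits.argmax(dim=-1).tolist())
    return preds

def dwell_time_seconds(preds, fps=30.0, min_frames=3):
    """Sum runs of >= min_frames consecutive on-AOI frames (shorter runs
    are treated as classifier noise) and convert the total to seconds."""
    total = run = 0
    for p in preds + [None]:  # sentinel flushes the final run
        if p == ON_AOI:
            run += 1
        else:
            if run >= min_frames:
                total += run
            run = 0
    return total / fps

frames = sorted(Path("session_01/frames").glob("*.jpg"))  # assumed layout
print(f"Dwell time on tablet: {dwell_time_seconds(classify_frames(frames)):.1f} s")
```

Fine-tuning itself would presumably follow the standard recipe (e.g., transformers' `Trainer` on the 7,845 manually labelled frames for three epochs, as the abstract reports); the `min_frames` threshold plays the role of a minimal fixation-duration filter and would need tuning to the recording's frame rate.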

Source journal metrics: CiteScore 2.90 · Self-citation rate 33.30% · Annual articles 10 · Review time 10 weeks
About the journal: The Journal of Eye Movement Research is an open-access, peer-reviewed scientific periodical devoted to all aspects of oculomotor functioning, including the methodology of eye recording, neurophysiological and cognitive models, attention, and reading, as well as applications in neurology, ergonomics, media research, and other areas.