
Latest Articles from the Journal of Eye Movement Research

Mapping Eye-Tracking Research in Human-Computer Interaction: A Science-Mapping and Content-Analysis Study.
IF 2.8 CAS Region 4 Psychology Q3 OPHTHALMOLOGY Pub Date: 2026-02-12 DOI: 10.3390/jemr19010023
Adem Korkmaz

Eye tracking has become a central method in human-computer interaction (HCI), supported by advances in sensing technologies and AI-based gaze analysis. Despite this rapid growth, a comprehensive and up-to-date overview of eye-tracking research across the broader HCI landscape remains lacking. This study combines records from Web of Science (WoS) and Scopus to analyse 1033 publications on eye tracking in HCI published between 2020 and 2025. After merging and deduplicating the datasets, we conducted bibliometric network analyses (keyword co-occurrence, co-citation, co-authorship, and source mapping) using VOSviewer and performed a qualitative content analysis of the 50 most-cited papers. The literature is dominated by journal articles and conference papers produced by small- to medium-sized research teams (mean: 3.9 authors per paper; h-index: 29). Keyword and overlay visualisations reveal four principal research axes: deep-learning-based gaze estimation; XR-related interaction paradigms within HCI; cognitive load and human factors; and usability- and accessibility-oriented interface design. The most-cited studies focus on gaze interaction in immersive environments, deep learning for gaze estimation, multimodal interaction, and physiological approaches to assessing cognitive load. Overall, the findings indicate that eye tracking in HCI is evolving from a measurement-oriented technique into a core enabling technology that supports interaction design, cognitive assessment, accessibility, and ethical considerations such as privacy. This review identifies research gaps and outlines future directions for benchmarking practices, real-world deployments, and privacy-preserving gaze analytics in HCI.
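The merge-and-deduplicate step described above can be sketched in a few lines. This is an illustrative assumption only: the abstract does not specify matching keys or field names, so the `doi`/`source` fields and the lowercase-DOI normalization here are hypothetical (DOIs are case-insensitive, so lowercasing is a common choice).

```python
# Hypothetical sketch of merging WoS and Scopus exports and removing
# duplicates by normalized DOI. Field names are illustrative, not the
# study's actual schema.

def normalize_doi(doi):
    """Lowercase a DOI and strip common URL/prefix forms."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://dx.doi.org/", "doi:"):
        if doi.startswith(prefix):
            doi = doi[len(prefix):]
    return doi

def merge_and_deduplicate(wos_records, scopus_records):
    """Combine two record lists, keeping the first record seen per DOI."""
    seen, merged = set(), []
    for record in wos_records + scopus_records:
        key = normalize_doi(record["doi"])
        if key not in seen:
            seen.add(key)
            merged.append(record)
    return merged

wos = [{"doi": "10.3390/jemr19010023", "source": "WoS"}]
scopus = [{"doi": "https://doi.org/10.3390/JEMR19010023", "source": "Scopus"},
          {"doi": "10.1000/example.1", "source": "Scopus"}]
print(len(merge_and_deduplicate(wos, scopus)))  # prints 2: the shared DOI collapses
```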

Citations: 0
Influence of Multimodal AR-HUD Navigation Prompt Design on Driving Behavior at F-Type-5 M Intersections.
IF 2.8 CAS Region 4 Psychology Q3 OPHTHALMOLOGY Pub Date: 2026-02-11 DOI: 10.3390/jemr19010022
Ziqi Liu, Zhengxing Yang, Yifan Du

In complex urban traffic environments, the design of multimodal prompts in augmented reality head-up displays (AR-HUDs) plays a critical role in driving safety and operational efficiency. Despite growing interest in audiovisual navigation assistance, empirical evidence remains limited regarding when prompts should be delivered and whether visual and auditory information should remain temporally aligned. To address this gap, this study aims to examine how audiovisual prompt timing and prompt mode influence driving behavior in AR-HUD navigation systems at complex F-type-5 m intersections through a within-subject experimental design. A 2 (prompt mode: synchronized vs. asynchronous) × 3 (prompt timing: -1000 m, -600 m, -400 m) design was employed to assess driver response time, situational awareness, and eye-movement measures, including average fixation duration and fixation count. The results showed clear main effects of both prompt mode and prompt timing. Compared with asynchronous prompts, synchronized prompts consistently resulted in shorter response times, reduced visual demand, and higher situational awareness. Driving performance also improved as prompt timing shifted closer to the intersection, from -1000 m to -400 m. However, no significant interaction effects were found, suggesting that prompt mode and prompt timing can be treated as relatively independent design factors. In addition, among the six experimental conditions, the -400 m synchronized condition yielded the most favorable overall performance, whereas the -1000 m asynchronous condition performed worst. These findings indicate that in time-critical and low-tolerance scenarios, such as F-type-5 m intersections, near-distance synchronized multimodal prompts should be prioritized. This study provides empirical support for optimizing prompt timing and cross-modal temporal alignment in AR-HUD systems and offers actionable implications for interface and timing design.
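The 2 × 3 within-subject design above yields six experimental cells, which can be enumerated directly; the condition labels are taken verbatim from the abstract.

```python
# Illustrative enumeration of the six cells in the 2 (prompt mode) x 3
# (prompt timing) within-subject design described in the abstract.
from itertools import product

modes = ["synchronized", "asynchronous"]
timings_m = [-1000, -600, -400]
conditions = list(product(modes, timings_m))

print(len(conditions))  # 6 experimental conditions
print(conditions[0])    # ('synchronized', -1000)
```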

Citations: 0
Influence of Stimulus Layout and Social Presence on Deception-Related Eye Movements and Blinks in the Concealed Information Test.
IF 2.8 CAS Region 4 Psychology Q3 OPHTHALMOLOGY Pub Date: 2026-02-11 DOI: 10.3390/jemr19010021
Valentin Foucher, Anke Huckauf

Over the past decades, eye movements and blinks have been integrated into Concealed Information Test (CIT) paradigms as indicators of deception. Recent findings suggested that fixation patterns in CITs depend on stimulus layout, particularly the distinction between sequential and simultaneous stimulus presentation. In addition, the impact of social presence on deceptive eye movements, critical for application of the CIT in real-world social settings, remains insufficiently examined. The present study addresses these issues through two experiments. In both, participants selected a card and had to reveal, conceal, or fake its value while all possible cards were displayed in pairs. Experiment 1 examined whether deceptive intentions could be differentiated using fixations and blinks, and extended previous findings on the effect of stimulus layout. Experiment 2 assessed the stability of deception-related eye movements and blinks across various levels of social presence (without, per video, being observed by a real person). Our findings replicate effects previously observed with simultaneous stimulus presentation of more cards, demonstrating how stimulus layout modulates deception-related eye movement patterns in CITs. The levels of social presence realised in this study did not significantly alter these patterns, indicating that deception-related eye movements and blinks in CITs remain stable under passive social presence.

Citations: 0
An Open-Source Horizontal Strabismus Simulator as an Evaluation Platform for Monocular Gaze Estimation Using Deep Learning Models.
IF 2.8 CAS Region 4 Psychology Q3 OPHTHALMOLOGY Pub Date: 2026-02-09 DOI: 10.3390/jemr19010020
Shumpei Takinami, Yuka Morita, Jun Seita, Tetsuro Oshika

Strabismus affects 2-4% of the global population, with horizontal cases accounting for more than 90%. Automated screening using monocular gaze estimation technology shows promise for early detection. However, existing models assume normal binocular vision, and their applicability to strabismus remains unvalidated due to the lack of evaluation platforms capable of reproducing disconjugate eye movements with known ground-truth angles. To address this gap, we developed an open-source, low-cost (approximately 200 USD) horizontal strabismus simulator. The simulator features two independently controllable artificial eyeballs mounted on a two-axis gimbal mechanism with servo motors and gyro sensors for real-time angle measurement. Mechanical accuracy achieved a mean absolute error of less than 0.1° across all axes, well below the clinical detection threshold of 1 prism diopter (≈0.57°). An evaluation of three representative AI models (Single Eye, GazeNet, and EyeNet) revealed estimation errors of 6.44-8.75°, substantially exceeding the clinical target of 2.8°. At this error level, small-angle strabismus (<15 prism diopters) would likely be missed, underscoring the need for strabismus-specific model development. Moreover, rapid accuracy degradation was observed beyond ±15° gaze angles. This platform establishes baseline performance metrics and provides a foundation for advancing gaze estimation technology for strabismus screening.
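The prism-diopter figures quoted above follow from the standard definition of the unit (1 prism diopter = 1 cm of deviation at a distance of 1 m, i.e. angle = atan(PD/100)). A quick check under that definition reproduces the ~0.57° per PD quoted in the abstract and shows that a 15 PD deviation (~8.5°) falls inside the reported 6.44-8.75° model error band.

```python
# Sanity check of the prism diopter (PD) to degrees conversion used in
# the abstract: 1 PD corresponds to 1 cm of deviation at 1 m.
import math

def prism_diopters_to_degrees(pd):
    """Convert a deviation in prism diopters to visual angle in degrees."""
    return math.degrees(math.atan(pd / 100.0))

print(round(prism_diopters_to_degrees(1), 2))   # 0.57 (clinical threshold)
print(round(prism_diopters_to_degrees(15), 2))  # 8.53 (small-angle strabismus limit)
```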

Citations: 0
The Influence of Noise Perception and Parent-Rated Developmental Characteristics on White Noise Benefits in Children.
IF 2.8 CAS Region 4 Psychology Q3 OPHTHALMOLOGY Pub Date: 2026-02-05 DOI: 10.3390/jemr19010018
Erica Jostrup, Marcus Nyström, Göran B W Söderlund, Emma Claesdotter-Knutsson, Peik Gustafsson, Pia Tallberg

White noise has been proposed to enhance cognitive performance in children with ADHD, but findings are inconsistent, and benefits vary across tasks and individuals. Such variability suggests that diagnostic comparisons may overlook meaningful developmental differences. This exploratory study examined whether developmental characteristics and subjective evaluations of auditory and visual white noise predicted performance changes in two eye-movement tasks: Prolonged Fixation (PF) and Memory-Guided Saccades (MGS). Children with varying degrees of ADHD symptoms completed both tasks under noise and no-noise conditions, and noise benefit scores were calculated as the performance difference between conditions. Overall, white-noise effects were small and dependent on noise modality and task. In the PF task, large parent-rated perceptual difficulties and high visual noise discomfort were associated with improved performance under noise. In the MGS task, poor motor skills predicted visual noise benefit, whereas large visual noise discomfort predicted reduced noise benefit. These findings suggest that beneficial effects of white noise are influenced by developmental characteristics and subjective perception in task-dependent ways. The results highlight the need for individualized, transdiagnostic approaches in future noise research and challenge the notion of white noise as categorically beneficial for ADHD.
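The noise benefit score described above is a per-child difference between conditions. A minimal sketch, under the assumption of accuracy-style scores and a noise-minus-no-noise sign convention (the abstract does not state the direction of the subtraction), so positive values mean noise helped:

```python
# Hypothetical illustration of the noise benefit score: performance
# under noise minus performance without noise, computed per child.
# Scores and field names are made up for the example.

def noise_benefit(score_noise, score_no_noise):
    """Positive result = better performance with noise."""
    return score_noise - score_no_noise

children = [
    {"id": "c01", "noise": 0.82, "no_noise": 0.75},  # benefits from noise
    {"id": "c02", "noise": 0.70, "no_noise": 0.78},  # hurt by noise
]
benefits = {c["id"]: noise_benefit(c["noise"], c["no_noise"]) for c in children}
print(benefits)
```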

Citations: 0
The Impact of 3D Interactive Prompts on College Students' Learning Outcomes in Desktop Virtual Learning Environments: A Study Based on Eye-Tracking Experiments.
IF 2.8 CAS Region 4 Psychology Q3 OPHTHALMOLOGY Pub Date: 2026-02-05 DOI: 10.3390/jemr19010019
Xinyi Wu, Xiangen Wu, Weixing Hu, Jian Sun

Despite the increasing adoption of desktop virtual reality (VR) in higher education, the specific instructional efficacy of 3D interactive prompts remains inadequately understood. This study examines how such prompts, specifically dynamic spatial annotations and 3D animated demonstrations, influence learning outcomes within a desktop virtual learning environment (DVLE). Employing a quasi-experimental design integrated with eye-tracking and multimodal learning analytics, university students were assigned to either an experimental group (DVLE with 3D prompts) or a control group (basic DVLE) while completing physics tasks. Data collection encompassed eye-tracking metrics (fixation heatmaps, pupil diameter and dwell time), post-test performance (assessing knowledge comprehension and spatial problem-solving), and cognitive load ratings. Results indicated that the experimental group achieved significantly superior learning outcomes, particularly in spatial understanding and dynamic reasoning, alongside optimized visual attention patterns (characterized by shorter initial fixation latency and prolonged fixation on key 3D elements) and reduced cognitive load. Eye-tracking metrics were positively correlated with post-test scores, confirming that 3D prompts enhance learning by improving spatial attention guidance. These findings demonstrate that embedding 3D interactive prompts in DVLEs effectively directs visual attention, alleviates cognitive burden, and improves learning efficiency, offering valuable implications for the design of immersive educational settings.

Citations: 0
Eye Movement Classification Using Neuromorphic Vision Sensors.
IF 2.8 CAS Region 4 Psychology Q3 OPHTHALMOLOGY Pub Date: 2026-02-04 DOI: 10.3390/jemr19010017
Khadija Iddrisu, Waseem Shariff, Maciej Stec, Noel O'Connor, Suzanne Little

Eye movement classification, particularly the identification of fixations and saccades, plays a vital role in advancing our understanding of neurological functions and cognitive processing. Conventional modalities of data, such as RGB webcams, often face limitations such as motion blur, latency and susceptibility to noise. Neuromorphic Vision Sensors, also known as event cameras (ECs), capture pixel-level changes asynchronously and at a high temporal resolution, making them well suited for detecting the swift transitions inherent to eye movements. However, the resulting data are sparse, which makes them less well suited for use with conventional algorithms. Spiking Neural Networks (SNNs) are gaining attention due to their discrete spatio-temporal spike mechanism ideally suited for sparse data. These networks offer a biologically inspired computational paradigm capable of modeling the temporal dynamics captured by event cameras. This study validates the use of Spiking Neural Networks (SNNs) with event cameras for efficient eye movement classification. We manually annotated the EV-Eye dataset, the largest publicly available event-based eye-tracking benchmark, into sequences of saccades and fixations, and we propose a convolutional SNN architecture operating directly on spike streams. Our model achieves an accuracy of 94% and a precision of 0.92 across annotated data from 10 users. As the first work to apply SNNs to eye movement classification using event data, we benchmark our approach against spiking baselines such as SpikingVGG and SpikingDenseNet, and additionally provide a detailed computational complexity comparison between SNN and ANN counterparts. Our results highlight the efficiency and robustness of SNNs for event-based vision tasks, with over one order of magnitude improvement in computational efficiency, with implications for fast and low-power neurocognitive diagnostic systems.
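The abstract above builds on spiking neurons operating directly on sparse event streams. As a minimal illustration of that mechanism (a toy sketch, not the authors' convolutional SNN), a leaky integrate-and-fire neuron driven by a binary spike train such as an event camera might emit; the decay, threshold, and weight values are arbitrary assumptions.

```python
# Toy leaky integrate-and-fire (LIF) neuron, the basic unit of a
# spiking neural network: the membrane potential leaks each time step,
# integrates weighted input spikes, and resets after crossing threshold.
# Parameter values are illustrative only.

def lif_neuron(spike_train, decay=0.8, threshold=1.5, weight=1.0):
    """Return the time steps at which the neuron emits an output spike."""
    v, out = 0.0, []
    for t, s in enumerate(spike_train):
        v = decay * v + weight * s  # leak, then integrate the input spike
        if v >= threshold:          # fire and reset
            out.append(t)
            v = 0.0
    return out

# A dense burst of events makes the neuron fire; sparse input does not.
print(lif_neuron([1, 1, 1, 0, 0, 1, 0, 0]))  # fires at steps 1 and 5
print(lif_neuron([1, 0, 0, 0, 1, 0, 0, 0]))  # never crosses threshold
```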

引用次数: 0
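The discrete spike mechanism that makes SNNs a good fit for sparse event-camera data can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of most spiking networks. This is an illustrative sketch, not the authors' convolutional architecture; the parameter values (`tau`, `v_thresh`, `w`) are arbitrary assumptions.

```python
import numpy as np

def lif_simulate(spikes_in, tau=20.0, v_thresh=1.0, v_reset=0.0, w=0.6, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron driven by a binary
    input spike train; return the binary output spike train."""
    v = 0.0
    out = np.zeros(len(spikes_in))
    for t, s in enumerate(spikes_in):
        # leaky integration of the membrane potential plus weighted input
        v += dt * (-v / tau) + w * s
        if v >= v_thresh:        # threshold crossing: emit a spike
            out[t] = 1.0
            v = v_reset          # hard reset after spiking
    return out
```

A sustained input train drives the membrane over threshold every few steps, while silent input produces no output spikes — the sparsity-preserving behavior the abstract alludes to.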
Rapid Automatized Naming (RAN) and Word Reading Fluency in Early School-Aged Children: A Pilot Eye-Tracking Study.
IF 2.8 CAS Zone 4 (Psychology) Q3 OPHTHALMOLOGY Pub Date: 2026-02-04 DOI: 10.3390/jemr19010016
Alisa Baron, Alexia Martins, Gavino Puggioni, Vanessa Harwood

Fluent word reading is a key literacy skill, yet the full extent of the oculomotor underpinnings in developing readers remains unknown. Rapid automatized naming (RAN) is a useful clinical measure that has been shown to predict word reading fluency. Here we use RAN scores to predict early, mid, and late local stages of word reading as measured by eye tracking in children who are at a critical time in their literacy development. Thirty-three children participated in two RAN tasks (rapid letter naming (RLN) and rapid digit naming (RDN)) and an eye-tracking task, which included sentence-level reading with an embedded target word. The eye-tracking measures of first fixation duration, regression path duration, and total word reading time were used as early, mid, and late local measures, respectively. RLN and RDN significantly predicted only the mid-stage of the reading process (regression path duration). Faster RLN and RDN times were associated with briefer regressions from target words. Preliminary results link behavioral RAN performance to a mid-stage oculomotor variable, indicating that children with slower RAN times may exhibit longer regressions during reading, suggesting possible difficulties with the integration of phonological processing skills.

Citations: 0
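The regression path duration measure used in this study can be made concrete: it accumulates fixation durations from the first fixation on a target word until the eyes first move past it, including any regressions back to earlier words. A minimal sketch, assuming a hypothetical data format of temporally ordered `(word_index, duration_ms)` pairs:

```python
def regression_path_duration(fixations, target):
    """Sum fixation durations from the first fixation on the target word
    until the gaze first lands beyond it (exclusive of that fixation)."""
    started = False
    total = 0
    for word, dur in fixations:
        if not started:
            if word == target:   # first-pass entry into the target word
                started = True
                total += dur
            continue
        if word > target:        # gaze moved past the target: path ends
            break
        total += dur             # includes regressions to earlier words
    return total
```

For example, for the sequence word 1 → 2 → 3 → 2 → 3 → 4 with target word 3, the path covers the three fixations from the first landing on word 3 up to (but not including) the fixation on word 4.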
Analysis of Saccade Characteristics During Fusional Vergence Tests in Normal Binocular Vision Participants.
IF 2.8 CAS Zone 4 (Psychology) Q3 OPHTHALMOLOGY Pub Date: 2026-02-03 DOI: 10.3390/jemr19010015
Cristina Rovira-Gay, Clara Mestre, Marc Argilés, Jaume Pujol

The purpose of the study was to analyze, characterize, and compare the measurements of saccades that occurred during the positive and negative fusional vergence tests (PFV and NFV, respectively) as a function of the disparity vergence demand. Thirty-four participants' PFV and NFV amplitudes were measured in a haploscopic setup, recording eye movements with an EyeLink 1000 Plus (SR Research). The visual stimulus was a column of letters. Break and recovery points were determined objectively offline, and saccades were detected with a velocity-threshold-based method. A total of 13,103 and 14,381 saccades were detected during the measurement of the PFV and NFV ranges, respectively. Saccades followed the main sequence (ρ = 0.97, p < 0.001). The distributions of saccadic amplitudes during PFV and NFV differed significantly (U = 4.28, p < 0.001). The amplitude of saccades that occurred while fusion was maintained (median (IQR) 0.73 (0.92) deg) was significantly smaller than that of saccades during diplopia (2.10 (3.90) deg) (U = -75.63, p < 0.001). The distributions of saccade direction during the measurement of PFV and NFV amplitudes were also statistically significantly different (p < 0.01). These findings contribute to a better understanding of how the visual system adjusts saccades in response to different disparity vergence demands during the evaluation of fusional vergence amplitudes.

Citations: 0
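The velocity-threshold saccade detection mentioned in this abstract is commonly implemented as I-VT: samples whose point-to-point gaze velocity exceeds a threshold (often around 30 deg/s) are labeled saccades, the rest fixations. A minimal sketch under those assumptions — not the authors' exact pipeline, and the threshold value is illustrative:

```python
import numpy as np

def classify_ivt(x, y, fs=1000.0, vel_thresh=30.0):
    """Velocity-threshold (I-VT) classification of gaze samples.
    x, y: gaze positions in degrees, sampled at fs Hz.
    Returns one 'saccade'/'fixation' label per sample."""
    dx = np.diff(x)
    dy = np.diff(y)
    vel = np.hypot(dx, dy) * fs          # point-to-point velocity, deg/s
    labels = np.where(vel > vel_thresh, "saccade", "fixation")
    # duplicate the first label so output aligns with the input samples
    return np.concatenate([[labels[0]], labels])
```

Real implementations typically also smooth the velocity signal and merge or discard very short events, which this sketch omits for brevity.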
Visual Evaluation Strategies in Art Image Viewing: An Eye-Tracking Comparison of Art-Educated and Non-Art Participants.
IF 2.8 CAS Zone 4 (Psychology) Q3 OPHTHALMOLOGY Pub Date: 2026-01-30 DOI: 10.3390/jemr19010014
Adem Korkmaz, Sevinc Gülsecen, Grigor Mihaylov

Understanding how tacit knowledge embedded in visual materials is accessed and utilized during evaluation tasks remains a key challenge in human-computer interaction and visual expertise research. Although eye-tracking studies have identified systematic differences between experts and novices, findings remain inconsistent, particularly in art-related visual evaluation contexts. This study examines whether tacit aspects of visual evaluation can be inferred from gaze behavior by comparing individuals with and without formal art education. Visual evaluation was assessed using a structured, prompt-based task in which participants inspected artistic images and responded to items targeting specific visual elements. Eye movements were recorded using a screen-based eye-tracking system. Areas of Interest (AOIs) corresponding to correct-answer regions were defined a priori based on expert judgment and item prompts. Both AOI-level metrics (e.g., fixation count, mean, and total visit and gaze durations) and image-level metrics (e.g., fixation count, saccade count, and pupil size) were analyzed using appropriate parametric and non-parametric statistical tests. The results showed that participants with an art-education background produced more fixations within AOIs, exhibited longer mean and total AOI visit and gaze durations, and demonstrated lower saccade counts than participants without art education. These patterns indicate more systematic and goal-directed gaze behavior during visual evaluation, suggesting that formal art education may shape tacit visual evaluation strategies. The findings also highlight the potential of eye tracking as a methodological tool for studying expertise-related differences in visual evaluation processes.

Citations: 0
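The AOI-level metrics reported in this study (fixation count and total dwell time inside an Area of Interest) reduce to hit-testing fixation centroids against predefined regions. A minimal sketch, assuming a hypothetical data format of `(x, y, duration_ms)` fixations and a rectangular AOI given as `(x, y, width, height)`:

```python
def aoi_metrics(fixations, aoi):
    """Return (fixation_count, total_dwell_ms) for fixations whose
    centroid falls inside a rectangular AOI."""
    ax, ay, aw, ah = aoi
    dwell_times = [d for x, y, d in fixations
                   if ax <= x < ax + aw and ay <= y < ay + ah]
    return len(dwell_times), sum(dwell_times)
```

In practice AOIs are usually defined per stimulus (here, a priori from expert judgment and item prompts), and mean visit duration follows directly as total dwell divided by the number of visits.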
Journal: Journal of Eye Movement Research