
Journal of Eye Movement Research: Latest Publications

Eyes on Prevention: An Eye-Tracking Analysis of Visual Attention Patterns in Breast Cancer Screening Ads.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-12-13 · DOI: 10.3390/jemr18060075
Stefanos Balaskas, Ioanna Yfantidou, Dimitra Skandali

Strong communication is central to the translation of breast cancer screening availability into uptake. This experiment tests the role of design features of screening advertisements in directing visual attention in screening-eligible women (≥40 years). To this end, a within-subjects eye-tracking experiment (N = 30) was conducted in which women viewed six static public service advertisements. Predefined Areas of Interest (AOIs) were annotated (Text, Image/Visual, Symbol, Logo, Website/CTA, and Source/Authority), and three standard measures were calculated: Time to First Fixation (TTFF), Fixation Count (FC), and Fixation Duration (FD). Analyses combined descriptive summaries with subgroup analyses using nonparametric methods and generalized linear mixed models (GLMMs) employing participant-level random intercepts. Within each category of stimuli, detected differences were small in magnitude, although FC trended towards fewer revisits in each category; TTFF and FD showed no significant differences across categories. Viewing the data at the AOI level highlighted pronounced individual differences. Narrative/efficacy text and dense icon/text callouts prolonged processing times, whereas institutional logos and abstract/anatomical symbols generally received brief treatment except when coupled with action-oriented communication triggers. TTFF also tended toward individual AOIs in line with a Scan-Then-Read strategy, in which smaller labels/sources/CTAs are exploited first, before larger headlines/statistical text. Practically, screening messages should co-locate access and credibility information in early-attention areas and employ brief, fluent efficacy text to hold gaze. The study adds PSA-specific eye-tracking evidence for breast cancer screening and provides immediately testable design recommendations for programs in Greece and the EU.
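As a point of reference for these measures, here is a minimal sketch (not the authors' pipeline; column names are hypothetical exporter output) of how TTFF, FC, and FD can be derived from a fixation table:

```python
# Illustrative sketch: the three standard AOI measures named in the abstract,
# computed per participant and AOI from a generic fixation table.
import pandas as pd

fixations = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2],
    "aoi":         ["Text", "Logo", "Text", "Text", "Website/CTA"],
    "start_ms":    [120, 480, 910, 150, 620],   # onset relative to stimulus onset
    "duration_ms": [240, 180, 310, 200, 260],
})

metrics = (
    fixations.groupby(["participant", "aoi"])
    .agg(
        ttff_ms=("start_ms", "min"),    # Time to First Fixation in the AOI
        fc=("duration_ms", "size"),     # Fixation Count on the AOI
        fd_ms=("duration_ms", "sum"),   # total Fixation Duration on the AOI
    )
    .reset_index()
)
print(metrics)
```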

{"title":"Eyes on Prevention: An Eye-Tracking Analysis of Visual Attention Patterns in Breast Cancer Screening Ads.","authors":"Stefanos Balaskas, Ioanna Yfantidou, Dimitra Skandali","doi":"10.3390/jemr18060075","DOIUrl":"10.3390/jemr18060075","url":null,"abstract":"<p><p>Strong communication is central to the translation of breast cancer screening availability into uptake. This experiment tests the role of design features of screening advertisements in directing visual attention in screening-eligible women (≥40 years). To this end, a within-subjects eye-tracking experiment (N = 30) was conducted in which women viewed six static public service advertisements. Predefined Areas of Interest (AOIs), Text, Image/Visual, Symbol, Logo, Website/CTA, and Source/Authority-were annotated, and three standard measures were calculated: Time to First Fixation (TTFF), Fixation Count (FC), and Fixation Duration (FD). Analyses combined descriptive summaries with subgroup analyses using nonparametric methods and generalized linear mixed models (GLMMs) employing participant-level random intercepts. Within each category of stimuli, detected differences were small in magnitude yet trended towards few revisits in each category for the FC mode; TTFF and FD showed no significant differences across categories. Viewing data from the perspective of Areas of Interest (AOIs) highlighted pronounced individual differences. Narratives/efficacy text and dense icon/text callouts prolonged processing times, although institutional logos and abstract/anatomical symbols generally received brief treatment except when coupled with action-oriented communication triggers. TTFF timing also tended toward individual areas of interest aligned with the Scan-Then-Read strategy, in which smaller labels/sources/CTAs are exploited first in comparison with larger headlines/statistical text. Practically, screening messages should co-locate access and credibility information in early-attention areas and employ brief, fluent efficacy text to hold gaze. The study adds PSA-specific eye-tracking evidence for breast cancer screening and provides immediately testable design recommendations for programs in Greece and the EU.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12733868/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145819441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Where Vision Meets Memory: An Eye-Tracking Study of In-App Ads in Mobile Sports Games with Mixed Visual-Quantitative Analytics.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-12-10 · DOI: 10.3390/jemr18060074
Ümit Can Büyükakgül, Arif Yüce, Hakan Katırcı

Mobile games have become one of the fastest-growing segments of the digital economy, and in-app advertisements represent a major source of revenue while shaping consumer attention and memory processes. This study examined the relationship between visual attention and brand recall of in-app advertisements in a mobile sports game using mobile eye-tracking technology. A total of 79 participants (47 male, 32 female; mean age = 25.8 years) actively played a mobile sports game for ten minutes while their eye movements were recorded with Tobii Pro Glasses 2. Areas of interest (AOIs) were defined for embedded advertisements, and fixation-related measures were analyzed. Brand recall was assessed through unaided, verbal-aided, and visual-aided measures, followed by demographic comparisons based on gender, mobile sports game experience, and interest in tennis. Results from Generalized Linear Mixed Models (GLMMs) revealed that brand placement was the strongest predictor of recall (p < 0.001), overriding raw fixation duration. Specifically, brands integrated into task-relevant zones (e.g., the central net area) achieved significantly higher recall odds compared to peripheral ads, regardless of marginal variations in dwell time. While eye movement metrics varied by gender and interest, the multivariate model confirmed that in active gameplay, task-integration drives memory encoding more effectively than passive visual salience. These findings suggest that active gameplay imposes unique cognitive demands, altering how attention and memory interact. The study contributes both theoretically by extending advertising research into ecologically valid gaming contexts and practically by informing strategies for optimizing mobile in-app advertising.
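The recall analysis described here is a logistic GLMM with participant-level random intercepts. A minimal sketch of such a model in Python, using statsmodels' variational-Bayes mixed GLM on simulated data (variable names and effect sizes are hypothetical, not the study's):

```python
# Illustrative sketch: logistic mixed model of brand recall with a
# participant-level random intercept, fit by variational Bayes.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "participant": rng.integers(0, 79, n),       # 79 participants, as above
    "task_zone": rng.integers(0, 2, n),          # 1 = task-relevant placement
    "dwell_ms": rng.gamma(2.0, 150.0, n),        # dwell time on the ad AOI
})
# Simulated ground truth: placement matters more than dwell time.
logit = -0.8 + 1.2 * df["task_zone"] + 0.0005 * df["dwell_ms"]
df["recalled"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = BinomialBayesMixedGLM.from_formula(
    "recalled ~ task_zone + dwell_ms",
    {"participant": "0 + C(participant)"},   # random intercept per participant
    df,
)
print(model.fit_vb().summary())
```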

{"title":"Where Vision Meets Memory: An Eye-Tracking Study of In-App Ads in Mobile Sports Games with Mixed Visual-Quantitative Analytics.","authors":"Ümit Can Büyükakgül, Arif Yüce, Hakan Katırcı","doi":"10.3390/jemr18060074","DOIUrl":"10.3390/jemr18060074","url":null,"abstract":"<p><p>Mobile games have become one of the fastest-growing segments of the digital economy, and in-app advertisements represent a major source of revenue while shaping consumer attention and memory processes. This study examined the relationship between visual attention and brand recall of in-app advertisements in a mobile sports game using mobile eye-tracking technology. A total of 79 participants (47 male, 32 female; Mage = 25.8) actively played a mobile sports game for ten minutes while their eye movements were recorded with Tobii Pro Glasses 2. Areas of interest (AOIs) were defined for embedded advertisements, and fixation-related measures were analyzed. Brand recall was assessed through unaided, verbal-aided, and visual-aided measures, followed by demographic comparisons based on gender, mobile sports game experience and interest in tennis. Results from Generalized Linear Mixed Models (GLMMs) revealed that brand placement was the strongest predictor of recall (<i>p</i> < 0.001), overriding raw fixation duration. Specifically, brands integrated into task-relevant zones (e.g., the central net area) achieved significantly higher recall odds compared to peripheral ads, regardless of marginal variations in dwell time. While eye movement metrics varied by gender and interest, the multivariate model confirmed that in active gameplay, task-integration drives memory encoding more effectively than passive visual salience. These findings suggest that active gameplay imposes unique cognitive demands, altering how attention and memory interact. The study contributes both theoretically by extending advertising research into ecologically valid gaming contexts and practically by informing strategies for optimizing mobile in-app advertising.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12733859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145819591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Initial and Sustained Attentional Bias Toward Emotional Faces in Patients with Major Depressive Disorder.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-12-01 · DOI: 10.3390/jemr18060072
Hanliang Wei, Tak Kwan Lam, Weijian Liu, Waxun Su, Zheng Wang, Qiandong Wang, Xiao Lin, Peng Li

Major depressive disorder (MDD) represents a prevalent mental health condition characterized by prominent attentional biases, particularly toward negative stimuli. While extensive research has established the significance of negative attentional bias in depression, critical gaps remain in understanding the temporal dynamics and valence-specificity of these biases. This study employed eye-tracking technology to systematically examine the attentional processing of emotional faces (happy, fearful, sad) in MDD patients (n = 61) versus healthy controls (HC, n = 47), assessing both the initial orientation (initial gaze preference) and sustained attention (first dwell time). Key findings revealed the following: (1) while both groups showed an initial vigilance toward threatening faces (fearful/sad), only MDD patients displayed an additional attentional capture by happy faces; (2) a significant emotion main effect (F (2, 216) = 10.19, p < 0.001) indicated a stronger initial orientation to fearful versus happy faces, with Bayesian analyses (BF < 0.3) confirming the absence of group differences; and (3) no group disparities emerged in sustained attentional maintenance (all ps > 0.05). These results challenge conventional negativity-focused models by demonstrating valence-specific early-stage abnormalities in MDD, suggesting that depressive attentional dysfunction may be most pronounced during initial automatic processing rather than later strategic stages. The findings advance the theoretical understanding of attentional bias in depression while highlighting the need for stage-specific intervention approaches.
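For readers unfamiliar with these two measures, a small sketch (hypothetical field names, not the authors' code) of how initial orientation and first dwell time can be computed from a temporally ordered per-trial fixation sequence:

```python
# Illustrative sketch: initial orientation = AOI of the first fixation in a
# trial; first dwell time = gaze time spent in that AOI before leaving it.
import pandas as pd

fix = pd.DataFrame({
    "trial":       [1, 1, 1, 2, 2],
    "aoi":         ["fearful", "neutral", "fearful", "happy", "neutral"],
    "duration_ms": [180, 220, 260, 150, 300],
})  # rows are assumed to be in temporal order within each trial

def first_visit(group: pd.DataFrame) -> pd.Series:
    first_aoi = group["aoi"].iloc[0]
    # Leading run of consecutive fixations on the first AOI = first visit.
    run = (group["aoi"] == first_aoi).cummin()
    return pd.Series({
        "initial_orientation": first_aoi,
        "first_dwell_ms": group.loc[run, "duration_ms"].sum(),
    })

print(fix.groupby("trial").apply(first_visit))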

{"title":"Initial and Sustained Attentional Bias Toward Emotional Faces in Patients with Major Depressive Disorder.","authors":"Hanliang Wei, Tak Kwan Lam, Weijian Liu, Waxun Su, Zheng Wang, Qiandong Wang, Xiao Lin, Peng Li","doi":"10.3390/jemr18060072","DOIUrl":"10.3390/jemr18060072","url":null,"abstract":"<p><p>Major depressive disorder (MDD) represents a prevalent mental health condition characterized by prominent attentional biases, particularly toward negative stimuli. While extensive research has established the significance of negative attentional bias in depression, critical gaps remain in understanding the temporal dynamics and valence-specificity of these biases. This study employed eye-tracking technology to systematically examine the attentional processing of emotional faces (happy, fearful, sad) in MDD patients (<i>n</i> = 61) versus healthy controls (HC, n = 47), assessing both the initial orientation (initial gaze preference) and sustained attention (first dwell time). Key findings revealed the following: (1) while both groups showed an initial vigilance toward threatening faces (fearful/sad), only MDD patients displayed an additional attentional capture by happy faces; (2) a significant emotion main effect (F (2, 216) = 10.19, <i>p</i> < 0.001) indicated a stronger initial orientation to fearful versus happy faces, with Bayesian analyses (BF < 0.3) confirming the absence of group differences; and (3) no group disparities emerged in sustained attentional maintenance (all <i>p</i>s > 0.05). These results challenge conventional negativity-focused models by demonstrating valence-specific early-stage abnormalities in MDD, suggesting that depressive attentional dysfunction may be most pronounced during initial automatic processing rather than later strategic stages. The findings advance the theoretical understanding of attentional bias in depression while highlighting the need for stage-specific intervention approaches.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12734090/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145819461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust Camera-Based Eye-Tracking Method Allowing Head Movements and Its Application in User Experience Research.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-12-01 · DOI: 10.3390/jemr18060071
He Zhang, Lu Yin

Eye-tracking for user experience analysis has traditionally relied on dedicated hardware, which is often costly and imposes restrictive operating conditions. As an alternative, solutions utilizing ordinary webcams have attracted significant interest due to their affordability and ease of use. However, a major limitation persists in these vision-based methods: sensitivity to head movements. Therefore, users are often required to maintain a rigid head position, leading to discomfort and potentially skewed results. To address this challenge, this paper proposes a robust eye-tracking methodology designed to accommodate head motion. Our core technique involves mapping the displacement of the pupil center from a dynamically updated reference point to estimate the gaze point. When head movement is detected, the system recalculates the head-pointing coordinate using estimated head pose and user-to-screen distance. This new head position and the corresponding pupil center are then established as the fresh benchmark for subsequent gaze point estimation, creating a continuous and adaptive correction loop. We conducted accuracy tests with 22 participants. The results demonstrate that our method surpasses the performance of many current methods, achieving mean gaze errors of 1.13 and 1.37 degrees in two testing modes. Further validation in a smooth pursuit task confirmed its efficacy in dynamic scenarios. Finally, we applied the method in a real-world gaming context, successfully extracting fixation counts and gaze heatmaps to analyze visual behavior and UX across different game modes, thereby verifying its practical utility.
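A minimal sketch of the adaptive correction loop as we read it from this description (our illustration, not the authors' implementation; the gain, motion threshold, and small-angle projection are assumptions):

```python
# Illustrative sketch: gaze is estimated from pupil-center displacement
# relative to a reference point; the reference is re-established whenever
# head motion is detected, as described in the abstract.
import numpy as np

GAIN = np.array([9.0, 9.0])          # hypothetical px-per-pupil-unit mapping
HEAD_MOVE_THRESH_DEG = 2.0           # hypothetical head-rotation threshold

def head_point(head_pose_deg, distance_px, screen_center):
    """Project head orientation onto the screen (small-angle geometry)."""
    yaw, pitch = np.radians(head_pose_deg)
    return screen_center + distance_px * np.array([np.tan(yaw), -np.tan(pitch)])

def track(frames, screen_center=np.array([960.0, 540.0])):
    ref_pupil, ref_point, last_pose = None, None, None
    for f in frames:  # each frame: pupil center, head pose, viewing distance
        pose = np.asarray(f["head_pose_deg"], dtype=float)
        moved = last_pose is None or np.abs(pose - last_pose).max() > HEAD_MOVE_THRESH_DEG
        if moved:
            # Re-anchor: the current head-pointing coordinate and pupil center
            # become the fresh benchmark for subsequent gaze estimates.
            ref_point = head_point(pose, f["distance_px"], screen_center)
            ref_pupil = np.asarray(f["pupil_px"], dtype=float)
        gaze = ref_point + GAIN * (np.asarray(f["pupil_px"]) - ref_pupil)
        last_pose = pose
        yield gaze

frames = [
    {"pupil_px": [300, 200], "head_pose_deg": [0, 0], "distance_px": 2000},
    {"pupil_px": [305, 198], "head_pose_deg": [0, 0], "distance_px": 2000},
    {"pupil_px": [310, 205], "head_pose_deg": [5, 0], "distance_px": 2000},  # head turn
]
for g in track(frames):
    print(g)
```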

{"title":"Robust Camera-Based Eye-Tracking Method Allowing Head Movements and Its Application in User Experience Research.","authors":"He Zhang, Lu Yin","doi":"10.3390/jemr18060071","DOIUrl":"10.3390/jemr18060071","url":null,"abstract":"<p><p>Eye-tracking for user experience analysis has traditionally relied on dedicated hardware, which is often costly and imposes restrictive operating conditions. As an alternative, solutions utilizing ordinary webcams have attracted significant interest due to their affordability and ease of use. However, a major limitation persists in these vision-based methods: sensitivity to head movements. Therefore, users are often required to maintain a rigid head position, leading to discomfort and potentially skewed results. To address this challenge, this paper proposes a robust eye-tracking methodology designed to accommodate head motion. Our core technique involves mapping the displacement of the pupil center from a dynamically updated reference point to estimate the gaze point. When head movement is detected, the system recalculates the head-pointing coordinate using estimated head pose and user-to-screen distance. This new head position and the corresponding pupil center are then established as the fresh benchmark for subsequent gaze point estimation, creating a continuous and adaptive correction loop. We conducted accuracy tests with 22 participants. The results demonstrate that our method surpasses the performance of many current methods, achieving mean gaze errors of 1.13 and 1.37 degrees in two testing modes. Further validation in a smooth pursuit task confirmed its efficacy in dynamic scenarios. Finally, we applied the method in a real-world gaming context, successfully extracting fixation counts and gaze heatmaps to analyze visual behavior and UX across different game modes, thereby verifying its practical utility.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12734114/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145819418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating the Effect of Presentation Mode on Cognitive Load in English-Chinese Distance Simultaneous Interpreting: An Eye-Tracking Study.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-12-01 · DOI: 10.3390/jemr18060073
Xuelian Rachel Zhu

Distance simultaneous interpreting (DSI) is a typical example of technology-mediated interpreting, bridging participants (i.e., interpreters, audience, and speakers) in various events and conferences. This study explores how presentation mode affects cognitive load in DSI, utilizing eye-tracking sensor technology. A controlled experiment was conducted involving 36 participants, comprising 19 professional interpreters and 17 student interpreters, to assess the effects of presentation mode on their cognitive load during English-to-Chinese DSI. A Tobii Pro X3-120 screen-based eye tracker was used to collect eye-tracking data as the participants sequentially performed a DSI task involving four distinct presentation modes: the Speaker, Slides, Split, and Corner modes. The findings, derived from the integration of eye-tracking data and interpreting performance scores, indicate that both presentation mode and experience level significantly influence interpreters' cognitive load. Notably, student interpreters demonstrated longer fixation durations in the Slides mode, indicating a reliance on visual aids for DSI. These results have implications for language learning, suggesting that the integration of visual supports can aid in the acquisition and performance of interpreting skills, particularly for less experienced interpreters. This study contributes to our understanding of the interplay between technology, cognitive load, and language learning in the context of DSI.
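A sketch of one simple within-subjects check consistent with this design, comparing per-participant mean fixation durations across the four modes with a Friedman test (simulated data; the study's actual statistics may differ):

```python
# Illustrative sketch: nonparametric within-subjects comparison of mean
# fixation duration across the Speaker, Slides, Split, and Corner modes.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
n = 36  # participants, as in the abstract
speaker = rng.normal(240, 30, n)   # mean fixation duration (ms) per mode
slides  = rng.normal(260, 30, n)
split   = rng.normal(255, 30, n)
corner  = rng.normal(250, 30, n)

stat, p = friedmanchisquare(speaker, slides, split, corner)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")
```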

{"title":"Investigating the Effect of Presentation Mode on Cognitive Load in English-Chinese Distance Simultaneous Interpreting: An Eye-Tracking Study.","authors":"Xuelian Rachel Zhu","doi":"10.3390/jemr18060073","DOIUrl":"10.3390/jemr18060073","url":null,"abstract":"<p><p>Distance simultaneous interpreting is a typical example of technology-mediated interpreting, bridging participants (i.e., interpreters, audience, and speakers) in various events and conferences. This study explores how presentation mode affects cognitive load in DSI, utilizing eye-tracking sensor technology. A controlled experiment was conducted involving 36 participants, comprising 19 professional interpreters and 17 student interpreters, to assess the effects of presentation mode on their cognitive load during English-to-Chinese DSI. A Tobii Pro X3-120 screen-based eye tracker was used to collect eye-tracking data as the participants sequentially performed a DSI task involving four distinct presentation modes: the Speaker, Slides, Split, and Corner modes. The findings, derived from the integration of eye-tracking data and interpreting performance scores, indicate that both presentation mode and experience level significantly influence interpreters' cognitive load. Notably, student interpreters demonstrated longer fixation durations in the Slides mode, indicating a reliance on visual aids for DSI. These results have implications for language learning, suggesting that the integration of visual supports can aid in the acquisition and performance of interpreting skills, particularly for less experienced interpreters. This study contributes to our understanding of the interplay between technology, cognitive load, and language learning in the context of DSI.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12734073/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145819436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Measuring Mental Effort in Real Time Using Pupillometry.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-11-24 · DOI: 10.3390/jemr18060070
Gavindya Jayawardena, Yasith Jayawardana, Jacek Gwizdka

Mental effort, a critical factor influencing task performance, is often difficult to measure accurately and efficiently. Pupil diameter has emerged as a reliable, real-time indicator of mental effort. This study introduces RIPA2, an enhanced pupillometric index for real-time mental effort assessment. Building on the original RIPA method, RIPA2 incorporates refined Savitzky-Golay filter parameters to better isolate pupil diameter fluctuations within biologically relevant frequency bands linked to cognitive load. We validated RIPA2 across two distinct tasks: a structured N-back memory task and a naturalistic information search task involving fact-checking and decision-making scenarios. Our findings show that RIPA2 reliably tracks variations in mental effort, demonstrating improved sensitivity and consistency over the original RIPA and strong alignment with the established offline measures of pupil-based cognitive load indices, such as LHIPA. Notably, RIPA2 captured increased mental effort at higher N-back levels and successfully distinguished greater effort during decision-making tasks compared to fact-checking tasks, highlighting its applicability to real-world cognitive demands. These findings suggest that RIPA2 provides a robust, continuous, and low-latency method for assessing mental effort. It holds strong potential for broader use in educational settings, medical environments, workplaces, and adaptive user interfaces, facilitating objective monitoring of mental effort beyond laboratory conditions.
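A minimal sketch of the band-isolation idea behind RIPA-style indices, using SciPy's Savitzky-Golay filter to split a pupil trace into a slow trend and faster residual fluctuations (window and order values are illustrative, not the tuned RIPA2 parameters):

```python
# Illustrative sketch: separating slow and fast components of a pupil-diameter
# trace with a Savitzky-Golay filter, the kind of filtering the abstract names.
import numpy as np
from scipy.signal import savgol_filter

fs = 120  # Hz, hypothetical eye-tracker sampling rate
t = np.arange(0, 10, 1 / fs)
pupil = (3.5
         + 0.2 * np.sin(2 * np.pi * 0.05 * t)    # slow drift / light response
         + 0.05 * np.sin(2 * np.pi * 1.2 * t)    # faster, load-linked band
         + 0.01 * np.random.default_rng(0).standard_normal(t.size))

# Long window -> slow trend; the residual keeps the faster fluctuations
# within the frequency band associated with cognitive load.
trend = savgol_filter(pupil, window_length=241, polyorder=3)
fast = pupil - trend

print(f"fast-band RMS: {np.sqrt(np.mean(fast**2)):.4f} mm")
```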

{"title":"Measuring Mental Effort in Real Time Using Pupillometry.","authors":"Gavindya Jayawardena, Yasith Jayawardana, Jacek Gwizdka","doi":"10.3390/jemr18060070","DOIUrl":"10.3390/jemr18060070","url":null,"abstract":"<p><p>Mental effort, a critical factor influencing task performance, is often difficult to measure accurately and efficiently. Pupil diameter has emerged as a reliable, real-time indicator of mental effort. This study introduces RIPA2, an enhanced pupillometric index for real-time mental effort assessment. Building on the original RIPA method, RIPA2 incorporates refined Savitzky-Golay filter parameters to better isolate pupil diameter fluctuations within biologically relevant frequency bands linked to cognitive load. We validated RIPA2 across two distinct tasks: a structured N-back memory task and a naturalistic information search task involving fact-checking and decision-making scenarios. Our findings show that RIPA2 reliably tracks variations in mental effort, demonstrating improved sensitivity and consistency over the original RIPA and strong alignment with the established offline measures of pupil-based cognitive load indices, such as LHIPA. Notably, RIPA2 captured increased mental effort at higher N-back levels and successfully distinguished greater effort during decision-making tasks compared to fact-checking tasks, highlighting its applicability to real-world cognitive demands. These findings suggest that RIPA2 provides a robust, continuous, and low-latency method for assessing mental effort. It holds strong potential for broader use in educational settings, medical environments, workplaces, and adaptive user interfaces, facilitating objective monitoring of mental effort beyond laboratory conditions.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12733481/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145819401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Visual Attention to Food Content on Social Media: An Eye-Tracking Study Among Young Adults.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-11-20 · DOI: 10.3390/jemr18060069
Aura Lydia Riswanto, Seieun Kim, Youngsam Ha, Hak-Seon Kim

Social media has become a dominant channel for food marketing, particularly targeting youth through visually engaging and socially embedded content. This study investigates how young adults visually engage with food advertisements on social media and how specific visual and contextual features influence purchase intention. Using eye-tracking technology and survey analysis, data were collected from 35 participants aged 18 to 25. Participants viewed simulated Instagram posts incorporating elements such as food imagery, branding, influencer presence, and social cues. Visual attention was recorded using Tobii Pro Spectrum, and behavioral responses were assessed via post-surveys. A 2 × 2 design varying influencer presence and food type showed that both features significantly increased visual attention. Marketing cues and branding also attracted substantial visual attention. Linear regression revealed that core/non-core content and influencer features were among the strongest predictors of consumer response. The findings underscore the persuasive power of human and social features in digital food advertising. These insights have implications for commercial marketing practices and for understanding how visual and social elements influence youth engagement with food content on digital platforms.
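A sketch of the linear-regression step on simulated attention features (hypothetical variable names and coefficients, not the study's data):

```python
# Illustrative sketch: regressing purchase intention on dwell-time features
# for core content, influencer, and branding AOIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 35  # participants, as in the abstract
df = pd.DataFrame({
    "dwell_core": rng.gamma(2, 400, n),        # ms on core (food) content
    "dwell_influencer": rng.gamma(2, 300, n),  # ms on influencer AOI
    "dwell_brand": rng.gamma(2, 200, n),       # ms on branding AOI
})
df["purchase_intention"] = (
    2.0 + 0.002 * df["dwell_core"] + 0.003 * df["dwell_influencer"]
    + rng.normal(0, 0.5, n)
).clip(1, 7)  # 7-point scale

model = smf.ols(
    "purchase_intention ~ dwell_core + dwell_influencer + dwell_brand", df
).fit()
print(model.summary().tables[1])
```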

{"title":"Visual Attention to Food Content on Social Media: An Eye-Tracking Study Among Young Adults.","authors":"Aura Lydia Riswanto, Seieun Kim, Youngsam Ha, Hak-Seon Kim","doi":"10.3390/jemr18060069","DOIUrl":"10.3390/jemr18060069","url":null,"abstract":"<p><p>Social media has become a dominant channel for food marketing, particularly targeting youth through visually engaging and socially embedded content. This study investigates how young adults visually engage with food advertisements on social media and how specific visual and contextual features influence purchase intention. Using eye-tracking technology and survey analysis, data were collected from 35 participants aged 18 to 25. Participants viewed simulated Instagram posts incorporating elements such as food imagery, branding, influencer presence, and social cues. Visual attention was recorded using Tobii Pro Spectrum, and behavioral responses were assessed via post-surveys. A 2 × 2 design varying influencer presence and food type showed that both features significantly increased visual attention. Marketing cues and branding also attracted substantial visual attention. Linear regression revealed that core/non-core content and influencer features were among the strongest predictors of consumer response. The findings underscore the persuasive power of human and social features in digital food advertising. These insights have implications for commercial marketing practices and for understanding how visual and social elements influence youth engagement with food content on digital platforms.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641903/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gaze Characteristics Using a Three-Dimensional Heads-Up Display During Cataract Surgery.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-11-17 · DOI: 10.3390/jemr18060068
Puranjay Gupta, Emily Kao, Neil Sheth, Reem Alahmadi, Michael J Heiferman

Purpose: An observational study was conducted to investigate differences in gaze behaviors across varying expertise levels using a 3D heads-up display (HUD) with integrated eye-tracking. Methods: 25 ophthalmologists (PGY2-4, fellows, attendings; n = 5/group) performed cataract surgery on a SimulEYE model using the NGENUITY HUD. Results: Surgical proficiency increased with experience, with attendings achieving the highest scores (54.4 ± 0.89). Compared with attendings, PGY2s had longer fixation durations (p = 0.042), longer saccades (p < 0.0001), and fewer fixations on the HUD (p < 0.0001). Capsulorhexis diameter relative to capsule size increased with expertise, with fellows and attendings achieving significantly larger diameters than PGY2s (p < 0.0001). Experts maintained smaller tear angles, initiated tears closer to the main wound, and produced more circular morphologies. They rapidly alternated gaze between instruments and surrounding tissue, whereas novices (PGY2-4) fixated primarily on the instrument tip. Conclusions: Experts employ a feed-forward visual sampling strategy, allowing perception of instruments and surrounding tissue and minimizing inadvertent damage. Furthermore, attending surgeons maintain smaller tear angles and initiate tears proximally to forceps insertion, which may contribute to more controlled tears. Future integration of eye-tracking technology into surgical training could enhance visual-motor strategies in novices.
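Saccade length in visual degrees, one of the metrics compared above, is a standard computation from successive fixation centroids; a small sketch under assumed display geometry (not the study's pipeline):

```python
# Illustrative sketch: saccade amplitude in visual degrees between successive
# fixation centroids on a display, using the 2*atan(size / (2*distance)) rule.
import numpy as np

PX_PER_CM = 38.0       # display resolution / physical size (assumed)
VIEW_DIST_CM = 60.0    # surgeon-to-display viewing distance (assumed)

def saccade_amplitudes_deg(fix_xy_px: np.ndarray) -> np.ndarray:
    """Angular distance between successive fixations."""
    d_px = np.linalg.norm(np.diff(fix_xy_px, axis=0), axis=1)
    d_cm = d_px / PX_PER_CM
    return np.degrees(2 * np.arctan(d_cm / (2 * VIEW_DIST_CM)))

fixations = np.array([[900, 500], [1050, 520], [700, 480]], dtype=float)
print(saccade_amplitudes_deg(fixations))
```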

{"title":"Gaze Characteristics Using a Three-Dimensional Heads-Up Display During Cataract Surgery.","authors":"Puranjay Gupta, Emily Kao, Neil Sheth, Reem Alahmadi, Michael J Heiferman","doi":"10.3390/jemr18060068","DOIUrl":"10.3390/jemr18060068","url":null,"abstract":"<p><p><b>Purpose:</b> An observational study to investigate differences in gaze behaviors across varying expertise levels using a 3D heads-up display (HUD) integrated with eye-tracking was conducted. <b>Methods:</b> 25 ophthalmologists (PGY2-4, fellows, attendings; number(n) = 5/group) performed cataract surgery on a SimulEYE model using NGENUITY HUD. <b>Results:</b> Surgical proficiency increased with experience, with attendings achieving the highest scores (54.4 ± 0.89). Compared with attendings, PGY2s had longer fixation durations (<i>p</i> = 0.042), longer saccades (<i>p</i> < 0.0001), and fewer fixations on the HUD (<i>p</i> < 0.0001). Capsulorhexis diameter relative to capsule size increased with expertise, with fellows and attendings achieving significantly larger diameters than PGY2s (<i>p</i> < 0.0001). Experts maintained smaller tear angles, initiated tears closer to the main wound, and produced more circular morphologies. They rapidly alternated gaze between instruments and surrounding tissue, whereas novices (PGY2-4) fixated primarily on the instrument tip. <b>Conclusions:</b> Experts employ a feed-forward visual sampling strategy, allowing perception of instruments and surrounding tissue, minimizing inadvertent damage. Furthermore, attending surgeons maintain smaller tear angles and initiate tears proximally to forceps insertion, which may contribute to more controlled tears. Future integration of eye-tracking technology into surgical training could enhance visual-motor strategies in novices.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641938/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BEACH-Gaze: Supporting Descriptive and Predictive Gaze Analytics in the Era of Artificial Intelligence and Advanced Data Science.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-11-12 · DOI: 10.3390/jemr18060067
Bo Fu, Kayla Chu, Angelo Ryan Soriano, Peter Gatsby, Nicolas Guardado Guardado, Ashley Jones, Matthew Halderman

Recent breakthroughs in machine learning, artificial intelligence, and the emergence of large datasets have made the integration of eye tracking increasingly feasible not only in computing but also in many other disciplines to accelerate innovation and scientific discovery. These transformative changes often depend on intelligently analyzing and interpreting gaze data, which demand a substantial technical background. Overcoming these technical barriers has remained an obstacle to the broader adoption of eye tracking technologies in certain communities. In an effort to increase accessibility that potentially empowers a broader community of researchers and practitioners to leverage eye tracking, this paper presents an open-source software platform: Beach Environment for the Analytics of Human Gaze (BEACH-Gaze), designed to offer comprehensive descriptive and predictive analytical support. Firstly, BEACH-Gaze provides sequential gaze analytics through window segmentation in its data processing and analysis pipeline, which can be used to achieve simulations of real-time gaze-based systems. Secondly, it integrates a range of established machine learning models, allowing researchers from diverse disciplines to generate gaze-enabled predictions without advanced technical expertise. The overall goal is to simplify technical details and to aid the broader community interested in eye tracking research and applications in data interpretation, and to leverage knowledge gained from eye gaze in the development of machine intelligence. As such, we further demonstrate three use cases that apply descriptive and predictive gaze analytics to support individuals with autism spectrum disorder during technology-assisted exercises, to dynamically tailor visual cues for an individual user via physiologically adaptive visualizations, and to predict pilots' performance in flight maneuvers to enhance aviation safety.
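A minimal sketch of the window-segmentation idea named here, cutting a gaze stream into fixed tumbling windows and emitting per-window features (our illustration, not BEACH-Gaze source code):

```python
# Illustrative sketch: tumbling-window segmentation of a gaze/pupil stream,
# the step that enables simulated real-time gaze-based prediction.
import numpy as np

def window_features(timestamps_ms, pupil_mm, window_ms=1000):
    """Yield (window_start_ms, n_samples, mean_pupil_mm) per tumbling window."""
    edges = np.arange(timestamps_ms[0], timestamps_ms[-1] + window_ms, window_ms)
    idx = np.digitize(timestamps_ms, edges) - 1   # window index per sample
    for w in np.unique(idx):
        mask = idx == w
        yield edges[w], int(mask.sum()), float(pupil_mm[mask].mean())

ts = np.arange(0, 3000, 8.3)                 # ~120 Hz samples over 3 s
pupil = 3.0 + 0.1 * np.sin(ts / 300.0)
for start, n, mean_p in window_features(ts, pupil):
    print(f"window @{start:.0f} ms: {n} samples, mean pupil {mean_p:.3f} mm")
```

Per-window feature vectors like these can then be fed to whichever machine learning model is configured, which is how the descriptive pipeline feeds the predictive one.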

{"title":"BEACH-Gaze: Supporting Descriptive and Predictive Gaze Analytics in the Era of Artificial Intelligence and Advanced Data Science.","authors":"Bo Fu, Kayla Chu, Angelo Ryan Soriano, Peter Gatsby, Nicolas Guardado Guardado, Ashley Jones, Matthew Halderman","doi":"10.3390/jemr18060067","DOIUrl":"10.3390/jemr18060067","url":null,"abstract":"<p><p>Recent breakthroughs in machine learning, artificial intelligence, and the emergence of large datasets have made the integration of eye tracking increasingly feasible not only in computing but also in many other disciplines to accelerate innovation and scientific discovery. These transformative changes often depend on intelligently analyzing and interpreting gaze data, which demand a substantial technical background. Overcoming these technical barriers has remained an obstacle to the broader adoption of eye tracking technologies in certain communities. In an effort to increase accessibility that potentially empowers a broader community of researchers and practitioners to leverage eye tracking, this paper presents an open-source software platform: <i>B</i>each <i>E</i>nvironment for the <i>A</i>nalyti<i>c</i>s of <i>H</i>uman <i>Gaze</i> (BEACH-Gaze), designed to offer comprehensive descriptive and predictive analytical support. Firstly, BEACH-Gaze provides sequential gaze analytics through window segmentation in its data processing and analysis pipeline, which can be used to achieve simulations of real-time gaze-based systems. Secondly, it integrates a range of established machine learning models, allowing researchers from diverse disciplines to generate gaze-enabled predictions without advanced technical expertise. The overall goal is to simplify technical details and to aid the broader community interested in eye tracking research and applications in data interpretation, and to leverage knowledge gained from eye gaze in the development of machine intelligence. As such, we further demonstrate three use cases that apply descriptive and predictive gaze analytics to support individuals with autism spectrum disorder during technology-assisted exercises, to dynamically tailor visual cues for an individual user via physiologically adaptive visualizations, and to predict pilots' performance in flight maneuvers to enhance aviation safety.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641676/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Recovery of the Pupillary Response After Light Adaptation Is Slowed in Patients with Age-Related Macular Degeneration.
IF 2.8 · CAS Zone 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-11-10 · DOI: 10.3390/jemr18060066
Javier Barranco Garcia, Thomas Ferrazzini, Ana Coito, Dominik Brügger, Mathias Abegg

Purpose: This study evaluates a novel, non-invasive method using a virtual reality (VR) headset with integrated eye trackers to assess retinal function by measuring the recovery of the pupillary response after light adaptation in patients with age-related macular degeneration (AMD). Methods: In this pilot study, fourteen patients with clinically confirmed AMD and 14 age-matched healthy controls were exposed to alternating bright and dark stimuli using a VR headset. The dark stimulus duration increased incrementally by 100 milliseconds per trial, repeated over 50 cycles. The pupillary response to the re-onset of brightness was recorded. Data were analyzed using a linear mixed-effects model to compare recovery patterns between groups and a convolutional neural network to evaluate diagnostic accuracy. Results: The pupillary response amplitude increased with longer dark stimuli, i.e., the longer the eye was exposed to darkness the bigger was the subsequent pupillary amplitude. This pupillary recovery was significantly slowed by age and by the presence of macular degeneration. Test diagnostic accuracy for AMD was approximately 92%, with a sensitivity of 90% and a specificity of 70%. Conclusions: This proof-of-concept study demonstrates that consumer-grade VR headsets with integrated eye tracking can detect retinal dysfunction associated with AMD. The method offers a fast, accessible, and potentially scalable approach for retinal disease screening and monitoring. Further optimization and validation in larger cohorts are needed to confirm its clinical utility.
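A sketch of the stimulus schedule as described, with the dark phase lengthening by 100 ms on each of 50 cycles (the bright-phase duration and the starting dark duration are assumptions, not taken from the paper):

```python
# Illustrative sketch: alternating bright/dark schedule with an incrementally
# longer dark phase per cycle, as the abstract describes.
BRIGHT_MS = 2000          # assumed fixed bright-adaptation phase
BASE_DARK_MS = 100        # assumed starting dark duration
STEP_MS = 100             # per-trial increment (from the abstract)
N_CYCLES = 50             # number of cycles (from the abstract)

schedule = [
    {"cycle": i + 1, "bright_ms": BRIGHT_MS, "dark_ms": BASE_DARK_MS + i * STEP_MS}
    for i in range(N_CYCLES)
]
print(schedule[0], schedule[-1])
# The pupil amplitude at each bright re-onset is then modeled against dark_ms;
# the abstract reports slower recovery (a shallower rise) with age and AMD.
```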

{"title":"Recovery of the Pupillary Response After Light Adaptation Is Slowed in Patients with Age-Related Macular Degeneration.","authors":"Javier Barranco Garcia, Thomas Ferrazzini, Ana Coito, Dominik Brügger, Mathias Abegg","doi":"10.3390/jemr18060066","DOIUrl":"10.3390/jemr18060066","url":null,"abstract":"<p><p><b>Purpose:</b> This study evaluates a novel, non-invasive method using a virtual reality (VR) headset with integrated eye trackers to assess retinal function by measuring the recovery of the pupillary response after light adaptation in patients with age-related macular degeneration (AMD). <b>Methods:</b> In this pilot study, fourteen patients with clinically confirmed AMD and 14 age-matched healthy controls were exposed to alternating bright and dark stimuli using a VR headset. The dark stimulus duration increased incrementally by 100 milliseconds per trial, repeated over 50 cycles. The pupillary response to the re-onset of brightness was recorded. Data were analyzed using a linear mixed-effects model to compare recovery patterns between groups and a convolutional neural network to evaluate diagnostic accuracy. <b>Results:</b> The pupillary response amplitude increased with longer dark stimuli, i.e., the longer the eye was exposed to darkness the bigger was the subsequent pupillary amplitude. This pupillary recovery was significantly slowed by age and by the presence of macular degeneration. Test diagnostic accuracy for AMD was approximately 92%, with a sensitivity of 90% and a specificity of 70%. <b>Conclusions:</b> This proof-of-concept study demonstrates that consumer-grade VR headsets with integrated eye tracking can detect retinal dysfunction associated with AMD. The method offers a fast, accessible, and potentially scalable approach for retinal disease screening and monitoring. Further optimization and validation in larger cohorts are needed to confirm its clinical utility.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641904/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0