Gavindya Jayawardena, Yasith Jayawardana, Jacek Gwizdka
Mental effort, a critical factor influencing task performance, is often difficult to measure accurately and efficiently. Pupil diameter has emerged as a reliable, real-time indicator of mental effort. This study introduces RIPA2, an enhanced pupillometric index for real-time mental effort assessment. Building on the original RIPA method, RIPA2 incorporates refined Savitzky-Golay filter parameters to better isolate pupil diameter fluctuations within biologically relevant frequency bands linked to cognitive load. We validated RIPA2 across two distinct tasks: a structured N-back memory task and a naturalistic information search task involving fact-checking and decision-making scenarios. Our findings show that RIPA2 reliably tracks variations in mental effort, demonstrating improved sensitivity and consistency over the original RIPA and strong alignment with established offline pupil-based cognitive-load indices such as LHIPA. Notably, RIPA2 captured increased mental effort at higher N-back levels and successfully distinguished the greater effort of decision-making tasks from that of fact-checking tasks, highlighting its applicability to real-world cognitive demands. These findings suggest that RIPA2 provides a robust, continuous, low-latency method for assessing mental effort. It holds strong potential for broader use in educational settings, medical environments, workplaces, and adaptive user interfaces, facilitating objective monitoring of mental effort beyond laboratory conditions.
"Measuring Mental Effort in Real Time Using Pupillometry." Journal of Eye Movement Research 18(6), published 2025-11-24. doi:10.3390/jemr18060070. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12733481/pdf/
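A minimal sketch of the kind of Savitzky-Golay smoothing the abstract describes, using SciPy. The sampling rate, window length, and polynomial order below are illustrative assumptions, not the tuned RIPA2 parameters reported in the paper:

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_pupil(diameter_mm, sample_rate_hz=60, window_s=0.5, polyorder=2):
    """Smooth a pupil-diameter trace with a Savitzky-Golay filter.

    window_s and polyorder are illustrative defaults, not the tuned
    parameters used by RIPA2.
    """
    window = int(window_s * sample_rate_hz)
    if window % 2 == 0:
        window += 1  # savgol_filter requires an odd window length
    return savgol_filter(diameter_mm, window_length=window, polyorder=polyorder)

# Synthetic example: a 3 mm baseline with slow dilation plus measurement noise.
rng = np.random.default_rng(42)
t = np.linspace(0, 10, 600)                          # 10 s at 60 Hz
raw = 3.0 + 0.2 * np.sin(0.5 * t) + rng.normal(0, 0.05, t.size)
smooth = smooth_pupil(raw)
```

Choosing the window length relative to the sampling rate is what determines which frequency band of pupil fluctuations survives the filter, which is the parameter-tuning question RIPA2 addresses.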
Aura Lydia Riswanto, Seieun Kim, Youngsam Ha, Hak-Seon Kim
Social media has become a dominant channel for food marketing, particularly targeting youth through visually engaging and socially embedded content. This study investigates how young adults visually engage with food advertisements on social media and how specific visual and contextual features influence purchase intention. Eye-tracking and survey data were collected from 35 participants aged 18 to 25. Participants viewed simulated Instagram posts incorporating elements such as food imagery, branding, influencer presence, and social cues. Visual attention was recorded with a Tobii Pro Spectrum eye tracker, and behavioral responses were assessed via post-task surveys. A 2 × 2 design varying influencer presence and food type showed that both features significantly increased visual attention. Marketing cues and branding also attracted substantial visual attention. Linear regression revealed that core/non-core content and influencer features were among the strongest predictors of consumer response. The findings underscore the persuasive power of human and social features in digital food advertising. These insights have implications for commercial marketing practices and for understanding how visual and social elements influence youth engagement with food content on digital platforms.
"Visual Attention to Food Content on Social Media: An Eye-Tracking Study Among Young Adults." Journal of Eye Movement Research 18(6), published 2025-11-20. doi:10.3390/jemr18060069. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641903/pdf/
Puranjay Gupta, Emily Kao, Neil Sheth, Reem Alahmadi, Michael J Heiferman
Purpose: An observational study investigated differences in gaze behaviors across expertise levels using a 3D heads-up display (HUD) integrated with eye tracking. Methods: Twenty-five ophthalmologists (PGY2-4, fellows, and attendings; n = 5 per group) performed cataract surgery on a SimulEYE model using the NGENUITY HUD. Results: Surgical proficiency increased with experience, with attendings achieving the highest scores (54.4 ± 0.89). Compared with attendings, PGY2s had longer fixation durations (p = 0.042), longer saccades (p < 0.0001), and fewer fixations on the HUD (p < 0.0001). Capsulorhexis diameter relative to capsule size increased with expertise, with fellows and attendings achieving significantly larger diameters than PGY2s (p < 0.0001). Experts maintained smaller tear angles, initiated tears closer to the main wound, and produced more circular morphologies. They rapidly alternated gaze between instruments and surrounding tissue, whereas novices (PGY2-4) fixated primarily on the instrument tip. Conclusions: Experts employ a feed-forward visual sampling strategy that allows them to perceive both instruments and surrounding tissue, minimizing inadvertent damage. Furthermore, attending surgeons maintain smaller tear angles and initiate tears closer to the forceps insertion point, which may contribute to more controlled tears. Future integration of eye-tracking technology into surgical training could enhance visual-motor strategies in novices.
"Gaze Characteristics Using a Three-Dimensional Heads-Up Display During Cataract Surgery." Journal of Eye Movement Research 18(6), published 2025-11-17. doi:10.3390/jemr18060068. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641938/pdf/
Bo Fu, Kayla Chu, Angelo Ryan Soriano, Peter Gatsby, Nicolas Guardado Guardado, Ashley Jones, Matthew Halderman
Recent breakthroughs in machine learning and artificial intelligence, together with the emergence of large datasets, have made the integration of eye tracking increasingly feasible, not only in computing but also in many other disciplines, accelerating innovation and scientific discovery. These transformative changes often depend on intelligently analyzing and interpreting gaze data, which demands a substantial technical background. Overcoming these technical barriers has remained an obstacle to the broader adoption of eye-tracking technologies in some communities. To increase accessibility and empower a broader community of researchers and practitioners to leverage eye tracking, this paper presents an open-source software platform, the Beach Environment for the Analytics of Human Gaze (BEACH-Gaze), designed to offer comprehensive descriptive and predictive analytical support. First, BEACH-Gaze provides sequential gaze analytics through window segmentation in its data processing and analysis pipeline, which can be used to simulate real-time gaze-based systems. Second, it integrates a range of established machine learning models, allowing researchers from diverse disciplines to generate gaze-enabled predictions without advanced technical expertise. The overall goal is to abstract away technical details, to aid the broader eye-tracking community in interpreting gaze data, and to leverage knowledge gained from eye gaze in the development of machine intelligence. As such, we further demonstrate three use cases that apply descriptive and predictive gaze analytics to support individuals with autism spectrum disorder during technology-assisted exercises, to dynamically tailor visual cues for an individual user via physiologically adaptive visualizations, and to predict pilots' performance in flight maneuvers to enhance aviation safety.
"BEACH-Gaze: Supporting Descriptive and Predictive Gaze Analytics in the Era of Artificial Intelligence and Advanced Data Science." Journal of Eye Movement Research 18(6), published 2025-11-12. doi:10.3390/jemr18060067. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641676/pdf/
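The window-segmentation idea behind sequential gaze analytics can be sketched as follows. The window size, overlap, and the two descriptive features are hypothetical stand-ins for illustration, not BEACH-Gaze's actual pipeline or feature set:

```python
import numpy as np

def window_features(gaze_x, gaze_y, sample_rate_hz=60, window_s=2.0, overlap=0.5):
    """Slice a gaze stream into overlapping windows and compute simple
    per-window descriptors (dispersion and scanpath length); a minimal
    stand-in for a fuller gaze feature set."""
    win = int(window_s * sample_rate_hz)
    step = max(1, int(win * (1 - overlap)))
    features = []
    for start in range(0, len(gaze_x) - win + 1, step):
        gx = gaze_x[start:start + win]
        gy = gaze_y[start:start + win]
        features.append({
            "start": start,
            "dispersion": (gx.max() - gx.min()) + (gy.max() - gy.min()),
            "path_length": np.hypot(np.diff(gx), np.diff(gy)).sum(),
        })
    return features

rng = np.random.default_rng(0)
x = rng.normal(512, 30, 600)   # 10 s of gaze samples at 60 Hz, pixel coords
y = rng.normal(384, 30, 600)
feats = window_features(x, y)
```

Feeding each window's feature vector to a trained classifier as the stream arrives is what makes such segmentation usable for simulating real-time gaze-based systems.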
Javier Barranco Garcia, Thomas Ferrazzini, Ana Coito, Dominik Brügger, Mathias Abegg
Purpose: This study evaluates a novel, non-invasive method that uses a virtual reality (VR) headset with integrated eye trackers to assess retinal function by measuring the recovery of the pupillary response after light adaptation in patients with age-related macular degeneration (AMD). Methods: In this pilot study, 14 patients with clinically confirmed AMD and 14 age-matched healthy controls were exposed to alternating bright and dark stimuli in a VR headset. The dark stimulus duration increased incrementally by 100 milliseconds per trial, repeated over 50 cycles. The pupillary response to the re-onset of brightness was recorded. Data were analyzed using a linear mixed-effects model to compare recovery patterns between groups and a convolutional neural network to evaluate diagnostic accuracy. Results: The pupillary response amplitude increased with longer dark stimuli; that is, the longer the eye was exposed to darkness, the larger the subsequent pupillary amplitude. This pupillary recovery was significantly slowed by age and by the presence of macular degeneration. Diagnostic accuracy for AMD was approximately 92%, with a sensitivity of 90% and a specificity of 70%. Conclusions: This proof-of-concept study demonstrates that consumer-grade VR headsets with integrated eye tracking can detect retinal dysfunction associated with AMD. The method offers a fast, accessible, and potentially scalable approach for retinal disease screening and monitoring. Further optimization and validation in larger cohorts are needed to confirm its clinical utility.
"Recovery of the Pupillary Response After Light Adaptation Is Slowed in Patients with Age-Related Macular Degeneration." Journal of Eye Movement Research 18(6), published 2025-11-10. doi:10.3390/jemr18060066. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641904/pdf/
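The stimulus protocol described above (dark intervals growing by 100 ms per trial over 50 cycles) lends itself to a simple schedule generator. The bright-interval duration and the starting dark duration below are assumptions, as the abstract does not specify them:

```python
def make_schedule(n_cycles=50, bright_ms=1000, dark_start_ms=100, dark_step_ms=100):
    """Build the alternating bright/dark stimulus sequence: each cycle shows
    a bright stimulus, then a dark interval that grows by dark_step_ms per
    trial. bright_ms and dark_start_ms are illustrative assumptions."""
    schedule = []
    for i in range(n_cycles):
        schedule.append(("bright", bright_ms))
        schedule.append(("dark", dark_start_ms + i * dark_step_ms))
    return schedule

trials = make_schedule()  # 50 cycles, i.e., 100 alternating stimulus events
```

The pupillary response of interest is then measured at each "bright" re-onset, as a function of the preceding dark duration.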
Mahboubeh Nedaei, Roger Säljö, Shaista Kanwal, Simon Goodchild
In mathematics, and in learning mathematics, representations (texts, formulae, and figures) play a vital role. Eye-tracking is a promising approach for studying how representations are attended to in the context of mathematics learning. The research reported here focuses on the methodological and conceptual challenges that arise when analysing students' engagement with different kinds of representations using such data. The study critically examines some of these issues through a case study of three engineering students engaging with an instructional document introducing double integrals. It reports that not only do the characteristics of different types of representations affect students' engagement with areas of interest (AOIs); methodological decisions, such as how AOIs are defined, are also consequential for interpretations of that engagement. Both technical parameters and the inherent nature of the representations themselves must therefore be considered when defining AOIs and analysing students' engagement with representations. The findings offer practical considerations for designing and analysing eye-tracking studies in which students' engagement with different representations is in focus.
"Eye-Tracking Data in the Exploration of Students' Engagement with Representations in Mathematics: Areas of Interest (AOIs) as Methodological and Conceptual Challenges." Journal of Eye Movement Research 18(6), published 2025-11-05. doi:10.3390/jemr18060065. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641983/pdf/
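The methodological point about AOI definitions can be illustrated with a toy hit-testing routine: the same fixations yield different dwell totals depending on how generously the AOIs are drawn. The rectangles, fixations, and padding parameter below are hypothetical:

```python
def dwell_by_aoi(fixations, aois, padding=0):
    """Total fixation duration per rectangular AOI.

    fixations: iterable of (x, y, duration_ms).
    aois: mapping of name -> (x0, y0, x1, y1) in the same coordinates.
    `padding` expands every AOI; varying it shows how the AOI definition
    itself changes the resulting engagement measures."""
    totals = {name: 0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 - padding <= x <= x1 + padding and y0 - padding <= y <= y1 + padding:
                totals[name] += dur
                break  # credit each fixation to at most one AOI
    return totals

# Hypothetical AOIs for a formula and a figure, plus three fixations;
# the middle fixation falls in the gap between the two rectangles.
aois = {"formula": (100, 100, 400, 200), "figure": (100, 250, 400, 500)}
fixations = [(150, 150, 300), (150, 210, 200), (300, 400, 500)]
tight = dwell_by_aoi(fixations, aois)              # gap fixation uncounted
loose = dwell_by_aoi(fixations, aois, padding=15)  # padding captures it
```

With zero padding the middle fixation is attributed to no AOI; with modest padding it counts toward the formula, changing the apparent engagement with that representation.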
Current research on multimodal AR-HUD navigation systems primarily focuses on the presentation forms of auditory and visual information, yet the effects of synchrony between auditory and visual prompts as well as prompt timing on driving behavior and attention mechanisms remain insufficiently explored. This study employed a 2 (prompt mode: synchronous vs. asynchronous) × 3 (prompt timing: -2000 m, -1000 m, -500 m) within-subject experimental design to assess the impact of multimodal prompt synchrony and prompt distance on drivers' reaction time, sustained attention, and eye movement behaviors, including average fixation duration and fixation count. Behavioral data demonstrated that both prompt mode and prompt timing significantly influenced drivers' response performance (indexed by reaction time) and attention stability, with synchronous prompts at -1000 m yielding optimal performance. Eye-tracking results further revealed that synchronous prompts significantly enhanced fixation stability and reduced visual load, indicating more efficient information integration. Therefore, prompt mode and prompt timing significantly affect drivers' perceptual processing and operational performance. Delivering synchronous auditory and visual prompts at -1000 m achieves an optimal balance between information timeliness and multimodal integration. This study recommends the following: (1) maintaining temporal consistency in multimodal prompts to facilitate perceptual integration and (2) controlling prompt distance within an intermediate range (-1000 m) to optimize the perception-action window, thereby improving the safety and efficiency of AR-HUD navigation systems.
Qi Zhu, Ziqi Liu, Youlan Li, Jung Euitay. "Effects of Multimodal AR-HUD Navigation Prompt Mode and Timing on Driving Behavior." Journal of Eye Movement Research 18(6), published 2025-11-04. doi:10.3390/jemr18060063. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641856/pdf/
Successful health promotion involves messages that are quickly captured and held long enough for eligibility, credibility, and calls to action to be encoded. This research develops an exploratory eye-tracking atlas of breast cancer screening ads viewed by midlife women and a replicable pipeline that distinguishes early capture from long-term processing. Areas of Interest are divided into design-influential categories and graphed with two complementary measures: first hit and time to first fixation for entry, and a tie-aware pairwise dominance model for dwell that produces rankings and an "early-vs.-sticky" quadrant visualization. Across creatives, pictorial and symbolic features were more likely to capture the first glance when they were perceptually dominant, while layouts containing centralized headlines or institutional cues deflected entry to the message and source. Prolonged attention was consistently focused on blocks of text, locations, and authorship badges over ornamental pictures, demarcating the functional difference between capture and processing. Subgroup differences indicated audience-sensitive shifts: older viewers and those in family households oriented earlier toward source cues, more educated audiences shifted toward copy and locations, and younger or single viewers shifted toward symbols and images. Internal diagnostics confirmed that the pairwise matrices were consistent with standard dwell summaries, validating the comparative approach. The atlas converts these patterns into design-ready heuristics: defend pieces that are both early and sticky, push sticky-but-late pieces toward probable entry channels, de-clutter early-but-not-sticky pieces to convert capture into processing, and re-think pieces that are neither. In practice, these diagnostics can be incorporated into procurement, pretesting, and briefs by agencies, educators, and campaign managers to enhance actionability without sacrificing audience segmentation.
As an exploratory investigation, this study invites replication with larger and more diverse samples, generalizations to dynamic media, and associations with downstream measures such as recall and uptake of services.
Ioanna Yfantidou, Stefanos Balaskas, Dimitra Skandali. "An Exploratory Eye-Tracking Study of Breast-Cancer Screening Ads: A Visual Analytics Framework and Descriptive Atlas." Journal of Eye Movement Research 18(6), published 2025-11-04. doi:10.3390/jemr18060064. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12642007/pdf/
Maria Mamalikou, Konstantinos Gkatzionis, Malamatenia Panagiotou
Social media has developed into a leading advertising platform, with Instagram likes serving as visual cues that may influence consumer perception and behavior. The present study used eye-tracking technology to investigate the effect of Instagram likes on visual attention, memory, and food evaluations, focusing on posts featuring traditional Greek foods. The study assessed whether a higher number of likes increased attention to the food area, enhanced memory recall of food names, and influenced subjective ratings (liking, perceived tastiness, and intention to taste). The results demonstrated no significant differences in overall viewing time, memory performance, or evaluation ratings between high-like and low-like conditions. Although not statistically significant, descriptive trends suggested that posts with more likes tended to be evaluated more positively, and that the likes Area of Interest (AOI) tended to attract more visual attention. These trends point to a possible subtle role of likes in users' engagement with food posts, influencing how such content is processed and evaluated. These findings add to the discussion of how social media likes affect information processing when individuals view food pictures online.
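The high-like versus low-like comparison described above is, at its core, a two-sample test on AOI measures such as dwell time. As a hedged sketch (the abstract does not state which test the authors used, and all dwell values below are invented), a Welch's t statistic for unequal-variance groups can be computed with the standard library alone:

```python
from statistics import mean, variance
from math import sqrt

# Illustrative dwell times (s) on the food AOI under high-like vs. low-like
# posts; the numbers are invented for this sketch.
high_like = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2]
low_like  = [1.9, 2.0, 1.7, 2.1, 1.8, 2.0]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

t = welch_t(high_like, low_like)
```

A small t value (relative to the relevant critical value) would correspond to the null result the study reports: a descriptive trend toward longer dwell under high-like posts that does not reach significance.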
{"title":"The Influence of Social Media-like Cues on Visual Attention-An Eye-Tracking Study with Food Products.","authors":"Maria Mamalikou, Konstantinos Gkatzionis, Malamatenia Panagiotou","doi":"10.3390/jemr18060062","DOIUrl":"10.3390/jemr18060062","url":null,"abstract":"<p><p>Social media has developed into a leading advertising platform, with Instagram likes serving as visual cues that may influence consumer perception and behavior. The present study investigated the effect of Instagram likes on visual attention, memory, and food evaluations focusing on traditional Greek food posts, using eye-tracking technology. The study assessed whether a higher number of likes increased attention to the food area, enhanced memory recall of food names, and influenced subjective ratings (liking, perceived tastiness, and intention to taste). The results demonstrated no significant differences in overall viewing time, memory performance, or evaluation ratings between high-like and low-like conditions. Although not statistically significant, descriptive trends suggested that posts with a higher number of likes tended to be evaluated more positively and the AOIs likes area showed a trend towards attracting more visual attention. The observed trends point to a possible subtle role of likes in user's engagement with food posts, influencing how they process and evaluate such content. 
These findings add to the discussion about the effect of social media likes on information processing when individuals observe food pictures on social media.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641725/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Successful reading comprehension depends on many factors, including text genre. Eye-tracking studies indicate that genre shapes eye movement patterns at a local level. Although the reading of expository and narrative texts by adolescents has been described in the literature, the reading of poetry by adolescents remains understudied. In this study, we used scanpath analysis to examine how genre and comprehension level influence global eye movement strategies in adolescents (N = 44). The novelty of this study thus lies in the use of scanpath analysis to measure global eye movement strategies employed by adolescents while reading narrative, expository, and poetic texts. Two distinct reading patterns emerged: a forward reading pattern (linear progression) and a regressive reading pattern (frequent lookbacks). Readers tended to use regressive patterns more often with expository and poetic texts, while forward patterns were more common with a narrative text. Comprehension level also played a significant role, with higher-comprehension readers relying more on regressive patterns for expository and poetic texts. The results of this experiment suggest that scanpaths effectively capture genre-driven differences in reading strategies, underscoring how genre expectations may shape visual processing during reading.
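The forward-versus-regressive distinction can be operationalized crudely as the proportion of saccades that land on an earlier word than the previous fixation. This is only a toy sketch of that idea, not the scanpath-analysis method used in the study; the threshold and fixation sequences below are hypothetical.

```python
# Fixations are represented as word indices; a regression is any move
# to an earlier word than the previous fixation.
def regression_rate(scanpath):
    """Fraction of saccades that move backward in the word sequence."""
    moves = list(zip(scanpath, scanpath[1:]))
    return sum(b < a for a, b in moves) / len(moves)

def classify(scanpath, threshold=0.25):
    """Label a scanpath 'regressive' if its regression rate reaches
    an (illustrative) threshold, else 'forward'."""
    return "regressive" if regression_rate(scanpath) >= threshold else "forward"

linear   = [0, 1, 2, 3, 4, 5, 6, 7]     # steady left-to-right progression
lookback = [0, 1, 2, 1, 3, 2, 4, 3, 5]  # frequent returns to earlier words
```

Under this toy rule, `linear` would be classified as a forward pattern and `lookback` as a regressive one, echoing the two global strategies the abstract describes for narrative versus expository and poetic texts.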
{"title":"The Influence of Text Genre on Eye Movement Patterns During Reading.","authors":"Maksim Markevich, Anastasiia Streltsova","doi":"10.3390/jemr18060060","DOIUrl":"10.3390/jemr18060060","url":null,"abstract":"<p><p>Successful reading comprehension depends on many factors, including text genre. Eye-tracking studies indicate that genre shapes eye movement patterns at a local level. Although the reading of expository and narrative texts by adolescents has been described in the literature, the reading of poetry by adolescents remains understudied. In this study, we used scanpath analysis to examine how genre and comprehension level influence global eye movement strategies in adolescents (N = 44). Thus, the novelty of this study lies in the use of scanpath analysis to measure global eye movement strategies employed by adolescents while reading narrative, expository, and poetic texts. Two distinct reading patterns emerged: a forward reading pattern (linear progression) and a regressive reading pattern (frequent lookbacks). Readers tended to use regressive patterns more often with expository and poetic texts, while forward patterns were more common with a narrative text. Comprehension level also played a significant role, with readers with a higher level of comprehension relying more on regressive patterns for expository and poetic texts. 
The results of this experiment suggest that scanpaths effectively capture genre-driven differences in reading strategies, underscoring how genre expectations may shape visual processing during reading.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641876/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}