Mahboubeh Nedaei, Roger Säljö, Shaista Kanwal, Simon Goodchild
In mathematics, and in learning mathematics, representations (texts, formulae, and figures) play a vital role. Eye-tracking is a promising approach for studying how representations are attended to in the context of mathematics learning. The focus of the research reported here is on the methodological and conceptual challenges that arise when analysing students' engagement with different kinds of representations using such data. The study critically examines some of these issues through a case study of three engineering students engaging with an instructional document introducing double integrals. The study reports that not only do the characteristics of different types of representations affect students' engagement with areas of interest (AOIs), but methodological decisions, such as how AOIs are defined, are also consequential for interpretations of that engagement. This shows that both technical parameters and the inherent nature of the representations themselves must be considered when defining AOIs and analysing students' engagement with representations. The findings offer practical considerations for designing and analysing eye-tracking studies when students' engagement with different representations is in focus.
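The methodological point about AOI definition can be illustrated with a toy sketch: the same fixation data yields different engagement counts depending on how tightly the AOI rectangle is drawn. All coordinates, the AOI, and the 10 px margin below are invented for illustration; this is not the authors' analysis pipeline.

```python
# Sketch: how the definition of an AOI changes "engagement" counts.
# Fixation coordinates and the AOI rectangle are hypothetical.

def in_aoi(fix, rect):
    """Return True if fixation (x, y) falls inside rect = (x0, y0, x1, y1)."""
    x, y = fix
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def pad(rect, margin):
    """Grow an AOI rectangle by `margin` pixels on every side."""
    x0, y0, x1, y1 = rect
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)

def fixation_count(fixations, rect):
    return sum(in_aoi(f, rect) for f in fixations)

# a formula AOI drawn tightly around the symbols
formula_aoi = (100, 100, 300, 140)
fixations = [(95, 120), (150, 125), (305, 138), (400, 300)]

tight = fixation_count(fixations, formula_aoi)           # margin = 0 px
loose = fixation_count(fixations, pad(formula_aoi, 10))  # margin = 10 px
print(tight, loose)  # prints: 1 3 -- same data, different engagement counts
```

Fixations that land just outside a tight boundary (common for formulae, whose visual extent is fuzzy) flip between "engaged" and "not engaged" with the margin choice, which is exactly the kind of consequential technical parameter the study discusses.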
{"title":"Eye-Tracking Data in the Exploration of Students' Engagement with Representations in Mathematics: Areas of Interest (AOIs) as Methodological and Conceptual Challenges.","authors":"Mahboubeh Nedaei, Roger Säljö, Shaista Kanwal, Simon Goodchild","doi":"10.3390/jemr18060065","DOIUrl":"10.3390/jemr18060065","url":null,"abstract":"<p><p>In mathematics, and in learning mathematics, representations (texts, formulae, and figures) play a vital role. Eye-tracking is a promising approach for studying how representations are attended to in the context of mathematics learning. The focus of the research reported here is on the methodological and conceptual challenges that arise when analysing students' engagement with different kinds of representations using such data. The study critically examines some of these issues through a case study of three engineering students engaging with an instructional document introducing double integrals. This study reports that not only the characteristics of different types of representations affect students' engagement with areas of interests (AOIs), but also methodological decisions, such as how AOIs are defined, will be consequential for interpretations of that engagement. This shows that both technical parameters and the inherent nature of the representations themselves must be considered when defining AOIs and analysing students' engagement with representations. 
The findings offer practical considerations for designing and analysing eye-tracking studies when students' engagement with different representations is in focus.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641983/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current research on multimodal AR-HUD navigation systems primarily focuses on the presentation forms of auditory and visual information, yet the effects of synchrony between auditory and visual prompts as well as prompt timing on driving behavior and attention mechanisms remain insufficiently explored. This study employed a 2 (prompt mode: synchronous vs. asynchronous) × 3 (prompt timing: -2000 m, -1000 m, -500 m) within-subject experimental design to assess the impact of multimodal prompt synchrony and prompt distance on drivers' reaction time, sustained attention, and eye movement behaviors, including average fixation duration and fixation count. Behavioral data demonstrated that both prompt mode and prompt timing significantly influenced drivers' response performance (indexed by reaction time) and attention stability, with synchronous prompts at -1000 m yielding optimal performance. Eye-tracking results further revealed that synchronous prompts significantly enhanced fixation stability and reduced visual load, indicating more efficient information integration. Therefore, prompt mode and prompt timing significantly affect drivers' perceptual processing and operational performance. Delivering synchronous auditory and visual prompts at -1000 m achieves an optimal balance between information timeliness and multimodal integration. This study recommends the following: (1) maintaining temporal consistency in multimodal prompts to facilitate perceptual integration and (2) controlling prompt distance within an intermediate range (-1000 m) to optimize the perception-action window, thereby improving the safety and efficiency of AR-HUD navigation systems.
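As a rough illustration of the 2 × 3 within-subject aggregation (not the authors' statistical analysis, which involved significance testing), a sketch that pools invented reaction times per (prompt mode, prompt distance) cell:

```python
# Sketch: mean reaction time per (prompt mode, prompt distance) cell of the
# 2x3 design. All trial records below are invented for illustration.
from collections import defaultdict

trials = [
    # (mode, distance_m, reaction_time_s)
    ("sync",  -2000, 1.42), ("sync",  -1000, 1.10), ("sync",  -500, 1.35),
    ("async", -2000, 1.55), ("async", -1000, 1.30), ("async", -500, 1.60),
    ("sync",  -1000, 1.06), ("async", -1000, 1.26),
]

cells = defaultdict(list)
for mode, dist, rt in trials:
    cells[(mode, dist)].append(rt)

# cell means; the best (fastest) cell mirrors the reported sync/-1000 m result
means = {cell: sum(v) / len(v) for cell, v in cells.items()}
best = min(means, key=means.get)
print(best)  # prints: ('sync', -1000)
```

With these invented numbers the synchronous prompt at -1000 m comes out fastest, matching the pattern the abstract reports; a real analysis would additionally test the mode × timing interaction.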
{"title":"Effects of Multimodal AR-HUD Navigation Prompt Mode and Timing on Driving Behavior.","authors":"Qi Zhu, Ziqi Liu, Youlan Li, Jung Euitay","doi":"10.3390/jemr18060063","DOIUrl":"10.3390/jemr18060063","url":null,"abstract":"<p><p>Current research on multimodal AR-HUD navigation systems primarily focuses on the presentation forms of auditory and visual information, yet the effects of synchrony between auditory and visual prompts as well as prompt timing on driving behavior and attention mechanisms remain insufficiently explored. This study employed a 2 (prompt mode: synchronous vs. asynchronous) × 3 (prompt timing: -2000 m, -1000 m, -500 m) within-subject experimental design to assess the impact of multimodal prompt synchrony and prompt distance on drivers' reaction time, sustained attention, and eye movement behaviors, including average fixation duration and fixation count. Behavioral data demonstrated that both prompt mode and prompt timing significantly influenced drivers' response performance (indexed by reaction time) and attention stability, with synchronous prompts at -1000 m yielding optimal performance. Eye-tracking results further revealed that synchronous prompts significantly enhanced fixation stability and reduced visual load, indicating more efficient information integration. Therefore, prompt mode and prompt timing significantly affect drivers' perceptual processing and operational performance. Delivering synchronous auditory and visual prompts at -1000 m achieves an optimal balance between information timeliness and multimodal integration. 
This study recommends the following: (1) maintaining temporal consistency in multimodal prompts to facilitate perceptual integration and (2) controlling prompt distance within an intermediate range (-1000 m) to optimize the perception-action window, thereby improving the safety and efficiency of AR-HUD navigation systems.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641856/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Successful health promotion depends on messages that capture attention quickly and hold it long enough for eligibility, credibility, and calls to action to be encoded. This research develops an exploratory eye-tracking atlas of breast cancer screening ads viewed by midlife women and a replicable pipeline that distinguishes early capture from long-term processing. Areas of Interest are divided into design-influential categories and analysed with two complementary measures: first hit and time to first fixation for entry, and a tie-aware pairwise dominance model for dwell that produces rankings and an "early-vs.-sticky" quadrant visualization. Across creatives, pictorial and symbolic features were more likely to capture the first glance when they were perceptually dominant, while layouts containing centralized headlines or institutional cues deflected entry to the message and source. Prolonged attention was consistently focused on blocks of text, locations, and authoring badges over ornamental pictures, demarcating the functional difference between capture and processing. Subgroup differences indicated audience-sensitive shifts: older viewers and family households oriented earlier toward source cues, more educated audiences shifted toward copy and locations, and younger or single viewers shifted toward symbols and images. Internal diagnostics confirmed that the pairwise matrices were consistent with standard dwell summaries, supporting the comparative approach. The atlas converts these patterns into design-ready heuristics: defend pieces that are both early and sticky, push sticky-but-late pieces toward probable entry channels, de-clutter early-but-not-sticky pieces to convert capture into processing, and re-think pieces that are neither. In practice, the diagnostics can be incorporated into procurement, pretesting, and briefs by agencies, educators, and campaign managers to enhance actionability without sacrificing audience segmentation.
As an exploratory investigation, this study invites replication with larger and more diverse samples, extension to dynamic media, and associations with downstream measures such as recall and uptake of services.
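A tie-aware pairwise dominance model of the kind described can be sketched as follows. The AOI categories and dwell values are invented, and the scoring rule (win = 1, tie = 0.5) is an assumption about what "tie-aware" means here, not the paper's exact formulation.

```python
# Sketch: tie-aware pairwise dominance for dwell time. AOI A "dominates"
# AOI B for a viewer if A received more dwell; ties are split 0.5/0.5.
# AOI names and per-viewer dwell times (seconds) are hypothetical.

viewers = [
    {"headline": 2.0, "image": 3.5, "badge": 1.0},
    {"headline": 2.5, "image": 2.5, "badge": 3.0},
    {"headline": 4.0, "image": 1.5, "badge": 1.5},
]

def dominance(viewers, a, b):
    """Tie-aware share of viewers for whom AOI `a` out-dwells AOI `b`."""
    score = 0.0
    for v in viewers:
        if v[a] > v[b]:
            score += 1.0
        elif v[a] == v[b]:
            score += 0.5
    return score / len(viewers)

aois = sorted(viewers[0])
# rank AOIs by their total dominance over all rivals
rank = sorted(aois, key=lambda a: -sum(dominance(viewers, a, b)
                                       for b in aois if b != a))
print(rank)  # prints: ['headline', 'image', 'badge']
```

The resulting ranking can then feed the early-vs.-sticky quadrant: entry measures (first hit, time to first fixation) on one axis, dwell dominance on the other.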
{"title":"An Exploratory Eye-Tracking Study of Breast-Cancer Screening Ads: A Visual Analytics Framework and Descriptive Atlas.","authors":"Ioanna Yfantidou, Stefanos Balaskas, Dimitra Skandali","doi":"10.3390/jemr18060064","DOIUrl":"10.3390/jemr18060064","url":null,"abstract":"<p><p>Successful health promotion involves messages that are quickly captured and held long enough to permit eligibility, credibility, and calls to action to be coded. This research develops an exploratory eye-tracking atlas of breast cancer screening ads viewed by midlife women and a replicable pipeline that distinguishes early capture from long-term processing. Areas of Interest are divided into design-influential categories and graphed with two complementary measures: first hit and time to first fixation for entry and a tie-aware pairwise dominance model for dwell that produces rankings and an \"early-vs.-sticky\" quadrant visualization. Across creatives, pictorial and symbolic features were more likely to capture the first glance when they were perceptually dominant, while layouts containing centralized headlines or institutional cues deflected entry to the message and source. Prolonged attention was consistently focused on blocks of text, locations, and badges of authoring over ornamental pictures, demarcating the functional difference between capture and processing. Subgroup differences indicated audience-sensitive shifts: Older and household families shifted earlier toward source cues, more educated audiences shifted toward copy and locations, and younger or single viewers shifted toward symbols and images. Internal diagnostics verified that pairwise matrices were consistent with standard dwell summaries, verifying the comparative approach. 
The atlas converts the patterns into design-ready heuristics: defend sticky and early pieces, encourage sticky but late pieces by pushing them toward probable entry channels, de-clutter early but not sticky pieces to convert to processing, and re-think pieces that are neither. In practice, the diagnostics can be incorporated into procurement, pretesting, and briefs by agencies, educators, and campaign managers in order to enhance actionability without sacrificing segmentation of audiences. As an exploratory investigation, this study invites replication with larger and more diverse samples, generalizations to dynamic media, and associations with downstream measures such as recall and uptake of services.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12642007/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maria Mamalikou, Konstantinos Gkatzionis, Malamatenia Panagiotou
Social media has developed into a leading advertising platform, with Instagram likes serving as visual cues that may influence consumer perception and behavior. The present study investigated the effect of Instagram likes on visual attention, memory, and food evaluations, focusing on traditional Greek food posts, using eye-tracking technology. The study assessed whether a higher number of likes increased attention to the food area, enhanced memory recall of food names, and influenced subjective ratings (liking, perceived tastiness, and intention to taste). The results demonstrated no significant differences in overall viewing time, memory performance, or evaluation ratings between high-like and low-like conditions. Although not statistically significant, descriptive trends suggested that posts with a higher number of likes tended to be evaluated more positively, and the likes AOI showed a trend towards attracting more visual attention. The observed trends point to a possible subtle role of likes in users' engagement with food posts, influencing how they process and evaluate such content. These findings add to the discussion about the effect of social media likes on information processing when individuals observe food pictures on social media.
{"title":"The Influence of Social Media-like Cues on Visual Attention-An Eye-Tracking Study with Food Products.","authors":"Maria Mamalikou, Konstantinos Gkatzionis, Malamatenia Panagiotou","doi":"10.3390/jemr18060062","DOIUrl":"10.3390/jemr18060062","url":null,"abstract":"<p><p>Social media has developed into a leading advertising platform, with Instagram likes serving as visual cues that may influence consumer perception and behavior. The present study investigated the effect of Instagram likes on visual attention, memory, and food evaluations focusing on traditional Greek food posts, using eye-tracking technology. The study assessed whether a higher number of likes increased attention to the food area, enhanced memory recall of food names, and influenced subjective ratings (liking, perceived tastiness, and intention to taste). The results demonstrated no significant differences in overall viewing time, memory performance, or evaluation ratings between high-like and low-like conditions. Although not statistically significant, descriptive trends suggested that posts with a higher number of likes tended to be evaluated more positively and the AOIs likes area showed a trend towards attracting more visual attention. The observed trends point to a possible subtle role of likes in user's engagement with food posts, influencing how they process and evaluate such content. 
These findings add to the discussion about the effect of social media likes on information processing when individuals observe food pictures on social media.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641725/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Successful reading comprehension depends on many factors, including text genre. Eye-tracking studies indicate that genre shapes eye movement patterns at a local level. Although the reading of expository and narrative texts by adolescents has been described in the literature, the reading of poetry by adolescents remains understudied. In this study, we used scanpath analysis to examine how genre and comprehension level influence global eye movement strategies in adolescents (N = 44). Thus, the novelty of this study lies in the use of scanpath analysis to measure global eye movement strategies employed by adolescents while reading narrative, expository, and poetic texts. Two distinct reading patterns emerged: a forward reading pattern (linear progression) and a regressive reading pattern (frequent lookbacks). Readers tended to use regressive patterns more often with expository and poetic texts, while forward patterns were more common with a narrative text. Comprehension level also played a significant role, with readers with a higher level of comprehension relying more on regressive patterns for expository and poetic texts. The results of this experiment suggest that scanpaths effectively capture genre-driven differences in reading strategies, underscoring how genre expectations may shape visual processing during reading.
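One simple way to operationalize the forward vs. regressive distinction is the share of saccades that land on an earlier word than the current one. The 0.25 threshold and the word-index sequences below are invented for illustration; the study's scanpath analysis is more sophisticated than this sketch.

```python
# Sketch: classify a scanpath as "forward" or "regressive" from the
# sequence of fixated word indices. Sequences and threshold are invented.

def regression_rate(word_indices):
    """Share of saccades that land on an earlier word than the current one."""
    moves = list(zip(word_indices, word_indices[1:]))
    if not moves:
        return 0.0
    return sum(b < a for a, b in moves) / len(moves)

def classify(word_indices, threshold=0.25):
    return "regressive" if regression_rate(word_indices) >= threshold else "forward"

narrative = [0, 1, 2, 3, 4, 5, 6, 7]   # linear progression
poetic    = [0, 1, 2, 1, 3, 4, 2, 5]   # frequent lookbacks
print(classify(narrative), classify(poetic))  # prints: forward regressive
```

Full scanpath methods typically compare whole sequences (e.g., via edit distance or clustering) rather than a single rate, but the rate captures the same forward/lookback contrast the abstract describes.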
{"title":"The Influence of Text Genre on Eye Movement Patterns During Reading.","authors":"Maksim Markevich, Anastasiia Streltsova","doi":"10.3390/jemr18060060","DOIUrl":"10.3390/jemr18060060","url":null,"abstract":"<p><p>Successful reading comprehension depends on many factors, including text genre. Eye-tracking studies indicate that genre shapes eye movement patterns at a local level. Although the reading of expository and narrative texts by adolescents has been described in the literature, the reading of poetry by adolescents remains understudied. In this study, we used scanpath analysis to examine how genre and comprehension level influence global eye movement strategies in adolescents (N = 44). Thus, the novelty of this study lies in the use of scanpath analysis to measure global eye movement strategies employed by adolescents while reading narrative, expository, and poetic texts. Two distinct reading patterns emerged: a forward reading pattern (linear progression) and a regressive reading pattern (frequent lookbacks). Readers tended to use regressive patterns more often with expository and poetic texts, while forward patterns were more common with a narrative text. Comprehension level also played a significant role, with readers with a higher level of comprehension relying more on regressive patterns for expository and poetic texts. 
The results of this experiment suggest that scanpaths effectively capture genre-driven differences in reading strategies, underscoring how genre expectations may shape visual processing during reading.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641876/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Veslava Osińska, Weronika Kortas, Adam Szalach, Marc Welter
Recently, the photorealism of generated images has improved noticeably due to the development of AI algorithms. These are high-resolution images of human faces and bodies, cats and dogs, vehicles, and other categories of objects that the untrained eye cannot distinguish from authentic photographs. The study assessed how people perceive 12 pictures generated by AI vs. 12 real photographs. Six main categories of stimuli were selected: architecture, art, faces, cars, landscapes, and pets. The visual perception of the selected images was studied by means of eye tracking, analysing gaze patterns and timing characteristics and comparing them with respect to the respondents' gender and knowledge of AI graphics. After the experiment, the study participants analysed the pictures again in order to describe the reasons for their choices. The results show that AI images of pets and real photographs of architecture were the easiest to identify. The largest differences in visual perception are between men and women, as well as between those experienced in digital graphics (including AI images) and the rest. Based on the analysis, several recommendations are suggested for AI developers and end-users.
{"title":"AI Images vs. Real Photographs: Investigating Visual Recognition and Perception.","authors":"Veslava Osińska, Weronika Kortas, Adam Szalach, Marc Welter","doi":"10.3390/jemr18060061","DOIUrl":"10.3390/jemr18060061","url":null,"abstract":"<p><p>Recently, the photorealism of generated images has improved noticeably due to the development of AI algorithms. These are high-resolution images of human faces and bodies, cats and dogs, vehicles, and other categories of objects that the untrained eye cannot distinguish from authentic photographs. The study assessed how people perceive 12 pictures generated by AI vs. 12 real photographs. Six main categories of stimuli were selected: architecture, art, faces, cars, landscapes, and pets. The visual perception of selected images was studied by means of eye tracking and gaze patterns as well as time characteristics, compared with consideration to the respondent groups' gender and knowledge of AI graphics. After the experiment, the study participants analysed the pictures again in order to describe the reasons for their choice. The results show that AI images of pets and real photographs of architecture were the easiest to identify. The largest differences in visual perception are between men and women as well as between those experienced in digital graphics (including AI images) and the rest. 
Based on the analysis, several recommendations are suggested for AI developers and end-users.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 6","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641808/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145587499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study examined how drivers' eye fixations change before, during, and after recognizing road markings, and how these changes relate to driving speed, visual complexity, cognitive functions, and demographics. Twenty licensed drivers viewed on-board movies showing digit or character road markings while their eye movements were tracked. Fixation positions and dispersions were analyzed. Results showed that, regardless of marking type, fixations were horizontally dispersed before and after recognition but became vertically concentrated during recognition, with fixation points shifting higher (p < 0.001) and horizontal dispersion decreasing (p = 0.01). During the recognition period, fixations moved upward and narrowed horizontally toward the final third (p = 0.034), suggesting increased focus. Longer fixations were linked to slower speeds for digits (p = 0.029) and more characters for character markings (p < 0.001). No significant correlations were found with cognitive functions or demographics. These findings suggest that drivers first scan broadly, then concentrate on markings as they approach. For optimal recognition, simple or essential information should be placed centrally or lower, while detailed content should appear higher to align with natural gaze patterns. In high-speed environments, markings should prioritize clarity and brevity in central positions to ensure safe and rapid recognition.
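The dispersion analysis described above can be approximated as the standard deviation of fixation coordinates within each phase. The coordinates below are invented, and the study may have used a different dispersion metric; this sketch only reproduces the qualitative pattern of gaze narrowing during recognition.

```python
# Toy sketch: horizontal/vertical fixation dispersion (population standard
# deviation of gaze coordinates) per recognition phase. Data are invented.
from statistics import pstdev

phases = {
    "before": [(100, 400), (300, 410), (520, 395)],
    "during": [(310, 258), (325, 255), (318, 252)],
    "after":  [(120, 405), (340, 400), (560, 415)],
}

dispersion = {name: (pstdev([x for x, _ in pts]), pstdev([y for _, y in pts]))
              for name, pts in phases.items()}

# with these invented numbers, gaze narrows during recognition
print(dispersion["during"][0] < dispersion["before"][0])  # horizontal
```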
{"title":"Sequential Fixation Behavior in Road Marking Recognition: Implications for Design.","authors":"Takaya Maeyama, Hiroki Okada, Daisuke Sawamura","doi":"10.3390/jemr18050059","DOIUrl":"10.3390/jemr18050059","url":null,"abstract":"<p><p>This study examined how drivers' eye fixations change before, during, and after recognizing road markings, and how these changes relate to driving speed, visual complexity, cognitive functions, and demographics. 20 licensed drivers viewed on-board movies showing digit or character road markings while their eye movements were tracked. Fixation positions and dispersions were analyzed. Results showed that, regardless of marking type, fixations were horizontally dispersed before and after recognition but became vertically concentrated during recognition, with fixation points shifting higher (<i>p</i> < 0.001) and horizontal dispersion decreasing (<i>p</i> = 0.01). During the recognition period, fixations moved upward and narrowed horizontally toward the final third (<i>p</i> = 0.034), suggesting increased focus. Longer fixations were linked to slower speeds for digits (<i>p</i> = 0.029) and more characters for character markings (<i>p</i> < 0.001). No significant correlations were found with cognitive functions or demographics. These findings suggest that drivers first scan broadly, then concentrate on markings as they approach. For optimal recognition, simple or essential information should be placed centrally or lower, while detailed content should appear higher to align with natural gaze patterns. 
In high-speed environments, markings should prioritize clarity and brevity in central positions to ensure safe and rapid recognition.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 5","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565692/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145390276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marina Norkina, Daria Chernova, Svetlana Alexeeva, Maria Harchevnik
Oculomotor reading behavior is influenced by both universal factors, like the "big three" of word length, frequency, and contextual predictability, and language-specific factors, such as script and grammar. The aim of this study was to examine the influence of the "big three" factors on L2 reading, focusing on a typologically distant L1/L2 pair with dramatic differences in script and grammar. A total of 41 native Chinese-speaking learners of Russian (levels A2-B2) and 40 native Russian speakers read a corpus of 90 Russian sentences for comprehension. Their eye movements were recorded with EyeLink 1000+. We analyzed both early (gaze duration and skipping rate) and late (regression rate and rereading time) eye movement measures. As expected, the "big three" effects influenced oculomotor behavior in both L1 and L2 readers, being more pronounced for L2, but substantial differences were also revealed. Word frequency in L1 reading primarily influenced early processing stages, whereas in L2 reading it remained significant in later stages as well. Predictability had an immediate effect on skipping rates in L1 reading, while L2 readers only exhibited it in late measures. Word length was the only factor that interacted with L2 language exposure, demonstrating adjustment to the alphabetic script and polymorphemic word structure. Our findings provide new insights into the processing challenges of L2 readers with typologically distant L1 backgrounds.
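Two of the measures named above, first-pass gaze duration and skipping rate, can be computed from a fixation record as sketched below. The fixation data are invented and the first-pass logic is simplified (it ignores re-entries before the first pass ends), so this is illustrative only.

```python
# Sketch: computing gaze duration and skipping rate from a toy fixation
# record of (word index, duration in ms). Data are invented.

def gaze_duration(fixations, word):
    """Sum of consecutive first-pass fixation durations on `word`."""
    total, in_word, visited = 0, False, False
    for w, dur in fixations:
        if w == word and not visited:
            total += dur
            in_word = True
        elif in_word:           # left the word: first pass is over
            visited = True
            in_word = False
    return total

def skipping_rate(fixations, n_words):
    """Share of the sentence's words that were never fixated at all."""
    fixated = {w for w, _ in fixations}
    return 1 - len(fixated) / n_words

# word 1 gets two first-pass fixations; word 4 is skipped entirely
fixations = [(0, 210), (1, 180), (1, 160), (3, 250), (2, 300)]
print(gaze_duration(fixations, 1), skipping_rate(fixations, 5))  # prints: 340 0.2
```

Late measures such as regression rate and rereading time would extend this by tracking revisits after the first pass, analogous to the regression logic used in scanpath analyses.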
{"title":"Oculomotor Behavior of L2 Readers with Typologically Distant L1 Background: The \"Big Three\" Effects of Word Length, Frequency, and Predictability.","authors":"Marina Norkina, Daria Chernova, Svetlana Alexeeva, Maria Harchevnik","doi":"10.3390/jemr18050058","DOIUrl":"10.3390/jemr18050058","url":null,"abstract":"<p><p>Oculomotor reading behavior is influenced by both universal factors, like the \"big three\" of word length, frequency, and contextual predictability, and language-specific factors, such as script and grammar. The aim of this study was to examine the influence of the \"big three\" factors on L2 reading focusing on a typologically distant L1/L2 pair with dramatic differences in script and grammar. A total of 41 native Chinese-speaking learners of Russian (levels A2-B2) and 40 native Russian speakers read a corpus of 90 Russian sentences for comprehension. Their eye movements were recorded with EyeLink 1000+. We analyzed both early (gaze duration and skipping rate) and late (regression rate and rereading time) eye movement measures. As expected, the \"big three\" effects influenced oculomotor behavior in both L1 and L2 readers, being more pronounced for L2, but substantial differences were also revealed. Word frequency in L1 reading primarily influenced early processing stages, whereas in L2 reading it remained significant in later stages as well. Predictability had an immediate effect on skipping rates in L1 reading, while L2 readers only exhibited it in late measures. Word length was the only factor that interacted with L2 language exposure which demonstrated adjustment to alphabetic script and polymorphemic word structure. 
Our findings provide new insights into the processing challenges of L2 readers with typologically distant L1 backgrounds.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 5","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565054/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145390331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study investigated the effectiveness of visual strategies in guiding gaze behavior and attention on Yi graphic symbols using eye-tracking. Four strategies, color brightness, layering, line guidance, and size variation, were tested with 34 Thai participants unfamiliar with Yi symbol meanings. Gaze sequence analysis, using Levenshtein distance and similarity ratio, showed that bright colors, layered arrangements, and connected lines enhanced alignment with intended gaze sequences, while size variation had minimal effect. Bright red symbols and lines captured faster initial fixations (Time to First Fixation, TTFF) on key Areas of Interest (AOIs), unlike layering and size. Lines reduced dwell time at sequence starts, promoting efficient progression, while larger symbols sustained longer attention, though inconsistently. Color and layering showed no consistent dwell time effects. These findings inform Yi graphic symbol design for effective cross-cultural visual communication.
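The Levenshtein-based comparison of gaze sequences can be sketched as follows. The AOI labels and sequences are invented, and the similarity ratio shown (1 minus distance over the longer length) is one common normalization, which may differ from the one used in the study.

```python
# Sketch: comparing an observed gaze sequence over AOIs with the intended
# sequence via Levenshtein distance and a similarity ratio.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """1.0 for identical sequences, 0.0 for completely different ones."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

intended = ["A", "B", "C", "D"]   # designer's intended AOI order
observed = ["A", "C", "B", "D"]   # a participant's actual gaze order
print(levenshtein(intended, observed), similarity(intended, observed))  # prints: 2 0.5
```

A higher similarity ratio then indicates that a visual strategy (bright color, layering, connected lines) succeeded in steering viewers through the intended AOI sequence.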
{"title":"Visual Strategies for Guiding Gaze Sequences and Attention in Yi Symbols: Eye-Tracking Insights.","authors":"Bo Yuan, Sakol Teeravarunyou","doi":"10.3390/jemr18050057","DOIUrl":"10.3390/jemr18050057","url":null,"abstract":"<p><p>This study investigated the effectiveness of visual strategies in guiding gaze behavior and attention on Yi graphic symbols using eye-tracking. Four strategies, color brightness, layering, line guidance, and size variation, were tested with 34 Thai participants unfamiliar with Yi symbol meanings. Gaze sequence analysis, using Levenshtein distance and similarity ratio, showed that bright colors, layered arrangements, and connected lines enhanced alignment with intended gaze sequences, while size variation had minimal effect. Bright red symbols and lines captured faster initial fixations (Time to First Fixation, TTFF) on key Areas of Interest (AOIs), unlike layering and size. Lines reduced dwell time at sequence starts, promoting efficient progression, while larger symbols sustained longer attention, though inconsistently. Color and layering showed no consistent dwell time effects. These findings inform Yi graphic symbol design for effective cross-cultural visual communication.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 5","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565636/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145390287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mark Mervic, Ema Grašič, Polona Jaki Mekjavić, Nataša Vidovič Valentinčič, Ana Fakin
Real-world navigation depends on coordinated head-eye behaviour that standard tests of visual function miss. We investigated how visual impairment affects traffic navigation, whether behaviour differs by visual impairment type, and whether this functional grouping better explains performance than WHO categorisation. Using a virtual reality (VR) headset with integrated head and eye tracking, we evaluated detection of moving cars and safe road-crossing opportunities in 40 patients with central, peripheral, or combined visual impairment and 19 controls. Only two patients with a combination of very low visual acuity and severely constricted visual fields failed both visual tasks. Overall, patients identified safe-crossing intervals 1.3-1.5 s later than controls (p ≤ 0.01). Head-eye movement profiles diverged by visual impairment: patients with central impairment showed shorter, more frequent saccades (p < 0.05); patients with peripheral impairment showed exploratory behaviour similar to controls; while patients with combined impairment executed fewer microsaccades (p < 0.05), reduced total macrosaccade amplitude (p < 0.05), and fewer head turns (p < 0.05). Classification by impairment type explained behaviour better than WHO categorisation. These findings challenge acuity/field-based classifications and support integrating functional metrics into risk stratification and targeted rehabilitation, with VR providing a safe, scalable assessment tool.
{"title":"Head and Eye Movements During Pedestrian Crossing in Patients with Visual Impairment: A Virtual Reality Eye Tracking Study.","authors":"Mark Mervic, Ema Grašič, Polona Jaki Mekjavić, Nataša Vidovič Valentinčič, Ana Fakin","doi":"10.3390/jemr18050055","DOIUrl":"10.3390/jemr18050055","url":null,"abstract":"<p><p>Real-world navigation depends on coordinated head-eye behaviour that standard tests of visual function miss. We investigated how visual impairment affects traffic navigation, whether behaviour differs by visual impairment type, and whether this functional grouping better explains performance than WHO categorisation. Using a virtual reality (VR) headset with integrated head and eye tracking, we evaluated detection of moving cars and safe road-crossing opportunities in 40 patients with central, peripheral, or combined visual impairment and 19 controls. Only two patients with a combination of very low visual acuity and severely constricted visual fields failed both visual tasks. Overall, patients identified safe-crossing intervals 1.3-1.5 s later than controls (<i>p</i> ≤ 0.01). Head-eye movement profiles diverged by visual impairment: patients with central impairment showed shorter, more frequent saccades (<i>p</i> < 0.05); patients with peripheral impairment showed exploratory behaviour similar to controls; while patients with combined impairment executed fewer microsaccades (<i>p</i> < 0.05), reduced total macrosaccade amplitude (<i>p</i> < 0.05), and fewer head turns (<i>p</i> < 0.05). Classification by impairment type explained behaviour better than WHO categorisation. 
These findings challenge acuity/field-based classifications and support integrating functional metrics into risk stratification and targeted rehabilitation, with VR providing a safe, scalable assessment tool.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 5","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565098/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145390295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}