Veslava Osińska, Weronika Kortas, Adam Szalach, Marc Welter
Recently, the photorealism of generated images has improved noticeably due to the development of AI algorithms. Algorithms now produce high-resolution images of human faces and bodies, cats and dogs, vehicles, and other categories of objects that the untrained eye cannot distinguish from authentic photographs. The study assessed how people perceive 12 pictures generated by AI vs. 12 real photographs. Six main categories of stimuli were selected: architecture, art, faces, cars, landscapes, and pets. Visual perception of the selected images was studied by means of eye tracking, analysing gaze patterns and timing characteristics, with comparisons across respondent groups by gender and knowledge of AI graphics. After the experiment, the study participants analysed the pictures again in order to explain the reasons for their choices. The results show that AI images of pets and real photographs of architecture were the easiest to identify. The largest differences in visual perception were found between men and women, and between respondents experienced in digital graphics (including AI imagery) and the rest. Based on the analysis, several recommendations are suggested for AI developers and end-users.
"AI Images vs. Real Photographs: Investigating Visual Recognition and Perception." Journal of Eye Movement Research, 18(6), 2025-11-03. doi:10.3390/jemr18060061. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12641808/pdf/
This study examined how drivers' eye fixations change before, during, and after recognizing road markings, and how these changes relate to driving speed, visual complexity, cognitive functions, and demographics. Twenty licensed drivers viewed on-board movies showing digit or character road markings while their eye movements were tracked. Fixation positions and dispersions were analyzed. Results showed that, regardless of marking type, fixations were horizontally dispersed before and after recognition but became vertically concentrated during recognition, with fixation points shifting higher (p < 0.001) and horizontal dispersion decreasing (p = 0.01). During the recognition period, fixations moved upward and narrowed horizontally toward the final third (p = 0.034), suggesting increased focus. Longer fixations were linked to slower speeds for digits (p = 0.029) and more characters for character markings (p < 0.001). No significant correlations were found with cognitive functions or demographics. These findings suggest that drivers first scan broadly, then concentrate on markings as they approach. For optimal recognition, simple or essential information should be placed centrally or lower, while detailed content should appear higher to align with natural gaze patterns. In high-speed environments, markings should prioritize clarity and brevity in central positions to ensure safe and rapid recognition.
"Sequential Fixation Behavior in Road Marking Recognition: Implications for Design." Takaya Maeyama, Hiroki Okada, Daisuke Sawamura. Journal of Eye Movement Research, 18(5), 2025-10-21. doi:10.3390/jemr18050059. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565692/pdf/
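The horizontal-dispersion measure analyzed in this study is commonly computed as the standard deviation of fixation x-coordinates within each analysis window. A minimal sketch of that computation, assuming pixel coordinates; the function name and the fixation samples are illustrative, not the authors' pipeline:

```python
import statistics

def horizontal_dispersion(fixations):
    """Population standard deviation of fixation x-coordinates (pixels).

    `fixations` is a list of (x, y) gaze positions; a larger value means
    gaze was spread more widely along the horizontal axis.
    """
    xs = [x for x, _ in fixations]
    return statistics.pstdev(xs)

# Fabricated fixation samples for the three recognition phases:
before = [(100, 300), (400, 310), (700, 305)]   # broad horizontal scan
during = [(395, 200), (405, 190), (400, 210)]   # concentrated on the marking
after  = [(150, 300), (500, 295), (750, 310)]

for phase, fx in (("before", before), ("during", during), ("after", after)):
    print(phase, round(horizontal_dispersion(fx), 1))
```

With samples like these, dispersion collapses during the recognition window and recovers afterwards, matching the broad-scan / concentrate / broad-scan pattern the study reports.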
Marina Norkina, Daria Chernova, Svetlana Alexeeva, Maria Harchevnik
Oculomotor reading behavior is influenced by both universal factors, like the "big three" of word length, frequency, and contextual predictability, and language-specific factors, such as script and grammar. The aim of this study was to examine the influence of the "big three" factors on L2 reading, focusing on a typologically distant L1/L2 pair with dramatic differences in script and grammar. A total of 41 native Chinese-speaking learners of Russian (levels A2-B2) and 40 native Russian speakers read a corpus of 90 Russian sentences for comprehension. Their eye movements were recorded with an EyeLink 1000+ eye tracker. We analyzed both early (gaze duration and skipping rate) and late (regression rate and rereading time) eye movement measures. As expected, the "big three" effects influenced oculomotor behavior in both L1 and L2 readers, being more pronounced for L2, but substantial differences were also revealed. Word frequency in L1 reading primarily influenced early processing stages, whereas in L2 reading it remained significant in later stages as well. Predictability had an immediate effect on skipping rates in L1 reading, while L2 readers only exhibited it in late measures. Word length was the only factor that interacted with L2 language exposure, demonstrating adjustment to the alphabetic script and polymorphemic word structure. Our findings provide new insights into the processing challenges of L2 readers with typologically distant L1 backgrounds.
"Oculomotor Behavior of L2 Readers with Typologically Distant L1 Background: The "Big Three" Effects of Word Length, Frequency, and Predictability." Journal of Eye Movement Research, 18(5), 2025-10-18. doi:10.3390/jemr18050058. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565054/pdf/
This study investigated the effectiveness of visual strategies in guiding gaze behavior and attention on Yi graphic symbols using eye-tracking. Four strategies (color brightness, layering, line guidance, and size variation) were tested with 34 Thai participants unfamiliar with Yi symbol meanings. Gaze sequence analysis, using Levenshtein distance and similarity ratio, showed that bright colors, layered arrangements, and connected lines enhanced alignment with intended gaze sequences, while size variation had minimal effect. Bright red symbols and lines captured faster initial fixations (Time to First Fixation, TTFF) on key Areas of Interest (AOIs), unlike layering and size. Lines reduced dwell time at sequence starts, promoting efficient progression, while larger symbols sustained longer attention, though inconsistently. Color and layering showed no consistent dwell time effects. These findings inform Yi graphic symbol design for effective cross-cultural visual communication.
"Visual Strategies for Guiding Gaze Sequences and Attention in Yi Symbols: Eye-Tracking Insights." Bo Yuan, Sakol Teeravarunyou. Journal of Eye Movement Research, 18(5), 2025-10-16. doi:10.3390/jemr18050057. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565636/pdf/
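The sequence-comparison metrics named in this abstract (Levenshtein distance and a similarity ratio) can be sketched by encoding each fixated AOI as a single character and comparing the observed order against the intended one. A hedged example; the AOI encoding and the sequences below are hypothetical, not the study's data, and the ratio uses Python's stdlib `difflib`:

```python
from difflib import SequenceMatcher

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity_ratio(a, b):
    """difflib-style ratio in [0, 1]; 1.0 means identical sequences."""
    return SequenceMatcher(None, a, b).ratio()

# Each letter is one fixated AOI; 'ABCD' is the intended viewing order.
intended = "ABCD"
observed = "ABDC"   # hypothetical participant swapped the last two AOIs
print(levenshtein(intended, observed))       # 2 (two substitutions)
print(similarity_ratio(intended, observed))  # 0.75
```

Averaging such per-trial scores across participants gives a simple index of how well a design strategy steered gaze along the intended sequence.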
Mark Mervic, Ema Grašič, Polona Jaki Mekjavić, Nataša Vidovič Valentinčič, Ana Fakin
Real-world navigation depends on coordinated head-eye behaviour that standard tests of visual function miss. We investigated how visual impairment affects traffic navigation, whether behaviour differs by visual impairment type, and whether this functional grouping better explains performance than WHO categorisation. Using a virtual reality (VR) headset with integrated head and eye tracking, we evaluated detection of moving cars and safe road-crossing opportunities in 40 patients with central, peripheral, or combined visual impairment and 19 controls. Only two patients with a combination of very low visual acuity and severely constricted visual fields failed both visual tasks. Overall, patients identified safe-crossing intervals 1.3-1.5 s later than controls (p ≤ 0.01). Head-eye movement profiles diverged by visual impairment: patients with central impairment showed shorter, more frequent saccades (p < 0.05); patients with peripheral impairment showed exploratory behaviour similar to controls; while patients with combined impairment executed fewer microsaccades (p < 0.05), reduced total macrosaccade amplitude (p < 0.05), and fewer head turns (p < 0.05). Classification by impairment type explained behaviour better than WHO categorisation. These findings challenge acuity/field-based classifications and support integrating functional metrics into risk stratification and targeted rehabilitation, with VR providing a safe, scalable assessment tool.
"Head and Eye Movements During Pedestrian Crossing in Patients with Visual Impairment: A Virtual Reality Eye Tracking Study." Journal of Eye Movement Research, 18(5), 2025-10-15. doi:10.3390/jemr18050055. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565098/pdf/
Dyslexia is a neurodevelopmental disorder that impairs reading, affecting 5-17.5% of children and representing the most common learning disability. Individuals with dyslexia experience decoding, reading fluency, and comprehension difficulties, hindering vocabulary development and learning. Early and accurate identification is essential for targeted interventions. Traditional diagnostic methods rely on behavioral assessments and neuropsychological tests, which can be time-consuming and subjective. Recent studies suggest that physiological signals, such as electrooculography (EOG), can provide objective insights into reading-related cognitive and visual processes. Despite this potential, there is limited research on how typeface and font characteristics influence reading performance in dyslexic children using EOG measurements. To address this gap, we investigated the most suitable typefaces for Turkish-speaking children with dyslexia by analyzing EOG signals recorded during reading tasks. We developed a novel deep learning framework, DyslexiaNet, using scalogram images from horizontal and vertical EOG channels, and compared it with AlexNet, MobileNet, and ResNet. Reading performance indicators, including reading time, blink rate, regression rate, and EOG signal energy, were evaluated across multiple typefaces and font sizes. Results showed that typeface significantly affects reading efficiency in dyslexic children. The BonvenoCF font was associated with shorter reading times, fewer regressions, and lower cognitive load. DyslexiaNet achieved the highest classification accuracy (99.96% for horizontal channels) while requiring lower computational load than other networks. These findings demonstrate that EOG-based physiological measurements combined with deep learning offer a non-invasive, objective approach for dyslexia detection and personalized typeface selection. 
This method can provide practical guidance for designing educational materials and support clinicians in early diagnosis and individualized intervention strategies for children with dyslexia.
"DyslexiaNet: Examining the Viability and Efficacy of Eye Movement-Based Deep Learning for Dyslexia Detection." Ramis İleri, Çiğdem Gülüzar Altıntop, Fatma Latifoğlu, Esra Demirci. Journal of Eye Movement Research, 18(5), 2025-10-15. doi:10.3390/jemr18050056. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565239/pdf/
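For readers unfamiliar with the scalogram inputs described above: a scalogram is the matrix of continuous-wavelet-transform coefficients of a 1-D signal across scales, rendered as an image. The sketch below uses a Ricker (Mexican-hat) wavelet purely for self-containment; the paper's actual wavelet choice and preprocessing are not specified here, and the EOG trace is fabricated:

```python
import math

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet: `points` samples at width parameter `a`."""
    amp = 2.0 / (math.sqrt(3.0 * a) * math.pi ** 0.25)
    centered = (i - (points - 1) / 2.0 for i in range(points))
    return [amp * (1 - (t / a) ** 2) * math.exp(-t * t / (2 * a * a))
            for t in centered]

def cwt(signal, scales):
    """Continuous wavelet transform: one row of coefficients per scale.

    The magnitudes of this matrix are what would be rendered as a scalogram
    image (time on the x-axis, scale/frequency on the y-axis).
    """
    n = len(signal)
    rows = []
    for a in scales:
        w = ricker(min(10 * int(a), n), a)
        half = len(w) // 2
        row = []
        for t in range(n):
            acc = 0.0
            for k, wk in enumerate(w):
                idx = t + k - half
                if 0 <= idx < n:
                    acc += signal[idx] * wk  # zero-padded convolution
            row.append(acc)
        rows.append(row)
    return rows

# Fabricated horizontal-EOG trace: a slow oscillation plus a linear drift.
sig = [math.sin(2 * math.pi * 2 * t / 100.0) + 0.01 * t for t in range(200)]
coeffs = cwt(sig, scales=[1, 2, 4, 8])
print(len(coeffs), "scales x", len(coeffs[0]), "samples")
```

In a pipeline like the paper's, such coefficient matrices (one per EOG channel) would be saved as images and fed to the convolutional classifier.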
Antonio Ríder-Vázquez, Estanislao Gutiérrez-Sánchez, Clara Martinez-Perez, María Carmen Sánchez-González
Background: Hand-eye coordination is essential for daily functioning and sports performance, but standardized digital protocols for its reliable assessment are limited. This study aimed to evaluate the intra-examiner repeatability and inter-examiner reproducibility of a computerized protocol (COI-SV®) for assessing hand-eye coordination in healthy adults, as well as the influence of age and sex. Methods: Seventy-eight adults completed four sessions of a computerized visual-motor task requiring rapid and accurate responses to randomly presented targets. Accuracy and response times were analyzed using repeated-measures and reliability analyses. Results: Accuracy showed a small session effect and minor examiner differences on the first day, whereas response times were consistent across sessions. Men generally responded faster than women, and response times increased slightly with age. Overall, reliability indices indicated moderate-to-good repeatability and reproducibility for both accuracy and response time measures. Conclusions: The COI-SV® protocol provides a robust, objective, and reproducible measurement of hand-eye coordination, supporting its use in clinical, sports, and research settings.
"Test-Retest Reliability of a Computerized Hand-Eye Coordination Task." Journal of Eye Movement Research, 18(5), 2025-10-14. doi:10.3390/jemr18050054. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565402/pdf/
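The abstract reports "reliability indices" without naming them; a common choice for intra-/inter-examiner designs like this one is the intraclass correlation ICC(2,1) (two-way random effects, absolute agreement, Shrout & Fleiss). A minimal sketch under that assumption, with fabricated scores:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random-effects, absolute-agreement, single measure.

    `data` is a list of rows, one row per subject and one column per
    examiner/session. Returns a reliability index (1.0 = perfect agreement).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # examiners
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                               # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Four hypothetical subjects scored by two examiners; examiner 2 runs 1 higher.
scores = [[8, 9], [4, 5], [6, 7], [2, 3]]
print(round(icc_2_1(scores), 3))  # high despite the constant examiner offset
```

Because ICC(2,1) is an absolute-agreement index, the constant one-point examiner offset lowers it slightly below 1.0; a consistency index such as ICC(3,1) would ignore that offset.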
Accurate recognition of basic facial emotions is well documented, yet the mechanisms of misclassification and their relation to gaze allocation remain under-reported. The present study utilized a within-subjects eye-tracking design to examine both accurate and inaccurate recognition of five basic emotions (anger, disgust, fear, happiness, and sadness) in healthy young adults. Fifty participants (twenty-four women) completed a forced-choice categorization task with 10 stimuli (female/male poser × emotion). A remote eye tracker (60 Hz) recorded fixations mapped to eyes, nose, and mouth areas of interest (AOIs). The analyses combined accuracy and decision-time statistics with heatmap comparisons of misclassified versus accurate trials within the same image. Overall accuracy was 87.8% (439/500). Misclassification patterns depended on the target emotion, but not on participant gender. The male fear expression was most often misclassified (typically as disgust), and the female sadness expression was frequently labeled as fear or disgust; disgust was the most incorrectly attributed response. For accurate trials, decision time showed main effects of emotion (p < 0.001) and participant gender (p = 0.033): happiness was categorized fastest and anger slowest, and women responded faster overall, with particularly fast response times for sadness. The AOI results revealed strong main effects and an AOI × emotion interaction (p < 0.001): eyes received the most fixations, but fear drew relatively more mouth sampling and sadness more nose sampling. Crucially, heatmaps showed an upper-face bias (eye AOI) in inaccurate trials, whereas accurate trials retained eye sampling and added nose and mouth AOI coverage, which aligned with diagnostic cues. These findings indicate that the scanpath strategy, in addition to information availability, underpins success and failure in basic-emotion recognition, with implications for theory, targeted training, and affective technologies.
"Recognition and Misclassification Patterns of Basic Emotional Facial Expressions: An Eye-Tracking Study in Young Healthy Adults." Neşe Alkan. Journal of Eye Movement Research, 18(5), 2025-10-11. doi:10.3390/jemr18050053. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565503/pdf/
In safety-critical systems like nuclear power plants, the rapid and accurate perception of visual interface information is vital. This study investigates the relationship between visual attention dispersion measured via heatmap entropy (as a specific measure of gaze entropy) and response time during information search tasks. Sixteen participants viewed a prototype of an accident response support system and answered questions at three difficulty levels while their eye movements were tracked using Tobii Pro Glasses 2. Results showed a significant positive correlation (r = 0.595, p < 0.01) between heatmap entropy and response time, indicating that more dispersed attention leads to longer task completion times. This pattern held consistently across all difficulty levels. These findings suggest that heatmap entropy is a useful metric for evaluating user attention strategies and can inform interface usability assessments in high-stakes environments.
{"title":"The Effect of Visual Attention Dispersion on Cognitive Response Time.","authors":"Yejin Lee, Kwangtae Jung","doi":"10.3390/jemr18050052","DOIUrl":"10.3390/jemr18050052","url":null,"abstract":"<p><p>In safety-critical systems like nuclear power plants, the rapid and accurate perception of visual interface information is vital. This study investigates the relationship between visual attention dispersion measured via heatmap entropy (as a specific measure of gaze entropy) and response time during information search tasks. Sixteen participants viewed a prototype of an accident response support system and answered questions at three difficulty levels while their eye movements were tracked using Tobii Pro Glasses 2. Results showed a significant positive correlation (r = 0.595, <i>p</i> < 0.01) between heatmap entropy and response time, indicating that more dispersed attention leads to longer task completion times. This pattern held consistently across all difficulty levels. These findings suggest that heatmap entropy is a useful metric for evaluating user attention strategies and can inform interface usability assessments in high-stakes environments.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 5","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12564979/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145390345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aryaman Taore, Gabriel Lobo, Philip R K Turnbull, Steven C Dakin
Purpose: To investigate the efficacy of a novel test for diagnosing colour vision deficiencies using reflexive eye movements measured with an unmodified tablet.
Methods: This cross-sectional study recruited thirty-three participants aged between 17 and 65 years: 23 controls, 8 deuteranopes, and 2 protanopes. An anomaloscope was used to determine each participant's colour vision status. An Apple iPad Pro's built-in eye-tracking capabilities recorded eye movements in response to coloured patterns drifting on the screen, and an automated analysis of these movements estimated each individual's red-green equiluminant point and equivalent luminance contrast.
Results: Estimates of the red-green equiluminant point and the equivalent luminance contrast were used to classify participants' colour vision status with a sensitivity of 90.0% and a specificity of 91.3%.
Conclusions: The novel colour vision test administered using an unmodified tablet was found to be effective in diagnosing colour vision deficiencies and has the potential to be a practical and cost-effective alternative to traditional methods.
Translation Relevance: The test's objectivity, its straightforward implementation on a standard tablet, and its minimal requirement for patient cooperation all contribute to the wider accessibility of colour vision diagnosis. This is particularly advantageous for demographics like children, who might be challenging to engage but for whom early detection is of paramount importance.
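The reported classification rates follow from standard confusion-matrix definitions. A short sketch, using illustrative counts consistent with the reported rates and cohort (10 colour-deficient participants, 23 controls) — the paper itself reports only the rates, not the underlying confusion matrix:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts matching the reported rates: 9 of the
# 10 colour-deficient participants flagged (sensitivity 90.0%),
# 21 of the 23 controls correctly cleared (specificity ~91.3%).
sens, spec = sensitivity_specificity(tp=9, fn=1, tn=21, fp=2)
```

With these counts, sensitivity comes out to 9/10 = 90.0% and specificity to 21/23 ≈ 91.3%, matching the abstract's figures.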
{"title":"Diagnosing Colour Vision Deficiencies Using Eye Movements (Without Dedicated Eye-Tracking Hardware).","authors":"Aryaman Taore, Gabriel Lobo, Philip R K Turnbull, Steven C Dakin","doi":"10.3390/jemr18050051","DOIUrl":"10.3390/jemr18050051","url":null,"abstract":"<p><strong>Purpose: </strong>To investigate the efficacy of a novel test for diagnosing colour vision deficiencies using reflexive eye movements measured using an unmodified tablet.</p><p><strong>Methods: </strong>This study followed a cross-sectional design, where thirty-three participants aged between 17 and 65 years were recruited. The participant group comprised 23 controls, 8 deuteranopes, and 2 protanopes. An anomaloscope was employed to determine the colour vision status of these participants. The study methodology involved using an Apple iPad Pro's built-in eye-tracking capabilities to record eye movements in response to coloured patterns drifting on the screen. Through an automated analysis of these movements, the researchers estimated individuals' red-green equiluminant point and their equivalent luminance contrast.</p><p><strong>Results: </strong>Estimates of the red-green equiluminant point and the equivalent luminance contrast were used to classify participants' colour vision status with a sensitivity rate of 90.0% and a specificity rate of 91.30%.</p><p><strong>Conclusions: </strong>The novel colour vision test administered using an unmodified tablet was found to be effective in diagnosing colour vision deficiencies and has the potential to be a practical and cost-effective alternative to traditional methods. Translation Relevance: The test's objectivity, its straightforward implementation on a standard tablet, and its minimal requirement for patient cooperation, all contribute to the wider accessibility of colour vision diagnosis. 
This is particularly advantageous for demographics like children who might be challenging to engage, but for whom early detection is of paramount importance.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"18 5","pages":""},"PeriodicalIF":2.8,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12565516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145390286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}