
Latest publications from the Journal of Eye Movement Research

DyslexiaNet: Examining the Viability and Efficacy of Eye Movement-Based Deep Learning for Dyslexia Detection.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-10-15 | DOI: 10.3390/jemr18050056
Ramis İleri, Çiğdem Gülüzar Altıntop, Fatma Latifoğlu, Esra Demirci

Dyslexia is a neurodevelopmental disorder that impairs reading, affecting 5-17.5% of children and representing the most common learning disability. Individuals with dyslexia experience decoding, reading fluency, and comprehension difficulties, hindering vocabulary development and learning. Early and accurate identification is essential for targeted interventions. Traditional diagnostic methods rely on behavioral assessments and neuropsychological tests, which can be time-consuming and subjective. Recent studies suggest that physiological signals, such as electrooculography (EOG), can provide objective insights into reading-related cognitive and visual processes. Despite this potential, there is limited research on how typeface and font characteristics influence reading performance in dyslexic children using EOG measurements. To address this gap, we investigated the most suitable typefaces for Turkish-speaking children with dyslexia by analyzing EOG signals recorded during reading tasks. We developed a novel deep learning framework, DyslexiaNet, using scalogram images from horizontal and vertical EOG channels, and compared it with AlexNet, MobileNet, and ResNet. Reading performance indicators, including reading time, blink rate, regression rate, and EOG signal energy, were evaluated across multiple typefaces and font sizes. Results showed that typeface significantly affects reading efficiency in dyslexic children. The BonvenoCF font was associated with shorter reading times, fewer regressions, and lower cognitive load. DyslexiaNet achieved the highest classification accuracy (99.96% for horizontal channels) while requiring lower computational load than other networks. These findings demonstrate that EOG-based physiological measurements combined with deep learning offer a non-invasive, objective approach for dyslexia detection and personalized typeface selection. This method can provide practical guidance for designing educational materials and support clinicians in early diagnosis and individualized intervention strategies for children with dyslexia.
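
The abstract does not spell out how the scalogram inputs are produced. A minimal sketch of one common approach — a continuous wavelet transform of a single EOG channel, here with PyWavelets — is shown below; the Morlet wavelet, 256 Hz sampling rate, and 64-scale range are illustrative assumptions, not the authors' reported settings.

# Sketch: turning a 1-D EOG channel into a scalogram image for a CNN,
# assuming a Morlet wavelet, 256 Hz sampling, and 64 scales (illustrative
# choices; the paper does not specify these parameters).
import numpy as np
import pywt

def eog_to_scalogram(eog: np.ndarray, fs: float = 256.0, n_scales: int = 64) -> np.ndarray:
    """Return a (n_scales, len(eog)) matrix of |CWT| coefficients."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _freqs = pywt.cwt(eog, scales, "morl", sampling_period=1.0 / fs)
    scalogram = np.abs(coeffs)
    # Normalise to [0, 1] so the matrix can be saved or fed to a network as an image.
    return (scalogram - scalogram.min()) / (np.ptp(scalogram) + 1e-12)

# Example with a synthetic horizontal-channel signal:
t = np.linspace(0, 4, 4 * 256)
fake_eog = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)
img = eog_to_scalogram(fake_eog)
print(img.shape)  # (64, 1024)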

Citations: 0
Test-Retest Reliability of a Computerized Hand-Eye Coordination Task.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-10-14 | DOI: 10.3390/jemr18050054
Antonio Ríder-Vázquez, Estanislao Gutiérrez-Sánchez, Clara Martinez-Perez, María Carmen Sánchez-González

Background: Hand-eye coordination is essential for daily functioning and sports performance, but standardized digital protocols for its reliable assessment are limited. This study aimed to evaluate the intra-examiner repeatability and inter-examiner reproducibility of a computerized protocol (COI-SV®) for assessing hand-eye coordination in healthy adults, as well as the influence of age and sex. Methods: Seventy-eight adults completed four sessions of a computerized visual-motor task requiring rapid and accurate responses to randomly presented targets. Accuracy and response times were analyzed using repeated-measures and reliability analyses. Results: Accuracy showed a small session effect and minor examiner differences on the first day, whereas response times were consistent across sessions. Men generally responded faster than women, and response times increased slightly with age. Overall, reliability indices indicated moderate-to-good repeatability and reproducibility for both accuracy and response time measures. Conclusions: The COI-SV® protocol provides a robust, objective, and reproducible measurement of hand-eye coordination, supporting its use in clinical, sports, and research settings.
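
The abstract reports moderate-to-good repeatability and reproducibility but does not name the reliability index. One common choice for such subjects-by-sessions designs is the two-way random-effects ICC(2,1); the sketch below computes it with NumPy. The ICC variant and the simulated data are assumptions for illustration only.

# Sketch: ICC(2,1) (two-way random effects, absolute agreement, single
# measurement) for a subjects x sessions score matrix. The study does not
# state which reliability index it used; ICC(2,1) is just a common choice.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """scores: shape (n_subjects, k_sessions)."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Example: 78 subjects x 4 sessions of simulated response times (ms).
rng = np.random.default_rng(0)
subject_effect = rng.normal(450, 40, size=(78, 1))
data = subject_effect + rng.normal(0, 25, size=(78, 4))
print(round(icc_2_1(data), 3))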

Citations: 0
Recognition and Misclassification Patterns of Basic Emotional Facial Expressions: An Eye-Tracking Study in Young Healthy Adults.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-10-11 | DOI: 10.3390/jemr18050053
Neşe Alkan

Accurate recognition of basic facial emotions is well documented, yet the mechanisms of misclassification and their relation to gaze allocation remain under-reported. The present study utilized a within-subjects eye-tracking design to examine both accurate and inaccurate recognition of five basic emotions (anger, disgust, fear, happiness, and sadness) in healthy young adults. Fifty participants (twenty-four women) completed a forced-choice categorization task with 10 stimuli (female/male poser × emotion). A remote eye tracker (60 Hz) recorded fixations mapped to eyes, nose, and mouth areas of interest (AOIs). The analyses combined accuracy and decision-time statistics with heatmap comparisons of misclassified versus accurate trials within the same image. Overall accuracy was 87.8% (439/500). Misclassification patterns depended on the target emotion, but not on participant gender. Fear male was most often misclassified (typically as disgust), and sadness female was frequently labeled as fear or disgust; disgust was the most incorrectly attributed response. For accurate trials, decision time showed main effects of emotion (p < 0.001) and participant gender (p = 0.033): happiness was categorized fastest and anger slowest, and women responded faster overall, with particularly fast response times for sadness. The AOI results revealed strong main effects and an AOI × emotion interaction (p < 0.001): eyes received the most fixations, but fear drew relatively more mouth sampling and sadness more nose sampling. Crucially, heatmaps showed an upper-face bias (eye AOI) in inaccurate trials, whereas accurate trials retained eye sampling and added nose and mouth AOI coverage, which aligned with diagnostic cues. These findings indicate that the scanpath strategy, in addition to information availability, underpins success and failure in basic-emotion recognition, with implications for theory, targeted training, and affective technologies.
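
A core step in any AOI analysis of this kind is assigning each fixation to the eyes, nose, or mouth region. The sketch below shows one simple way to do this with rectangular AOIs; the coordinates are invented for illustration and are not the study's actual AOI boundaries.

# Sketch: assigning fixation points to rectangular eyes/nose/mouth AOIs.
# The AOI rectangles are made-up screen coordinates; the study's actual
# AOI boundaries are not published in the abstract.
from typing import Dict, List, Optional, Tuple

Rect = Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max

AOIS: Dict[str, Rect] = {
    "eyes":  (300, 200, 700, 320),
    "nose":  (430, 320, 570, 450),
    "mouth": (400, 450, 600, 560),
}

def label_fixation(x: float, y: float, aois: Dict[str, Rect] = AOIS) -> Optional[str]:
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # fixation outside all AOIs

def aoi_counts(fixations: List[Tuple[float, float]]) -> Dict[str, int]:
    counts = {name: 0 for name in AOIS}
    for x, y in fixations:
        label = label_fixation(x, y)
        if label is not None:
            counts[label] += 1
    return counts

print(aoi_counts([(350, 250), (500, 400), (510, 500), (900, 100)]))
# {'eyes': 1, 'nose': 1, 'mouth': 1}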

Citations: 0
The Effect of Visual Attention Dispersion on Cognitive Response Time.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-10-10 | DOI: 10.3390/jemr18050052
Yejin Lee, Kwangtae Jung

In safety-critical systems like nuclear power plants, the rapid and accurate perception of visual interface information is vital. This study investigates the relationship between visual attention dispersion measured via heatmap entropy (as a specific measure of gaze entropy) and response time during information search tasks. Sixteen participants viewed a prototype of an accident response support system and answered questions at three difficulty levels while their eye movements were tracked using Tobii Pro Glasses 2. Results showed a significant positive correlation (r = 0.595, p < 0.01) between heatmap entropy and response time, indicating that more dispersed attention leads to longer task completion times. This pattern held consistently across all difficulty levels. These findings suggest that heatmap entropy is a useful metric for evaluating user attention strategies and can inform interface usability assessments in high-stakes environments.
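
Heatmap (gaze) entropy of this kind is typically computed as the Shannon entropy of a binned fixation-density map, which can then be correlated with response time. The sketch below illustrates that pipeline; the 16 × 16 grid, screen size, and simulated trials are assumptions, not the study's parameters.

# Sketch: heatmap (gaze) entropy as the Shannon entropy of a binned fixation
# density, then correlated with response time. The bin count and screen size
# are illustrative choices; the paper's exact grid is not given here.
import numpy as np
from scipy import stats

def heatmap_entropy(x, y, bins=16, screen=(1920, 1080)) -> float:
    hist, _, _ = np.histogram2d(x, y, bins=bins,
                                range=[[0, screen[0]], [0, screen[1]]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())  # bits

# Per-trial entropy and response time, then Pearson r as in the abstract.
rng = np.random.default_rng(1)
entropies, rts = [], []
for _ in range(48):
    spread = rng.uniform(50, 400)                      # how dispersed the gaze is
    x = rng.normal(960, spread, 120).clip(0, 1920)
    y = rng.normal(540, spread, 120).clip(0, 1080)
    entropies.append(heatmap_entropy(x, y))
    rts.append(2.0 + 0.8 * spread / 400 + rng.normal(0, 0.2))  # seconds

r, p = stats.pearsonr(entropies, rts)
print(f"r = {r:.3f}, p = {p:.4f}")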

Citations: 0
Diagnosing Colour Vision Deficiencies Using Eye Movements (Without Dedicated Eye-Tracking Hardware).
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-10-02 | DOI: 10.3390/jemr18050051
Aryaman Taore, Gabriel Lobo, Philip R K Turnbull, Steven C Dakin

Purpose: To investigate the efficacy of a novel test for diagnosing colour vision deficiencies using reflexive eye movements measured using an unmodified tablet.

Methods: This study followed a cross-sectional design, where thirty-three participants aged between 17 and 65 years were recruited. The participant group comprised 23 controls, 8 deuteranopes, and 2 protanopes. An anomaloscope was employed to determine the colour vision status of these participants. The study methodology involved using an Apple iPad Pro's built-in eye-tracking capabilities to record eye movements in response to coloured patterns drifting on the screen. Through an automated analysis of these movements, the researchers estimated individuals' red-green equiluminant point and their equivalent luminance contrast.

Results: Estimates of the red-green equiluminant point and the equivalent luminance contrast were used to classify participants' colour vision status with a sensitivity rate of 90.0% and a specificity rate of 91.30%.

Conclusions: The novel colour vision test administered using an unmodified tablet was found to be effective in diagnosing colour vision deficiencies and has the potential to be a practical and cost-effective alternative to traditional methods. Translation Relevance: The test's objectivity, its straightforward implementation on a standard tablet, and its minimal requirement for patient cooperation, all contribute to the wider accessibility of colour vision diagnosis. This is particularly advantageous for demographics like children who might be challenging to engage, but for whom early detection is of paramount importance.
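
For readers unfamiliar with the sensitivity and specificity figures reported in the Results above, the sketch below shows how such values are derived from binary classifications compared against the anomaloscope diagnosis. The threshold rule on equivalent luminance contrast and the data are invented for illustration; they are not the classifier or cut-off used in the study.

# Sketch: computing sensitivity/specificity from binary predictions. The
# contrast cut-off and the data are invented; they do not reproduce the
# paper's actual classifier.
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    """y_true/y_pred: 1 = colour vision deficient, 0 = normal trichromat."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data: anomaloscope diagnosis vs. a contrast-threshold rule.
equiv_contrast = np.array([0.02, 0.04, 0.31, 0.28, 0.05, 0.35, 0.03, 0.40, 0.06, 0.01])
diagnosis      = np.array([0,    0,    1,    1,    0,    1,    0,    1,    0,    0])
prediction = (equiv_contrast > 0.15).astype(int)   # invented cut-off

sens, spec = sensitivity_specificity(diagnosis, prediction)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")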

Citations: 0
Visual Attention to Economic Information in Simulated Ophthalmic Deficits: A Remote Eye-Tracking Study.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-10-02 | DOI: 10.3390/jemr18050050
Cansu Yuksel Elgin, Ceyhun Elgin

This study investigated how simulated ophthalmic visual field deficits affect visual attention and economic information processing. Using webcam-based eye tracking, 227 participants with normal vision recruited through Amazon Mechanical Turk were assigned to control, central vision loss, peripheral vision loss, or scattered vision loss simulation conditions. Participants viewed economic stimuli of varying complexity while eye movements, cognitive load, and comprehension were measured. All deficit conditions showed altered oculomotor behaviors. Central vision loss produced the most severe impairments: 43.6% increased fixation durations, 68% longer scanpaths, and comprehension accuracy of 61.2% versus 87.3% for controls. Visual deficits interacted with information complexity, showing accelerated impairment for complex stimuli. Mediation analysis revealed 47% of comprehension deficits were mediated through altered attention patterns. Cognitive load was significantly elevated, with central vision loss participants reporting 84% higher mental demand than controls. These findings demonstrate that visual field deficits fundamentally alter economic information processing through both direct perceptual limitations and compensatory attention strategies. Results demonstrate the feasibility of webcam-based eye tracking for studying simulated visual deficits and suggest that different types of simulated visual deficits may require distinct information presentation strategies.
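
The 47% figure above comes from a mediation analysis. A minimal product-of-coefficients sketch (deficit condition → attention measure → comprehension) using statsmodels is shown below; the variable names and simulated data are illustrative, and a published analysis would normally add bootstrapped confidence intervals for the indirect effect.

# Sketch: a product-of-coefficients mediation analysis (X = deficit condition,
# M = attention measure, Y = comprehension), as one way to obtain a
# "proportion mediated" figure. Data are simulated; the paper's exact model
# (and any bootstrapping) is not reproduced here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 227
x = rng.integers(0, 2, n).astype(float)          # 0 = control, 1 = simulated deficit
m = 0.8 * x + rng.normal(0, 1, n)                # attention dispersion
y = -0.5 * m - 0.3 * x + rng.normal(0, 1, n)     # comprehension score

total    = sm.OLS(y, sm.add_constant(x)).fit()                       # total effect c
mediator = sm.OLS(m, sm.add_constant(x)).fit()                       # path a
outcome  = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit() # paths c', b

a, b = mediator.params[1], outcome.params[2]
c = total.params[1]
indirect = a * b
print(f"indirect effect = {indirect:.3f}, proportion mediated = {indirect / c:.2%}")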

Citations: 0
Guiding the Gaze: How Bionic Reading Influences Eye Movements.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-10-01 | DOI: 10.3390/jemr18050049
T R Beelders

In recent years, Bionic reading has been introduced as a means to combat superficial reading and low comprehension rates. This paper investigates eye movements between participants who read a passage in standard font and an additional Bionic font passage. It was found that Bionic font does not significantly change eye movements when reading. Fixation durations, number of fixations and reading speeds were not significantly different between the two formats. Furthermore, fixations were spread throughout the word and not only on leading characters, even when using Bionic font; hence, participants were not able to "auto-complete" the words. Additionally, Bionic font did not facilitate easier processing of low-frequency or unfamiliar words. Overall, it would appear that Bionic font, in the short term, does not affect reading. Further investigation is needed to determine whether a long-term intervention with Bionic font is more meaningful than standard interventions.

Citations: 0
Tracking the Impact of Age and Dimensional Shifts on Situation Model Updating During Narrative Text Comprehension.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-09-26 | DOI: 10.3390/jemr18050048
César Campos-Rojas, Romualdo Ibáñez-Orellana

Studies on the relationship between age and situation model updating during narrative text reading have mainly used response or reading times. This study enhances previous measures (working memory, recognition probes, and comprehension) by incorporating eye-tracking techniques to compare situation model updating between young and older Chilean adults. The study included 82 participants (40 older adults and 42 young adults) who read two narrative texts under three conditions (no shift, spatial shift, and character shift) using a between-subject (age) and within-subject (dimensional change) design. The results show that, while differences in working memory capacity were observed between the groups, these differences did not impact situation model comprehension. Younger adults performed better in recognition tests regardless of updating conditions. Eye-tracking data showed increased fixation times for dimensional shifts and longer reading times in older adults, with no interaction between age and dimensional shifts.

Citations: 0
A Comprehensive Framework for Eye Tracking: Methods, Tools, Applications, and Cross-Platform Evaluation.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-09-23 | DOI: 10.3390/jemr18050047
Govind Ram Chhimpa, Ajay Kumar, Sunita Garhwal, Dhiraj Kumar, Niyaz Ahmad Wani, Mudasir Ahmad Wani, Kashish Ara Shakil

Eye tracking, a fundamental process in gaze analysis, involves measuring the point of gaze or eye motion. It is crucial in numerous applications, including human-computer interaction (HCI), education, health care, and virtual reality. This study delves into eye-tracking concepts, terminology, performance parameters, applications, and techniques, focusing on modern and efficient approaches such as video-oculography (VOG)-based systems, deep learning models for gaze estimation, wearable and cost-effective devices, and integration with virtual/augmented reality and assistive technologies. These contemporary methods, prevalent for over two decades, significantly contribute to developing cutting-edge eye-tracking applications. The findings underscore the significance of diverse eye-tracking techniques in advancing eye-tracking applications. They leverage machine learning to glean insights from existing data, enhance decision-making, and minimize the need for manual calibration during tracking. Furthermore, the study explores and recommends strategies to address limitations/challenges inherent in specific eye-tracking methods and applications. Finally, the study outlines future directions for leveraging eye tracking across various developed applications, highlighting its potential to continue evolving and enriching user experiences.

Citations: 0
Microsaccade Activity During Visuospatial Working Memory in Early-Stage Parkinson's Disease.
IF 2.8 | CAS Tier 4 (Psychology) | Q3 OPHTHALMOLOGY | Pub Date: 2025-09-22 | DOI: 10.3390/jemr18050046
Katherine Farber, Linjing Jiang, Mario Michiels, Ignacio Obeso, Hoi-Chung Leung

Fixational saccadic eye movements (microsaccades) have been associated with cognitive processes, especially in tasks requiring spatial attention and memory. Alterations in oculomotor and cognitive control are commonly observed in Parkinson's disease (PD), though it is unclear to what extent microsaccade activity is affected. We acquired eye movement data from sixteen participants with early-stage PD and thirteen older healthy controls to examine the effects of dopamine modulation on microsaccade activity during the delay period of a spatial working memory task. Some microsaccade characteristics, like amplitude and duration, were moderately larger in the PD participants when they were "on" their dopaminergic medication than healthy controls, or when they were "off" medication, while PD participants exhibited microsaccades with a linear amplitude-velocity relationship comparable to controls. Both groups showed similar microsaccade rate patterns across task events, with most participants showing a horizontal bias in microsaccade direction during the delay period regardless of the remembered target location. Overall, our data suggest minimal involvement of microsaccades during visuospatial working memory maintenance under conditions without explicit attentional cues in both subject groups. However, moderate effects of PD-related dopamine deficiency were observed for microsaccade size during working memory maintenance.
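
Microsaccades in studies like this are commonly detected with a velocity-threshold method in the spirit of Engbert and Kliegl (2003), after which the linear amplitude-peak-velocity ("main sequence") relationship mentioned above can be checked with a simple fit. The sketch below illustrates both steps; the sampling rate, threshold multiplier, and minimum duration are assumed values, not this study's parameters.

# Sketch: velocity-threshold microsaccade detection in the spirit of
# Engbert & Kliegl (2003), followed by a main-sequence (amplitude vs. peak
# velocity) fit. Sampling rate, lambda, and minimum duration are assumptions.
import numpy as np

def detect_microsaccades(x, y, fs=500.0, lam=6.0, min_samples=3):
    # Velocity via central differences (deg/s), given gaze in degrees.
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    # Median-based noise estimate per axis, then an elliptical threshold.
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0

    events = []  # (amplitude, peak velocity) per detected microsaccade
    start = None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                amp = np.hypot(x[i - 1] - x[start], y[i - 1] - y[start])
                peak_v = np.max(np.hypot(vx[start:i], vy[start:i]))
                events.append((amp, peak_v))
            start = None
    return events

# Main sequence: slope of peak velocity against amplitude across events
# (placeholder values stand in for detector output here).
events = [(0.2, 18.0), (0.5, 40.0), (0.9, 75.0), (0.3, 26.0)]
amps, peaks = np.array(events).T
slope, intercept = np.polyfit(amps, peaks, 1)
print(f"main-sequence slope ≈ {slope:.1f} (deg/s per deg)")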

Citations: 0