
Latest Articles from the Journal of Eye Movement Research

AI Images vs. Real Photographs: Investigating Visual Recognition and Perception.
IF 2.8 · Tier 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-11-03 · DOI: 10.3390/jemr18060061
Veslava Osińska, Weronika Kortas, Adam Szalach, Marc Welter

Recently, the photorealism of generated images has improved noticeably due to the development of AI algorithms. These are high-resolution images of human faces and bodies, cats and dogs, vehicles, and other categories of objects that the untrained eye cannot distinguish from authentic photographs. The study assessed how people perceive 12 pictures generated by AI vs. 12 real photographs. Six main categories of stimuli were selected: architecture, art, faces, cars, landscapes, and pets. The visual perception of the selected images was studied by means of eye tracking, analysing gaze patterns and time characteristics, and compared with respect to the respondent groups' gender and knowledge of AI graphics. After the experiment, the study participants analysed the pictures again in order to describe the reasons for their choices. The results show that AI images of pets and real photographs of architecture were the easiest to identify. The largest differences in visual perception are between men and women, as well as between those experienced in digital graphics (including AI images) and the rest. Based on the analysis, several recommendations are suggested for AI developers and end-users.
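The per-category identification results summarized above can be illustrated with a small aggregation routine (a minimal sketch over a hypothetical trial-record format; the field names `category`, `source`, and `correct` are illustrative, not the authors' data schema):

```python
from collections import defaultdict

def accuracy_by_condition(trials):
    """Per-(category, source) identification accuracy for AI vs. real stimuli.

    `trials` is a list of dicts with keys 'category', 'source' ('ai' or
    'real'), and 'correct' (bool) -- a hypothetical record format.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t in trials:
        key = (t["category"], t["source"])
        totals[key] += 1
        hits[key] += bool(t["correct"])
    return {k: hits[k] / totals[k] for k in totals}

trials = [
    {"category": "pets", "source": "ai", "correct": True},
    {"category": "pets", "source": "ai", "correct": True},
    {"category": "architecture", "source": "real", "correct": True},
    {"category": "architecture", "source": "real", "correct": False},
]
print(accuracy_by_condition(trials))
# {('pets', 'ai'): 1.0, ('architecture', 'real'): 0.5}
```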

Citations: 0
Sequential Fixation Behavior in Road Marking Recognition: Implications for Design.
IF 2.8 · Tier 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-10-21 · DOI: 10.3390/jemr18050059
Takaya Maeyama, Hiroki Okada, Daisuke Sawamura

This study examined how drivers' eye fixations change before, during, and after recognizing road markings, and how these changes relate to driving speed, visual complexity, cognitive functions, and demographics. Twenty licensed drivers viewed on-board movies showing digit or character road markings while their eye movements were tracked. Fixation positions and dispersions were analyzed. Results showed that, regardless of marking type, fixations were horizontally dispersed before and after recognition but became vertically concentrated during recognition, with fixation points shifting higher (p < 0.001) and horizontal dispersion decreasing (p = 0.01). During the recognition period, fixations moved upward and narrowed horizontally toward the final third (p = 0.034), suggesting increased focus. Longer fixation durations were linked to slower driving speeds for digit markings (p = 0.029) and to higher character counts for character markings (p < 0.001). No significant correlations were found with cognitive functions or demographics. These findings suggest that drivers first scan broadly, then concentrate on markings as they approach. For optimal recognition, simple or essential information should be placed centrally or lower, while detailed content should appear higher to align with natural gaze patterns. In high-speed environments, markings should prioritize clarity and brevity in central positions to ensure safe and rapid recognition.
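The horizontal-versus-vertical dispersion measure described above can be sketched as the sample standard deviation of fixation coordinates per phase (a toy illustration with invented gaze points, not the authors' analysis code):

```python
import statistics

def dispersion(fixations):
    """Horizontal and vertical dispersion (sample SD) of fixation points.

    `fixations` is a list of (x, y) gaze coordinates for one phase
    (before / during / after recognition) -- an illustrative format.
    """
    xs = [p[0] for p in fixations]
    ys = [p[1] for p in fixations]
    return statistics.stdev(xs), statistics.stdev(ys)

before = [(120, 300), (480, 310), (300, 295), (610, 305)]  # broad horizontal scan
during = [(390, 180), (400, 260), (395, 340), (405, 420)]  # vertical concentration
print(dispersion(before))  # wide horizontally, narrow vertically
print(dispersion(during))  # narrow horizontally, spread vertically
```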

Citations: 0
Oculomotor Behavior of L2 Readers with Typologically Distant L1 Background: The "Big Three" Effects of Word Length, Frequency, and Predictability.
IF 2.8 · Tier 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-10-18 · DOI: 10.3390/jemr18050058
Marina Norkina, Daria Chernova, Svetlana Alexeeva, Maria Harchevnik

Oculomotor reading behavior is influenced by both universal factors, like the "big three" of word length, frequency, and contextual predictability, and language-specific factors, such as script and grammar. The aim of this study was to examine the influence of the "big three" factors on L2 reading, focusing on a typologically distant L1/L2 pair with dramatic differences in script and grammar. A total of 41 native Chinese-speaking learners of Russian (levels A2-B2) and 40 native Russian speakers read a corpus of 90 Russian sentences for comprehension. Their eye movements were recorded with EyeLink 1000+. We analyzed both early (gaze duration and skipping rate) and late (regression rate and rereading time) eye movement measures. As expected, the "big three" effects influenced oculomotor behavior in both L1 and L2 readers, being more pronounced for L2, but substantial differences were also revealed. Word frequency in L1 reading primarily influenced early processing stages, whereas in L2 reading it remained significant in later stages as well. Predictability had an immediate effect on skipping rates in L1 reading, while L2 readers only exhibited it in late measures. Word length was the only factor that interacted with L2 language exposure, demonstrating adjustment to alphabetic script and polymorphemic word structure. Our findings provide new insights into the processing challenges of L2 readers with typologically distant L1 backgrounds.
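The early measures named above (first-pass gaze duration and skipping rate) can be computed from an ordered fixation stream roughly as follows (a simplified sketch; the `(word_index, duration_ms)` event format is assumed for illustration, not the EyeLink output itself):

```python
def first_pass_measures(fixations, n_words):
    """First-pass gaze duration per word and overall skipping rate.

    A word's first pass ends once the eyes leave it, and skipped-over
    words (never fixated before the eyes move past them) get no
    first-pass time. `fixations` is a time-ordered list of
    (word_index, duration_ms) pairs -- an illustrative format.
    """
    gaze = [0] * n_words
    closed = [False] * n_words   # first pass already over for this word?
    prev = None
    for w, dur in fixations:
        if prev is not None and w != prev:
            closed[prev] = True            # leaving a word ends its first pass
        for i in range(w):
            if gaze[i] == 0:
                closed[i] = True           # passed over without a fixation
        if not closed[w]:
            gaze[w] += dur
        prev = w
    skip_rate = sum(1 for g in gaze if g == 0) / n_words
    return gaze, skip_rate

# words 0,1 read; word 2 skipped, then revisited only via a regression
fix = [(0, 200), (1, 180), (3, 220), (2, 150), (3, 100)]
print(first_pass_measures(fix, 4))  # ([200, 180, 0, 220], 0.25)
```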

Citations: 0
Visual Strategies for Guiding Gaze Sequences and Attention in Yi Symbols: Eye-Tracking Insights.
IF 2.8 · Tier 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-10-16 · DOI: 10.3390/jemr18050057
Bo Yuan, Sakol Teeravarunyou

This study investigated the effectiveness of visual strategies in guiding gaze behavior and attention on Yi graphic symbols using eye-tracking. Four strategies (color brightness, layering, line guidance, and size variation) were tested with 34 Thai participants unfamiliar with Yi symbol meanings. Gaze sequence analysis, using Levenshtein distance and similarity ratio, showed that bright colors, layered arrangements, and connected lines enhanced alignment with intended gaze sequences, while size variation had minimal effect. Bright red symbols and lines captured faster initial fixations (Time to First Fixation, TTFF) on key Areas of Interest (AOIs), unlike layering and size. Lines reduced dwell time at sequence starts, promoting efficient progression, while larger symbols sustained longer attention, though inconsistently. Color and layering showed no consistent dwell time effects. These findings inform Yi graphic symbol design for effective cross-cultural visual communication.
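The gaze-sequence comparison named above, Levenshtein distance with a similarity ratio, can be sketched directly on AOI label strings (normalizing by the longer sequence length is one common convention and is assumed here, since the abstract does not specify it):

```python
def levenshtein(a, b):
    """Edit distance between two AOI gaze-sequence strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity_ratio(a, b):
    """1 - normalized edit distance; 1.0 means identical scan paths."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

intended = "ABCD"   # designed gaze order over four AOIs
observed = "ABDC"   # participant fixated C and D in swapped order
print(levenshtein(intended, observed))       # 2
print(similarity_ratio(intended, observed))  # 0.5
```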

Citations: 0
Head and Eye Movements During Pedestrian Crossing in Patients with Visual Impairment: A Virtual Reality Eye Tracking Study.
IF 2.8 · Tier 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-10-15 · DOI: 10.3390/jemr18050055
Mark Mervic, Ema Grašič, Polona Jaki Mekjavić, Nataša Vidovič Valentinčič, Ana Fakin

Real-world navigation depends on coordinated head-eye behaviour that standard tests of visual function miss. We investigated how visual impairment affects traffic navigation, whether behaviour differs by visual impairment type, and whether this functional grouping better explains performance than WHO categorisation. Using a virtual reality (VR) headset with integrated head and eye tracking, we evaluated detection of moving cars and safe road-crossing opportunities in 40 patients with central, peripheral, or combined visual impairment and 19 controls. Only two patients with a combination of very low visual acuity and severely constricted visual fields failed both visual tasks. Overall, patients identified safe-crossing intervals 1.3-1.5 s later than controls (p ≤ 0.01). Head-eye movement profiles diverged by visual impairment: patients with central impairment showed shorter, more frequent saccades (p < 0.05); patients with peripheral impairment showed exploratory behaviour similar to controls; while patients with combined impairment executed fewer microsaccades (p < 0.05), reduced total macrosaccade amplitude (p < 0.05), and fewer head turns (p < 0.05). Classification by impairment type explained behaviour better than WHO categorisation. These findings challenge acuity/field-based classifications and support integrating functional metrics into risk stratification and targeted rehabilitation, with VR providing a safe, scalable assessment tool.
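The detection-latency comparison above (patients identifying safe-crossing intervals 1.3-1.5 s later than controls) reduces to scanning time-stamped gaze samples for the first sample on a target after an event onset (a toy sketch; the `(timestamp_ms, label)` sample format is hypothetical, not the VR headset's native output):

```python
def detection_latency(samples, target, onset):
    """Latency from event onset to the first gaze sample on a target.

    `samples` is a time-sorted list of (timestamp_ms, label) gaze
    samples; returns None if the target is never looked at.
    """
    for t, label in samples:
        if t >= onset and label == target:
            return t - onset
    return None

samples = [(0, "road"), (120, "road"), (250, "car"), (400, "car")]
print(detection_latency(samples, "car", onset=0))   # 250
print(detection_latency(samples, "sign", onset=0))  # None
```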

Citations: 0
DyslexiaNet: Examining the Viability and Efficacy of Eye Movement-Based Deep Learning for Dyslexia Detection.
IF 2.8 · Tier 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-10-15 · DOI: 10.3390/jemr18050056
Ramis İleri, Çiğdem Gülüzar Altıntop, Fatma Latifoğlu, Esra Demirci

Dyslexia is a neurodevelopmental disorder that impairs reading, affecting 5-17.5% of children and representing the most common learning disability. Individuals with dyslexia experience decoding, reading fluency, and comprehension difficulties, hindering vocabulary development and learning. Early and accurate identification is essential for targeted interventions. Traditional diagnostic methods rely on behavioral assessments and neuropsychological tests, which can be time-consuming and subjective. Recent studies suggest that physiological signals, such as electrooculography (EOG), can provide objective insights into reading-related cognitive and visual processes. Despite this potential, there is limited research on how typeface and font characteristics influence reading performance in dyslexic children using EOG measurements. To address this gap, we investigated the most suitable typefaces for Turkish-speaking children with dyslexia by analyzing EOG signals recorded during reading tasks. We developed a novel deep learning framework, DyslexiaNet, using scalogram images from horizontal and vertical EOG channels, and compared it with AlexNet, MobileNet, and ResNet. Reading performance indicators, including reading time, blink rate, regression rate, and EOG signal energy, were evaluated across multiple typefaces and font sizes. Results showed that typeface significantly affects reading efficiency in dyslexic children. The BonvenoCF font was associated with shorter reading times, fewer regressions, and lower cognitive load. DyslexiaNet achieved the highest classification accuracy (99.96% for horizontal channels) while requiring lower computational load than other networks. These findings demonstrate that EOG-based physiological measurements combined with deep learning offer a non-invasive, objective approach for dyslexia detection and personalized typeface selection. 
This method can provide practical guidance for designing educational materials and support clinicians in early diagnosis and individualized intervention strategies for children with dyslexia.
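The scalogram step described above can be approximated with a hand-rolled complex Morlet transform (a minimal numpy sketch under assumed parameters; the paper's exact wavelet family, scales, and EOG sampling rate are not given in the abstract):

```python
import numpy as np

def morlet_scalogram(signal, fs, freqs, w=6.0):
    """Magnitude scalogram of a 1-D signal via complex Morlet wavelets.

    `fs` is the sampling rate in Hz and `w` the Morlet width parameter;
    both are illustrative assumptions, not the authors' settings.
    """
    sig = np.asarray(signal, dtype=float)
    out = np.empty((len(freqs), len(sig)))
    tw = np.arange(-1.0, 1.0, 1.0 / fs)            # wavelet support
    for i, f in enumerate(freqs):
        s = w / (2 * np.pi * f)                    # Gaussian time scale
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * s**2))
        wavelet /= np.sqrt(np.abs(wavelet).sum())  # crude normalization
        out[i] = np.abs(np.convolve(sig, wavelet, mode="same"))
    return out

fs = 250                                  # Hz, an assumed EOG sampling rate
t = np.arange(0, 2, 1 / fs)
eog = np.sin(2 * np.pi * 4 * t)           # synthetic 4 Hz oscillation
S = morlet_scalogram(eog, fs, freqs=[2, 4, 8])
print(S.shape)                            # (3, 500)
# the 4 Hz row carries the most energy for a 4 Hz input
```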

Citations: 0
Test-Retest Reliability of a Computerized Hand-Eye Coordination Task.
IF 2.8 · Tier 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-10-14 · DOI: 10.3390/jemr18050054
Antonio Ríder-Vázquez, Estanislao Gutiérrez-Sánchez, Clara Martinez-Perez, María Carmen Sánchez-González

Background: Hand-eye coordination is essential for daily functioning and sports performance, but standardized digital protocols for its reliable assessment are limited. This study aimed to evaluate the intra-examiner repeatability and inter-examiner reproducibility of a computerized protocol (COI-SV®) for assessing hand-eye coordination in healthy adults, as well as the influence of age and sex. Methods: Seventy-eight adults completed four sessions of a computerized visual-motor task requiring rapid and accurate responses to randomly presented targets. Accuracy and response times were analyzed using repeated-measures and reliability analyses. Results: Accuracy showed a small session effect and minor examiner differences on the first day, whereas response times were consistent across sessions. Men generally responded faster than women, and response times increased slightly with age. Overall, reliability indices indicated moderate-to-good repeatability and reproducibility for both accuracy and response time measures. Conclusions: The COI-SV® protocol provides a robust, objective, and reproducible measurement of hand-eye coordination, supporting its use in clinical, sports, and research settings.
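Test-retest repeatability of the kind reported above is commonly quantified with an intraclass correlation; a minimal ICC(2,1) (two-way random effects, absolute agreement, single measures) can be written directly from the ANOVA mean squares (a generic reliability sketch, not the authors' analysis pipeline):

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `Y` is an (n_subjects, k_sessions) score matrix.
    """
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_m = Y.mean(axis=1)                          # subject means
    col_m = Y.mean(axis=0)                          # session means
    ssr = k * ((row_m - grand) ** 2).sum()          # between subjects
    ssc = n * ((col_m - grand) ** 2).sum()          # between sessions
    sse = ((Y - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

scores = [[10, 11], [14, 15], [20, 19], [25, 26]]   # 4 subjects x 2 sessions
print(round(icc_2_1(scores), 3))                    # 0.988
```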

Citations: 0
Recognition and Misclassification Patterns of Basic Emotional Facial Expressions: An Eye-Tracking Study in Young Healthy Adults.
IF 2.8 · Tier 4 (Psychology) · Q3 OPHTHALMOLOGY · Pub Date: 2025-10-11 · DOI: 10.3390/jemr18050053
Neşe Alkan

Accurate recognition of basic facial emotions is well documented, yet the mechanisms of misclassification and their relation to gaze allocation remain under-reported. The present study utilized a within-subjects eye-tracking design to examine both accurate and inaccurate recognition of five basic emotions (anger, disgust, fear, happiness, and sadness) in healthy young adults. Fifty participants (twenty-four women) completed a forced-choice categorization task with 10 stimuli (female/male poser × emotion). A remote eye tracker (60 Hz) recorded fixations mapped to eyes, nose, and mouth areas of interest (AOIs). The analyses combined accuracy and decision-time statistics with heatmap comparisons of misclassified versus accurate trials within the same image. Overall accuracy was 87.8% (439/500). Misclassification patterns depended on the target emotion, but not on participant gender. The male fear expression was most often misclassified (typically as disgust), and the female sadness expression was frequently labeled as fear or disgust; disgust was the most incorrectly attributed response. For accurate trials, decision time showed main effects of emotion (p < 0.001) and participant gender (p = 0.033): happiness was categorized fastest and anger slowest, and women responded faster overall, with particularly fast response times for sadness. The AOI results revealed strong main effects and an AOI × emotion interaction (p < 0.001): eyes received the most fixations, but fear drew relatively more mouth sampling and sadness more nose sampling. Crucially, heatmaps showed an upper-face bias (eye AOI) in inaccurate trials, whereas accurate trials retained eye sampling and added nose and mouth AOI coverage, which aligned with diagnostic cues. These findings indicate that the scanpath strategy, in addition to information availability, underpins success and failure in basic-emotion recognition, with implications for theory, targeted training, and affective technologies.
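Misclassification patterns like those above are typically read off a row-normalized confusion matrix, where rows are target emotions and columns the responses given (a minimal sketch; the labels and trial data below are invented for illustration):

```python
from collections import Counter

def confusion_matrix(true_labels, predicted, labels):
    """Row-normalized confusion matrix for a forced-choice task.

    Off-diagonal mass shows which emotions attract misclassifications.
    """
    counts = Counter(zip(true_labels, predicted))
    matrix = {}
    for t in labels:
        row_total = sum(counts[(t, p)] for p in labels)
        matrix[t] = {p: counts[(t, p)] / row_total if row_total else 0.0
                     for p in labels}
    return matrix

emotions = ["fear", "disgust", "sadness"]
truth = ["fear", "fear", "fear", "disgust", "sadness", "sadness"]
resp  = ["fear", "disgust", "disgust", "disgust", "fear", "sadness"]
cm = confusion_matrix(truth, resp, emotions)
print(cm["fear"]["disgust"])   # rate at which fear faces drew "disgust"
```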

The Effect of Visual Attention Dispersion on Cognitive Response Time.
IF 2.8, CAS Tier 4 (Psychology), Q3 OPHTHALMOLOGY, Pub Date: 2025-10-10, DOI: 10.3390/jemr18050052
Yejin Lee, Kwangtae Jung

In safety-critical systems like nuclear power plants, the rapid and accurate perception of visual interface information is vital. This study investigates the relationship between visual attention dispersion measured via heatmap entropy (as a specific measure of gaze entropy) and response time during information search tasks. Sixteen participants viewed a prototype of an accident response support system and answered questions at three difficulty levels while their eye movements were tracked using Tobii Pro Glasses 2. Results showed a significant positive correlation (r = 0.595, p < 0.01) between heatmap entropy and response time, indicating that more dispersed attention leads to longer task completion times. This pattern held consistently across all difficulty levels. These findings suggest that heatmap entropy is a useful metric for evaluating user attention strategies and can inform interface usability assessments in high-stakes environments.
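The heatmap entropy used above is, in essence, Shannon entropy over a gridded fixation-count heatmap: dispersed gaze spreads fixations across many cells and raises the entropy. A minimal sketch follows; the grid size and normalization are assumptions, and the paper's exact computation may differ.

```python
# Hedged sketch of "heatmap entropy": Shannon entropy (in bits) of the
# fixation distribution over an nx-by-ny grid. Grid resolution is assumed.
import math

def heatmap_entropy(fixations, width, height, nx=8, ny=8):
    """Shannon entropy of fixation counts binned on an nx-by-ny grid."""
    counts = [[0] * nx for _ in range(ny)]
    for x, y in fixations:
        i = min(int(y / height * ny), ny - 1)
        j = min(int(x / width * nx), nx - 1)
        counts[i][j] += 1
    total = sum(map(sum, counts))
    if total == 0:
        return 0.0
    entropy = 0.0
    for row in counts:
        for c in row:
            if c:
                p = c / total
                entropy -= p * math.log2(p)
    return entropy

# A tightly clustered scanpath yields lower entropy than a dispersed one.
clustered = [(10, 10), (12, 11), (11, 9), (13, 12)]
dispersed = [(10, 10), (500, 50), (300, 400), (900, 700)]
print(heatmap_entropy(clustered, 1024, 768))  # 0.0 (all fixations in one cell)
print(heatmap_entropy(dispersed, 1024, 768))  # 2.0 (four equally weighted cells)
```

Under the study's finding (r = 0.595), participants whose gaze produced higher values of this kind of measure also took longer to answer.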

Diagnosing Colour Vision Deficiencies Using Eye Movements (Without Dedicated Eye-Tracking Hardware).
IF 2.8, CAS Tier 4 (Psychology), Q3 OPHTHALMOLOGY, Pub Date: 2025-10-02, DOI: 10.3390/jemr18050051
Aryaman Taore, Gabriel Lobo, Philip R K Turnbull, Steven C Dakin

Purpose: To investigate the efficacy of a novel test for diagnosing colour vision deficiencies using reflexive eye movements measured using an unmodified tablet.

Methods: This study followed a cross-sectional design, where thirty-three participants aged between 17 and 65 years were recruited. The participant group comprised 23 controls, 8 deuteranopes, and 2 protanopes. An anomaloscope was employed to determine the colour vision status of these participants. The study methodology involved using an Apple iPad Pro's built-in eye-tracking capabilities to record eye movements in response to coloured patterns drifting on the screen. Through an automated analysis of these movements, the researchers estimated individuals' red-green equiluminant point and their equivalent luminance contrast.

Results: Estimates of the red-green equiluminant point and the equivalent luminance contrast were used to classify participants' colour vision status with a sensitivity rate of 90.0% and a specificity rate of 91.30%.

Conclusions: The novel colour vision test administered using an unmodified tablet was found to be effective in diagnosing colour vision deficiencies and has the potential to be a practical and cost-effective alternative to traditional methods.

Translational Relevance: The test's objectivity, its straightforward implementation on a standard tablet, and its minimal requirement for patient cooperation all contribute to the wider accessibility of colour vision diagnosis. This is particularly advantageous for demographics like children who might be challenging to engage, but for whom early detection is of paramount importance.
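For context, the reported rates are consistent with the group sizes: with 10 colour-deficient participants (8 deuteranopes + 2 protanopes) and 23 controls, 90.0% sensitivity and 91.30% specificity correspond to 9/10 and 21/23 correct classifications. The true/false counts below are inferred from those figures, not taken from the paper.

```python
# Hedged sketch: sensitivity and specificity from a 2x2 confusion tally.
# The tp/fn/tn/fp counts are inferred from the reported rates and group
# sizes (10 colour-deficient, 23 controls), an assumption on our part.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=9, fn=1, tn=21, fp=2)
print(f"sensitivity={sens:.1%}, specificity={spec:.2%}")
# → sensitivity=90.0%, specificity=91.30%
```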
