
Vision (Switzerland): Latest Publications

Comparative Analysis of Transformer Architectures and Ensemble Methods for Automated Glaucoma Screening in Fundus Images from Portable Ophthalmoscopes.
IF 1.8 | Q2 Medicine | Pub Date: 2025-11-03 | DOI: 10.3390/vision9040093
Rodrigo Otávio Cantanhede Costa, Pedro Alexandre Ferreira França, Alexandre César Pinto Pessoa, Geraldo Braz Júnior, João Dallyson Sousa de Almeida, António Cunha

Deep learning for glaucoma screening often relies on high-resolution clinical images and convolutional neural networks (CNNs). However, these methods face significant performance drops when applied to noisy, low-resolution images from portable devices. To address this, our work investigates ensemble methods using multiple Transformer architectures for automated glaucoma detection in challenging scenarios. We use the Brazil Glaucoma (BrG) and private D-Eye datasets to assess model robustness. These datasets include images typical of smartphone-coupled ophthalmoscopes, which are often noisy and variable in quality. Four Transformer models (Swin-Tiny, ViT-Base, MobileViT-Small, and DeiT-Base) were trained and evaluated both individually and in ensembles. We evaluated the results at both image and patient levels to reflect clinical practice. The results show that, although performance drops on lower-quality images, ensemble combinations and patient-level aggregation significantly improve accuracy and sensitivity. We achieved up to 85% accuracy and an 84.2% F1-score on the D-Eye dataset, with a notable reduction in false negatives. Grad-CAM attention maps confirmed that the Transformers attend to anatomical regions relevant to diagnosis. These findings reinforce the potential of Transformer ensembles as an accessible solution for early glaucoma detection in populations with limited access to specialized equipment.
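The soft-voting ensemble and patient-level aggregation described in the abstract can be sketched as follows; the function, threshold, and probability values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def ensemble_patient_prediction(image_probs, threshold=0.5):
    """Soft-voting ensemble plus patient-level aggregation (sketch).

    image_probs: (n_models, n_images) predicted glaucoma probabilities
    for all images of a single patient.
    Returns 1 (glaucoma) if the majority of images, after averaging
    the models' probabilities, exceed the threshold; else 0.
    """
    probs = np.asarray(image_probs, dtype=float)
    per_image = probs.mean(axis=0)             # average across models
    votes = per_image >= threshold             # per-image decisions
    return int(votes.sum() * 2 > votes.size)   # majority vote over images

# Example: three models scoring four images of one patient.
patient = [[0.7, 0.6, 0.4, 0.8],
           [0.6, 0.7, 0.3, 0.9],
           [0.8, 0.5, 0.5, 0.7]]
print(ensemble_patient_prediction(patient))  # 3 of 4 images positive -> 1
```

Aggregating over a patient's images in this way is how sensitivity can improve even when individual image-level predictions are noisy.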

Citations: 0
Biases in Perceiving Positive Versus Negative Emotions: The Influence of Social Anxiety and State Affect.
IF 1.8 | Q2 Medicine | Pub Date: 2025-11-01 | DOI: 10.3390/vision9040092
Vivian M Ciaramitaro, Erinda Morina, Jenny L Wu, Daniel A Harris, Sarah A Hayes-Skelton

Models suggest social anxiety is characterized by negative processing biases. Negative biases also arise from negative mood, i.e., state affect. We examined how social anxiety influences emotional processing and whether state affect, or mood, modified the relationship between social anxiety and perceptual bias. We quantified bias by determining the point of subjective equality (PSE), the morph level at which a face is judged equally often as happy and as angry. We found perceptual bias depended on both social anxiety and state affect. PSE was greater in individuals high (mean PSE: 8.69) versus low (mean PSE: 3.04) in social anxiety; the higher PSE indicates a stronger negative bias in high social anxiety. State affect modified this relationship: high social anxiety was associated with stronger negative biases, but only for individuals with greater negative affect. State affect and trait anxiety interacted such that social anxiety status alone was insufficient to fully characterize perceptual biases. This raises several issues, such as the need to consider what constitutes an appropriate control group and the need to account for state affect in social anxiety research. Importantly, our results suggest compensatory effects may counteract the influences of negative mood in individuals low in social anxiety.
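A PSE like the one described above is typically estimated by fitting a psychometric function to the proportion of "happy" responses across a morph continuum; this is a sketch with scipy, and the morph levels and response proportions are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    # Proportion of "happy" responses; the PSE is where it crosses 0.5.
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Illustrative morph levels (% happy in the morph) and observed proportions.
levels = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], dtype=float)
p_happy = np.array([0.02, 0.05, 0.10, 0.20, 0.40, 0.55,
                    0.75, 0.90, 0.95, 0.98, 0.99])

(pse, slope), _ = curve_fit(logistic, levels, p_happy, p0=[50.0, 0.1])
print(f"PSE = {pse:.1f}% happy")  # the morph level judged at chance
```

A higher fitted PSE means more "happy" content is needed before the face is judged happy, i.e., a stronger negative bias.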

Citations: 0
In-Vivo Characterization of Healthy Retinal Pigment Epithelium and Photoreceptor Cells from AO-(T)FI Imaging.
IF 1.8 | Q2 Medicine | Pub Date: 2025-11-01 | DOI: 10.3390/vision9040091
Sohrab Ferdowsi, Leila Sara Eppenberger, Safa Mohanna, Oliver Pfäffli, Christoph Amstutz, Lucas M Bachmann, Michael A Thiel, Martin K Schmid

We provide an automated characterization of human retinal cells, namely retinal pigment epithelium (RPE) cells from non-invasive AO-TFI retinal imaging and photoreceptor (PR) cells from non-invasive AO-FI retinal imaging, in a large-scale study involving 171 confirmed healthy eyes from 104 participants aged 23 to 80 years. Comprehensive standard checkups based on SD-OCT and fundus imaging modalities were carried out by ophthalmologists from the Luzerner Kantonsspital (LUKS) to confirm the absence of retinal pathologies. AO imaging was performed using the Cellularis® device, and each eye was imaged at various retinal eccentricities. The images were automatically segmented using dedicated software; RPE and PR cells were identified, and morphometric characteristics, such as cell density and area, were computed. The results were stratified by various criteria, such as age, retinal eccentricity, and visual acuity. The automatic segmentation was validated independently on a held-out set by five trained medical students not involved in this study. We plotted cell density as a function of eccentricity from the fovea along both nasal and temporal directions. For RPE cells, no consistent trend in density was observed between 0° and 9° eccentricity, contrasting with established histological literature demonstrating foveal density peaks. In contrast, PR cell density showed a clear decrease from 2.5° to 9°. RPE cell density declined linearly with age, whereas no age-related pattern was detected for PR cell density. On average, RPE cell density was ≈6313 cells/mm² (±σ=757), while average PR cell density was ≈10,207 cells/mm² (±σ=1273).
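The morphometric quantities reported above reduce to simple arithmetic on the segmentation output; a minimal sketch, with the cell count and patch area chosen purely for illustration.

```python
import numpy as np

def cell_density(cell_areas_um2, field_area_um2):
    """Density (cells/mm^2) and mean cell area (um^2) of a segmented patch."""
    n_cells = len(cell_areas_um2)
    field_mm2 = field_area_um2 / 1e6  # 1 mm^2 = 1e6 um^2
    return n_cells / field_mm2, float(np.mean(cell_areas_um2))

# Example: 200 segmented RPE cells in a 31,700 um^2 imaged field.
areas = np.full(200, 158.0)  # assumed per-cell areas, um^2
density, mean_area = cell_density(areas, field_area_um2=31700.0)
print(f"{density:.0f} cells/mm^2")  # ~6309, near the RPE average above
```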

Citations: 0
Birefringence of the Human Cornea: A Review.
IF 1.8 | Q2 Medicine | Pub Date: 2025-10-28 | DOI: 10.3390/vision9040090
Sudi Patel, Larysa Tutchenko, Igor Dmytruk

Background: This paper aims to provide an overview of corneal birefringence (CB), systematize the knowledge and current understanding of CB, and identify difficulties associated with introducing CB into mainstream clinical practice.

Methods: Literature reviews were conducted, seeking articles focused on CB published between the early 19th century and the present time. Secondary-level searches were made examining relevant publications referred to in primary-level publications, ranging back to the early 17th century. The key search words were "corneal birefringence" and "non-invasive measurements".

Results: CB was first recorded by Brewster in 1815. Orthogonally polarized rays travel at different speeds through the cornea, creating a slow axis and a fast axis. The slow axis aligns with the pattern of most corneal stromal collagen fibrils. In vivo, it is oriented along the superior temporal-inferior nasal direction at an angle of about 25° (with an approximate range of -54° to 90°) from the horizontal. CB has been reported to (i) influence the estimation of retinal nerve fiber layer thickness; (ii) be affected by corneal interventions; (iii) be altered in keratoconus; (iv) vary along the depth of the cornea; and (v) be affected by intra-ocular pressure.
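The slow-axis/fast-axis description above corresponds to a phase retardation δ = 2πΔn·d/λ between the two orthogonally polarized rays, where Δn is the birefringence and d the path length. A small numeric sketch; the Δn value is an assumed illustration, not a measured corneal constant.

```python
import math

def retardance_nm(delta_n, thickness_nm):
    # Optical path difference accumulated between slow and fast rays.
    return delta_n * thickness_nm

def phase_shift_rad(delta_n, thickness_nm, wavelength_nm):
    # delta = 2*pi * OPD / lambda
    return 2.0 * math.pi * retardance_nm(delta_n, thickness_nm) / wavelength_nm

dn = 2.0e-4    # assumed birefringence magnitude (illustrative)
d_nm = 550e3   # ~550 um central corneal thickness, in nm
lam = 589.0    # sodium D line, nm
print(retardance_nm(dn, d_nm))         # ≈110 nm of retardance
print(phase_shift_rad(dn, d_nm, lam))  # ≈1.17 rad at 589 nm
```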

Conclusions: Under precisely controlled conditions, capturing the CB pattern is the first step in a non-destructive process used to model the ultra-fine structure of the individual cornea, and changes thereof, in vivo.

Citations: 0
Prevalence of Keratoconus and Associated Risk Factors Among High School Students in Couva, Trinidad: A Cross-Sectional Study.
IF 1.8 | Q2 Medicine | Pub Date: 2025-10-20 | DOI: 10.3390/vision9040089
Ngozika Esther Ezinne, Shinead Phagoo, Ameera Roopnarinesingh, Michael Agyemang Kwarteng

Purpose: This study aimed to determine the prevalence and associated risk factors of keratoconus (KC) among high school students in Couva, Trinidad and Tobago. Method: A cross-sectional, school-based approach was used, with simple random sampling to select schools and students. A structured questionnaire assessed KC risk factors, and clinical assessments, including visual acuity, refraction, slit lamp biomicroscopy, and topography, were performed. Data were analyzed in R: exact tests were used for KC (n = 2 cases), and robust Poisson regression estimated adjusted prevalence ratios for the 'at-risk' screening endpoint. Results: A total of 432 students aged 12-17 years participated, with a response rate of 97.5%. Most participants were of East Indian descent (48.1%), female (52.1%), and 14 years old (23.1%). Approximately 47.7% (95% CI 43.0-52.5%) were at risk of KC, with 0.5% (2/432; exact 95% CI 0.06-1.67%) diagnosed with the condition. The most common risk factors were eye rubbing (87.4%), over eight hours of sun exposure weekly (71.8%), and atopy (68.4%). KC was significantly more frequent among students with a family history (p = 0.018). Conclusions: The study highlights a low prevalence of diagnosed KC but a high proportion of at-risk students, with a strong link to family history and common risk factors such as eye rubbing and sun exposure. These findings emphasize the urgent need for regular KC screening in schools to ensure early diagnosis and effective management.
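The exact 95% CI of 0.06-1.67% quoted above for 2/432 cases is a Clopper-Pearson interval, which can be reproduced from the beta distribution; this sketch with scipy reconstructs the calculation and is not the authors' R script.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# 2 keratoconus cases among 432 students, as reported above.
lo, hi = clopper_pearson(2, 432)
print(f"{100 * lo:.2f}% to {100 * hi:.2f}%")  # matches the ~0.06-1.67% interval
```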

Citations: 0
Comparing Visual Search Efficiency Across Different Facial Characteristics.
IF 1.8 | Q2 Medicine | Pub Date: 2025-10-15 | DOI: 10.3390/vision9040088
Navdeep Kaur, Isabella Hooge, Andrea Albonico

Face recognition is an important skill that helps people make social judgments by identifying both who a person is and other characteristics such as their expression, age, and ethnicity. Previous models of face processing, such as those proposed by Bruce and Young and by Haxby and colleagues, suggest that identity and other facial features are processed through partly independent systems. This study aimed to compare the efficiency with which different facial characteristics are processed in a visual search task. Participants viewed arrays of two, four, or six faces and judged whether one face differed from the others. Four tasks were created, focusing separately on identity, expression, ethnicity, and gender. We found that search times were significantly longer when searching for identity and shorter when searching for ethnicity. Significant correlations were found among almost all tasks in all outcome variables. Comparison of target-present and target-absent trials suggested that performance in none of the tasks follows a serial, self-terminating search model. These results suggest that different facial characteristics share early processing but differentiate into independent recognition mechanisms at a later stage.
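Search efficiency in tasks like these is conventionally indexed by the slope of reaction time against set size, and a serial self-terminating search predicts an absent:present slope ratio near 2. A sketch of that analysis; the RT values are illustrative, not the study's data.

```python
import numpy as np

def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT against set size, in ms/item."""
    return float(np.polyfit(set_sizes, mean_rts, 1)[0])

set_sizes = [2, 4, 6]            # array sizes used in the study
rt_present = [620, 680, 740]     # illustrative mean RTs, ms
rt_absent = [650, 770, 890]

slope_present = search_slope(set_sizes, rt_present)  # 30 ms/item
slope_absent = search_slope(set_sizes, rt_absent)    # 60 ms/item
# A serial self-terminating search predicts a ratio near 2; large
# departures from 2 argue against that model.
print(slope_absent / slope_present)
```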

Citations: 0
Development of a Neural Network to Predict Optimal IOP Reduction in Glaucoma Management.
IF 1.8 | Q2 Medicine | Pub Date: 2025-10-15 | DOI: 10.3390/vision9040087
Raheem Remtulla, Sidrat Rahman, Hady Saheb

Glaucoma management relies on lowering intraocular pressure (IOP), but determining the target reduction at presentation is challenging, particularly in normal-tension glaucoma (NTG). We developed and internally validated a neural network regression model using retrospective clinical data from Qiu et al. (2015), including 270 patients (118 with NTG). A single-layer artificial neural network with five nodes was trained in MATLAB R2024b using the Levenberg-Marquardt algorithm. Inputs included demographic, refractive, structural, and functional parameters, with IOP reduction as the output. Data were split into 65% training, 15% validation, and 20% testing, with training repeated 10 times. Model performance was strong and consistent (average RMSE: 1.90 ± 0.29 training, 2.18 ± 0.34 validation, 2.11 ± 0.30 testing; Pearson's r: 0.92 ± 0.02, 0.88 ± 0.02, 0.88 ± 0.04). The best-performing model achieved RMSEs of 1.57, 2.90, and 1.77 with r values of 0.93, 0.91, and 0.93, respectively. Feature ablation revealed significant contributions from IOP, axial length, CCT, diagnosis, VCDR, spherical equivalent, mean deviation, and laterality. This study demonstrates that a simple neural network can reliably predict individualized IOP reduction targets, supporting personalized glaucoma management and improved outcomes.
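The paper trains a five-node, single-hidden-layer network with Levenberg-Marquardt in MATLAB; the sketch below mirrors only the architecture and the RMSE evaluation in scikit-learn (which uses Adam, not Levenberg-Marquardt) on synthetic stand-in data, so the numbers are not comparable to the paper's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Synthetic stand-ins for the clinical inputs (IOP, axial length, CCT, ...).
X = rng.normal(size=(270, 8))
# Synthetic target: IOP reduction as a noisy linear mix of the inputs.
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=270)

# A 65/35 split stands in for the paper's 65/15/20 partition.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.35, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
rmse = float(np.sqrt(mean_squared_error(y_te, net.predict(X_te))))
print(f"test RMSE: {rmse:.2f}")
```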

Citations: 0
Clinical Assessment of a Virtual Reality Perimeter Versus the Humphrey Field Analyzer: Comparative Reliability, Usability, and Prospective Applications.
IF 1.8 Q2 Medicine Pub Date : 2025-10-11 DOI: 10.3390/vision9040086
Marco Zeppieri, Caterina Gagliano, Francesco Cappellani, Federico Visalli, Fabiana D'Esposito, Alessandro Avitabile, Roberta Amato, Alessandra Cuna, Francesco Pellegrini

Background: This study compared the performance of a Head-mounted Virtual Reality Perimeter (HVRP) with the Humphrey Field Analyzer (HFA), the established standard in automated perimetry, which is constrained by lengthy testing, bulky equipment, and limited patient comfort. Comparative data on newer head-mounted virtual reality perimeters are limited, leaving uncertainty about their clinical reliability and potential advantages. Aim: The aim was to evaluate parameters such as visual field outcomes, portability, patient comfort, eye tracking, and usability. Methods: Participants underwent testing with both devices, assessing metrics such as mean deviation (MD), pattern standard deviation (PSD), and test duration. Results: The HVRP demonstrated small but statistically significant differences in MD and PSD compared with the HFA, while maintaining a consistent trend across participants. MD values were slightly more negative for the HFA than the HVRP (average difference -0.60 dB, p = 0.0006), while PSD was marginally higher with the HFA (average difference 0.38 dB, p = 0.00018). Although statistically significant, these differences were small in magnitude and do not undermine the clinical utility or reproducibility of the device. Notably, the HVRP showed markedly shorter testing times (7.15 vs. 18.11 min, mean difference 10.96 min, p < 0.0001). Its lightweight, portable design allowed for bedside and home testing, enhancing accessibility for pediatric, geriatric, and mobility-impaired patients. Participants reported greater comfort due to the headset design, which eliminated the need for chin rests. The device also offers potential for AI integration and remote data analysis. Conclusions: The HVRP proved to be a reliable, user-friendly alternative to traditional perimetry. Its advantages in comfort, portability, and test efficiency support broader use in clinical practice and expand possibilities for bedside assessment, home monitoring, and remote screening, particularly in populations with limited access to conventional perimetry.

{"title":"Clinical Assessment of a Virtual Reality Perimeter Versus the Humphrey Field Analyzer: Comparative Reliability, Usability, and Prospective Applications.","authors":"Marco Zeppieri, Caterina Gagliano, Francesco Cappellani, Federico Visalli, Fabiana D'Esposito, Alessandro Avitabile, Roberta Amato, Alessandra Cuna, Francesco Pellegrini","doi":"10.3390/vision9040086","DOIUrl":"10.3390/vision9040086","url":null,"abstract":"<p><p><i>Background:</i> This study compared the performance of a Head-mounted Virtual Reality Perimeter (HVRP) with the Humphrey Field Analyzer (HFA), the established standard in automated perimetry, which is constrained by lengthy testing, bulky equipment, and limited patient comfort. Comparative data on newer head-mounted virtual reality perimeters are limited, leaving uncertainty about their clinical reliability and potential advantages. <i>Aim:</i> The aim was to evaluate parameters such as visual field outcomes, portability, patient comfort, eye tracking, and usability. <i>Methods:</i> Participants underwent testing with both devices, assessing metrics like mean deviation (MD), pattern standard deviation (PSD), and duration. <i>Results:</i> The HVRP demonstrated small but statistically significant differences in MD and PSD compared to the HFA, while maintaining a consistent trend across participants. MD values were slightly more negative for HFA than HVRP (average difference -0.60 dB, <i>p</i> = 0.0006), while pattern standard deviation was marginally higher with HFA (average difference 0.38 dB, <i>p</i> = 0.00018). Although statistically significant, these differences were small in magnitude and do not undermine the clinical utility or reproducibility of the device. Notably, the HVRP showed markedly shorter testing times (7.15 vs. 18.11 min, mean difference 10.96 min, <i>p</i> < 0.0001). 
Its lightweight, portable design allowed for bedside and home testing, enhancing accessibility for pediatric, geriatric, and mobility-impaired patients. Participants reported greater comfort due to the headset design, which eliminated the need for chin rests. The device also offers potential for AI integration and remote data analysis. <i>Conclusions:</i> The HVRP proved to be a reliable, user-friendly alternative to traditional perimetry. Its advantages in comfort, portability, and test efficiency support broader use in clinical practice and expand possibilities for bedside assessment, home monitoring, and remote screening, particularly in populations with limited access to conventional perimetry.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"9 4","pages":""},"PeriodicalIF":1.8,"publicationDate":"2025-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12550896/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
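The MD comparison above (average difference -0.60 dB, p = 0.0006) is a paired-device analysis: each eye is measured on both perimeters and the per-eye differences are tested against zero. A minimal sketch of that computation, mean paired difference plus the paired t statistic, on hypothetical MD values (a library routine such as scipy's `ttest_rel` would also supply the p-value):

```python
import math

def paired_t(a, b):
    """Mean paired difference and paired t statistic for matched samples."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean, mean / math.sqrt(var / n)

# Hypothetical MD values (dB) from the same eyes on the two perimeters
hfa = [-2.0, -3.5, -1.0, -5.25, -2.75]
hvrp = [-1.5, -3.0, -0.5, -4.75, -2.5]

mean_diff, t_stat = paired_t(hfa, hvrp)
print(round(mean_diff, 2), round(t_stat, 2))
```

Here the hypothetical HFA readings are consistently more negative, mirroring the direction reported in the abstract.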
Comparative Assessment of Large Language Models in Optics and Refractive Surgery: Performance on Multiple-Choice Questions.
IF 1.8 Q2 Medicine Pub Date : 2025-10-09 DOI: 10.3390/vision9040085
Leah Attal, Elad Shvartz, Alon Gorenshtein, Shirley Pincovich, Daniel Bahir

This study evaluated the performance of seven advanced large language models (LLMs): ChatGPT 4o, ChatGPT O3 Mini, ChatGPT O1, DeepSeek V3, DeepSeek R1, Gemini 2.0 Flash, and Grok-3, in answering multiple-choice questions (MCQs) in optics and refractive surgery, and assessed their potential role in medical education for residents. The models were tested on 134 publicly available MCQs from national ophthalmology certification exams, categorized by the need to perform calculations, the relevant subspecialty, and the use of images. Accuracy was analyzed and compared statistically. ChatGPT O1 achieved the highest overall accuracy (83.5%), excelling in complex optical calculations (84.1%) and optics questions (82.4%). DeepSeek V3 displayed superior accuracy in refractive surgery-related questions (89.7%), followed by ChatGPT O3 Mini (88.4%). ChatGPT O3 Mini significantly outperformed the others in image analysis, with 88.2% accuracy. Moreover, ChatGPT O1 demonstrated comparable accuracy for calculated and non-calculated questions (84.1% vs. 83.3%), in stark contrast to the other models, which showed significant discrepancies between the two. The findings highlight the ability of LLMs to achieve high accuracy in ophthalmology MCQs, particularly in complex optical calculations and visual items, suggesting potential applications in exam preparation and medical training. Future studies should use larger, multilingual datasets to confirm these preliminary findings and directly evaluate the models' role and impact in medical education.

{"title":"Comparative Assessment of Large Language Models in Optics and Refractive Surgery: Performance on Multiple-Choice Questions.","authors":"Leah Attal, Elad Shvartz, Alon Gorenshtein, Shirley Pincovich, Daniel Bahir","doi":"10.3390/vision9040085","DOIUrl":"10.3390/vision9040085","url":null,"abstract":"<p><p>This study aimed to evaluate the performance of seven advanced AI Large Language Models (LLMs)-ChatGPT 4o, ChatGPT O3 Mini, ChatGPT O1, DeepSeek V3, DeepSeek R1, Gemini 2.0 Flash, and Grok-3-in answering multiple-choice questions (MCQs) in optics and refractive surgery, to assess their role in medical education for residents. The AI models were tested using 134 publicly available MCQs from national ophthalmology certification exams, categorized by the need to perform calculations, the relevant subspecialty, and the use of images. Accuracy was analyzed and compared statistically. ChatGPT O1 achieved the highest overall accuracy (83.5%), excelling in complex optical calculations (84.1%) and optics questions (82.4%). DeepSeek V3 displayed superior accuracy in refractive surgery-related questions (89.7%), followed by ChatGPT O3 Mini (88.4%). ChatGPT O3 Mini significantly outperformed others in image analysis, with 88.2% accuracy. Moreover, ChatGPT O1 demonstrated comparable accuracy rates for both calculated and non-calculated questions (84.1% vs. 83.3%). This is in stark contrast to other models, which exhibited significant discrepancies in accuracy for calculated and non-calculated questions. The findings highlight the ability of LLMs to achieve high accuracy in ophthalmology MCQs, particularly in complex optical calculations and visual items. These results suggest potential applications in exam preparation and medical training contexts, while underscoring the need for future studies designed to directly evaluate their role and impact in medical education. 
Future studies should utilize larger, multilingual datasets to confirm and extend these preliminary findings.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"9 4","pages":""},"PeriodicalIF":1.8,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12550897/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
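The evaluation above tabulates accuracy separately for each question category (calculation vs. non-calculation, subspecialty, image-based). A minimal sketch of that bookkeeping, with hypothetical category names and graded answers rather than the study's actual data:

```python
from collections import defaultdict

def accuracy_by_category(results):
    """results: iterable of (category, correct) pairs -> {category: accuracy}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for category, correct in results:
        totals[category] += 1
        hits[category] += int(correct)
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical grading of one model's answers by question type
graded = [
    ("calculation", True), ("calculation", True), ("calculation", False),
    ("image", True), ("image", True),
    ("refractive", True), ("refractive", False),
]
by_cat = accuracy_by_category(graded)
print(by_cat)
```

The same per-category breakdown, applied to each model in turn, yields the comparison table the abstract summarizes.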
Myopia Prediction Using Machine Learning: An External Validation Study.
IF 1.8 Q2 Medicine Pub Date : 2025-10-09 DOI: 10.3390/vision9040084
Rajat S Chandra, Bole Ying, Jianyong Wang, Hongguang Cui, Guishuang Ying, Julius T Oatts

We previously developed machine learning (ML) models for predicting cycloplegic spherical equivalent refraction (SER) and myopia using non-cycloplegic data and following a standardized protocol (cycloplegia with 0.5% tropicamide and biometry using NIDEK A-scan), but the models' performance may not be generalizable to other settings. This study evaluated the performance of ML models in an independent cohort using a different cycloplegic agent and biometer. Chinese students (N = 614) aged 8-13 years underwent autorefraction before and after cycloplegia with 0.5% tropicamide (n = 505) or 1% cyclopentolate (n = 109). Biometric measures were obtained using an IOLMaster 700 (n = 207) or Optical Biometer SW-9000 (n = 407). ML models were evaluated using R2, mean absolute error (MAE), sensitivity, specificity, and area under the ROC curve (AUC). The XGBoost model predicted cycloplegic SER very well (R2 = 0.95, MAE (SD) = 0.32 (0.30) D). Both ML models predicted myopia well (random forest: AUC 0.99, sensitivity 93.7%, specificity 96.4%; XGBoost: sensitivity 90.1%, specificity 96.8%) and accurately predicted the myopia rate (observed 62.9%; random forest: 60.6%; XGBoost: 58.8%) despite heterogeneous cycloplegia and biometry factors. In this independent cohort of students, XGBoost and random forest performed very well for predicting cycloplegic SER and myopia status using non-cycloplegic data. This external validation study demonstrated that ML may provide a useful tool for estimating cycloplegic SER and myopia prevalence with heterogeneous clinical parameters, and study in additional populations is warranted.

{"title":"Myopia Prediction Using Machine Learning: An External Validation Study.","authors":"Rajat S Chandra, Bole Ying, Jianyong Wang, Hongguang Cui, Guishuang Ying, Julius T Oatts","doi":"10.3390/vision9040084","DOIUrl":"10.3390/vision9040084","url":null,"abstract":"<p><p>We previously developed machine learning (ML) models for predicting cycloplegic spherical equivalent refraction (SER) and myopia using non-cycloplegic data and following a standardized protocol (cycloplegia with 0.5% tropicamide and biometry using NIDEK A-scan), but the models' performance may not be generalizable to other settings. This study evaluated the performance of ML models in an independent cohort using a different cycloplegic agent and biometer. Chinese students (N = 614) aged 8-13 years underwent autorefraction before and after cycloplegia with 0.5% tropicamide (<i>n</i> = 505) or 1% cyclopentolate (<i>n</i> = 109). Biometric measures were obtained using an IOLMaster 700 (<i>n</i> = 207) or Optical Biometer SW-9000 (<i>n</i> = 407). ML models were evaluated using R<sup>2</sup>, mean absolute error (MAE), sensitivity, specificity, and area under the ROC curve (AUC). The XGBoost model predicted cycloplegic SER very well (R<sup>2</sup> = 0.95, MAE (SD) = 0.32 (0.30) D). Both ML models predicted myopia well (random forest: AUC 0.99, sensitivity 93.7%, specificity 96.4%; XGBoost: sensitivity 90.1%, specificity 96.8%) and accurately predicted the myopia rate (observed 62.9%; random forest: 60.6%; XGBoost: 58.8%) despite heterogeneous cycloplegia and biometry factors. In this independent cohort of students, XGBoost and random forest performed very well for predicting cycloplegic SER and myopia status using non-cycloplegic data. 
This external validation study demonstrated that ML may provide a useful tool for estimating cycloplegic SER and myopia prevalence with heterogeneous clinical parameters, and study in additional populations is warranted.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"9 4","pages":""},"PeriodicalIF":1.8,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12551012/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145356246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
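The sensitivity, specificity, and predicted myopia rate reported above all derive from a 2x2 confusion matrix of true vs. predicted myopia status. A minimal sketch of those calculations on hypothetical binary labels (not the study's data or its XGBoost/random forest models):

```python
def screening_metrics(y_true, y_pred):
    """Sensitivity, specificity, and predicted positive rate for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "predicted_rate": (tp + fp) / len(y_true),
    }

# Hypothetical myopia labels (1 = myopic) vs. model predictions
truth = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
preds = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
metrics = screening_metrics(truth, preds)
print(metrics)
```

Comparing `predicted_rate` with the observed positive rate, as the abstract does (58.8-60.6% predicted vs. 62.9% observed), checks whether the model reproduces population-level prevalence, not just individual classifications.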