This study assesses the accuracy and consistency of a commercially available large language model (LLM) in extracting and interpreting sensitivity and reliability data from entire visual field (VF) test reports for the evaluation of glaucomatous defects. Single-page anonymised VF test reports from 60 eyes of 60 subjects were analysed by an LLM (ChatGPT 4o) across four domains: test reliability, defect type, defect severity and overall diagnosis. The main outcome measures were accuracy of data extraction, interpretation of glaucomatous field defects and diagnostic classification. The LLM displayed 100% accuracy in the extraction of global sensitivity and reliability metrics and in classifying test reliability. It also demonstrated high accuracy (96.7%) in diagnosing whether the VF defect was consistent with a healthy, suspect or glaucomatous eye. Accuracy in correctly defining the type of defect was moderate (73.3%) and improved only partially when the model was provided with a more defined region of interest. Incorrect defect-type classifications were mostly attributed to the wrong location, particularly confusion between the superior and inferior hemifields. Numerical/text-based data extraction and interpretation was overall notably superior to image-based interpretation of VF defects. This study demonstrates both the potential and the limitations of multimodal LLMs in processing multimodal medical investigation data such as VF reports.
"Coherent Interpretation of Entire Visual Field Test Reports Using a Multimodal Large Language Model (ChatGPT)." Jeremy C K Tan. Vision (Switzerland) 9(2), published 2025-04-11. doi:10.3390/vision9020033. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015771/pdf/
Thiago Paiva Freire, Geraldo Braz Júnior, João Dallyson Sousa de Almeida, José Ribamar Durand Rodrigues Junior
Glaucoma is a visual disease that affects millions of people, and early diagnosis can prevent total blindness. One way to diagnose the disease is through fundus image examination, which analyzes the optic disc and cup structures. However, screening programs in primary care are costly and often unfeasible. Neural network models have been used to segment optic nerve structures, assisting physicians in this task and reducing fatigue. This work presents a methodology to enhance morphological biomarkers of the optic disc and cup in images obtained by a smartphone coupled to an ophthalmoscope through a deep neural network that combines two backbones and a dual-decoder approach to improve the segmentation of these structures, as well as a new way to combine the loss weights in the training process. The models were evaluated numerically using Dice and IoU measures. On the BrG dataset, the experiments reached Dice scores of 95.92% and 85.30% and IoU scores of 92.22% and 75.68% for the optic disc and cup, respectively. These findings indicate promising architectures for the fundus image segmentation task.
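The Dice and IoU measures reported above are standard overlap metrics for segmentation masks. A minimal sketch of how they are computed on binary masks (the toy 4x4 masks below are invented for illustration, not taken from the BrG dataset):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union: |A∩B| / |A∪B| for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy masks: a predicted cup region vs. a ground-truth cup region.
pred = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
truth = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
print(dice_score(pred, truth))  # 2*3 / (4+3) ≈ 0.857
print(iou_score(pred, truth))   # 3 / 4 = 0.75
```

Dice counts the intersection twice, so on the same pair of masks it is always at least as large as IoU; reporting both, as this paper does, is common practice in segmentation work.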
"Cup and Disc Segmentation in Smartphone Handheld Ophthalmoscope Images with a Composite Backbone and Double Decoder Architecture." Vision (Switzerland) 9(2), published 2025-04-11. doi:10.3390/vision9020032. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015843/pdf/
Anna P Maino, Jakub Klikowski, Brendan Strong, Wahid Ghaffari, Michał Woźniak, Tristan Bourcier, Andrzej Grzybowski
Background/objectives: This paper aims to assess ChatGPT's performance in answering European Board of Ophthalmology Diploma (EBOD) examination papers and to compare these results to pass benchmarks and candidate results.
Methods: This cross-sectional study used a sample of past exam papers from 2012, 2013, 2020-2023 EBOD examinations. This study analyzed ChatGPT's responses to 440 multiple choice questions (MCQs), each containing five true/false statements (2200 statements in total) and 48 single best answer (SBA) questions.
Results: For MCQs, ChatGPT scored 64.39% on average. Its strongest metric performance for MCQs was precision (68.76%). ChatGPT performed best at answering pathology MCQs (Grubbs test p < 0.05), while optics and refraction had the lowest-scoring MCQ performance across all metrics. ChatGPT-3.5 Turbo performed worse than human candidates and ChatGPT-4o on easy questions (75% vs. 100% accuracy) but outperformed both on challenging questions (50% vs. 28% accuracy). ChatGPT's SBA performance averaged 28.43%, with its highest score and strongest performance in precision (29.36%). Pathology SBA questions were consistently the lowest-scoring topic across most metrics. ChatGPT demonstrated a nonsignificant tendency to select option 1 more frequently (p = 0.19). When answering SBAs, human candidates scored higher than ChatGPT in all metric areas measured.
Conclusions: ChatGPT performed better on true/false questions, achieving a pass mark in most instances. Performance was poorer on SBA questions, suggesting that ChatGPT's ability in information retrieval is better than its ability in knowledge integration. ChatGPT could become a valuable tool in ophthalmic education, allowing exam boards to test their exam papers to ensure they are pitched at the right level, marking open-ended questions and providing detailed feedback.
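The per-metric scores in the results (accuracy, precision) follow directly from binary gradings of the true/false statements against the answer key. A minimal sketch with invented gradings, not actual EBOD data:

```python
def accuracy_precision(preds, truths):
    """Accuracy and precision over binary true/false gradings.

    preds, truths: equal-length sequences of booleans (model's answer vs.
    answer key). Precision is the fraction of statements the model marked
    'true' that are actually 'true'.
    """
    tp = sum(p and t for p, t in zip(preds, truths))          # true positives
    fp = sum(p and not t for p, t in zip(preds, truths))      # false positives
    correct = sum(p == t for p, t in zip(preds, truths))
    accuracy = correct / len(preds)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, precision

# Hypothetical gradings for 10 true/false statements.
preds = [True, True, False, True, False, True, False, True, True, False]
truths = [True, False, False, True, True, True, False, False, True, False]
acc, prec = accuracy_precision(preds, truths)
print(acc, prec)  # 0.7 and 4/6 ≈ 0.667
```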
"Artificial Intelligence vs. Human Cognition: A Comparative Analysis of ChatGPT and Candidates Sitting the European Board of Ophthalmology Diploma Examination." Vision (Switzerland) 9(2), published 2025-04-09. doi:10.3390/vision9020031. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015923/pdf/
Ekaterina Pechenkova, Mary Rachinskaya, Varvara Vasilenko, Olesya Blazhenkova, Elena Mershina
The ability to adopt different perspectives, or vantage points, is fundamental to human cognition, affecting reasoning, memory, and imagery. While the first-person perspective allows individuals to experience a scene through their own eyes, the third-person perspective involves an external viewpoint, which is thought to demand greater cognitive effort and different neural processing. Despite the frequent use of perspective switching across various contexts, including modern media and therapeutic settings, the neural mechanisms differentiating these two perspectives in visual imagery remain largely underexplored. In an exploratory fMRI study, we compared both activation and task-based functional connectivity underlying first-person and third-person perspective taking in the same 26 participants performing two spatial egocentric imagery tasks, namely imaginary tennis and house navigation. No significant differences in activation emerged between the first-person and third-person conditions. The network-based statistics analysis revealed a small subnetwork of the early visual and posterior temporal areas that manifested stronger functional connectivity during the first-person perspective, suggesting a closer sensory recruitment loop, or, in different terms, a loop between long-term memory and the "visual buffer" circuits. The absence of a strong neural distinction between the first-person and third-person perspectives suggests that third-person imagery may not fully decenter individuals from the scene, as is often assumed.
"Brain Functional Connectivity During First- and Third-Person Visual Imagery." Vision (Switzerland) 9(2), published 2025-04-06. doi:10.3390/vision9020030. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015856/pdf/
Jonas Jänig, Norman Forschack, Christopher Gundlach, Matthias M Müller
Visuo-spatial attention acts as a filter for the flood of visual information. Until recently, experimental research in this area focused on the neural dynamics of shifting attention in 2D space, leaving attentional shifts in depth less explored. In this study, twenty-three participants were cued to attend to one of two overlapping random-dot kinematograms (RDKs) at different stereoscopic depths in a novel experimental setup. These RDKs flickered at two different frequencies to evoke Steady-State Visual Evoked Potentials (SSVEPs), a neural signature of early visual stimulus processing. Subjects were instructed to detect coherent motion events in the to-be-attended-to plane/RDK. Behavioral data showed that subjects were able to perform the task and selectively respond to events at the cued depth. Event-Related Potentials (ERPs) elicited by these events, namely the Selection Negativity (SN) and the P3b, showed greater amplitudes for coherent motion events in the to-be-attended-to compared to the to-be-ignored plane/RDK, indicating that attention was shifted accordingly. Although our new experimental setting reliably evoked SSVEPs, SSVEP amplitude time courses did not differ between the to-be-attended-to and to-be-ignored stimuli. These results suggest that early visual areas may not optimally represent depth-selective attention, which might rely more on higher processing stages, as suggested by the ERP results.
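The frequency-tagging logic described above reads SSVEP amplitude at each tagged frequency out of the amplitude spectrum of the recorded signal. A minimal sketch on a simulated single channel (the sampling rate, tagging frequencies, and component amplitudes are invented for illustration; a real analysis would average over trials and electrodes):

```python
import numpy as np

# Simulate 10 s of an EEG-like channel sampled at 500 Hz containing two
# frequency-tagged SSVEP components (10 Hz and 12 Hz) buried in noise.
fs, dur = 500, 10.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)
signal = (1.5 * np.sin(2 * np.pi * 10 * t)
          + 0.8 * np.sin(2 * np.pi * 12 * t)
          + rng.normal(0.0, 1.0, t.size))

# Amplitude spectrum. With a 10 s window the frequency resolution is
# 0.1 Hz, so both tagging frequencies fall exactly on FFT bins
# (no spectral leakage) and the sine amplitudes are recovered directly.
amps = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

amp_10 = amps[np.argmin(np.abs(freqs - 10.0))]  # close to the 1.5 used above
amp_12 = amps[np.argmin(np.abs(freqs - 12.0))]  # close to the 0.8 used above
```

In an attention experiment, comparing such amplitudes between attended and ignored stimuli (here, the two RDKs flickering at different rates) is what tests for an attentional modulation of early visual processing.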
"Exploring Attention in Depth: Event-Related and Steady-State Visual Evoked Potentials During Attentional Shifts Between Depth Planes in a Novel Stimulation Setup." Vision (Switzerland) 9(2), published 2025-04-03. doi:10.3390/vision9020028. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015859/pdf/
Varun Padikal, Alex Plonkowski, Penelope F Lawton, Laura K Young, Jenny C A Read
Eye tracking technology plays a crucial role in various fields such as psychology, medical training, marketing, and human-computer interaction. However, achieving high accuracy over a larger field of view in eye tracking systems remains a significant challenge, both in free-viewing and head-stabilized conditions. In this paper, we propose a simple approach to improve the accuracy of video-based eye trackers through the implementation of linear coordinate transformations. This method involves applying stretching, shearing, translation, or their combinations to correct gaze accuracy errors. Our investigation shows that re-calibrating the eye tracker via linear transformations significantly improves the accuracy of video-based trackers over a large field of view.
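The stretching, shearing, and translation corrections described above together form an affine map, which can be fitted by least squares from a handful of calibration points. A minimal sketch of that idea (the calibration grid and the simulated error model below are invented, not the authors' data):

```python
import numpy as np

def fit_affine(raw, target):
    """Least-squares affine correction (stretch, shear, translation)
    mapping raw gaze estimates onto known calibration-target positions.
    raw, target: (N, 2) arrays of screen coordinates, N >= 3 non-collinear."""
    raw = np.asarray(raw, dtype=float)
    A = np.hstack([raw, np.ones((raw.shape[0], 1))])  # rows are [x, y, 1]
    M, *_ = np.linalg.lstsq(A, np.asarray(target, dtype=float), rcond=None)
    return M  # (3, 2) matrix of affine coefficients

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

# Hypothetical calibration grid; the tracker is simulated as reporting
# gaze with a horizontal stretch, vertical compression, and an offset.
target = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], dtype=float)
raw = target * [1.1, 0.9] + [2.0, -1.0]
M = fit_affine(raw, target)
corrected = apply_affine(M, raw)  # recovers the target positions
```

Three non-collinear targets determine the six affine parameters exactly; using more targets spread over a wide field of view makes the fit robust to measurement noise, which is where such re-calibration pays off.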
"Gaze Error Estimation and Linear Transformation to Improve Accuracy of Video-Based Eye Trackers." Vision (Switzerland) 9(2), published 2025-04-03. doi:10.3390/vision9020029. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015841/pdf/
Eduardo Insua Pereira, Madalena Lira, Ana Paula Sampaio
Discomfort is one of the leading causes of contact lens dropout. This study investigated changes in tear film parameters induced by lens wear and their relationship with ocular symptomology. Thirty-four lens wearers (32.9 ± 9.1 years, 7 men) and thirty-three non-lens wearers (29.4 ± 6.8 years, 12 men) participated in this clinical study. Subjects were categorised as asymptomatic (n = 11), moderately symptomatic (n = 15), or severely symptomatic (n = 8). Clinical evaluations were performed in the morning and included blink frequency and completeness, pre-corneal (NIBUT) and pre-lens (PL-NIBUT) non-invasive break-up times, lipid interference patterns, and tear meniscus height. Contact lens wearers had a higher percentage of incomplete blinks (37% vs. 19%, p < 0.001) and reduced tear meniscus height compared to controls (0.24 ± 0.08 vs. 0.28 ± 0.10 mm, p = 0.014). PL-NIBUT was shorter than NIBUT (7.6 ± 6.2 vs. 10.7 ± 9.3 s; p = 0.002). Statistically significant differences between the symptom groups were found in PL-NIBUT (p = 0.01) and NIBUT (p = 0.05), with asymptomatic subjects recording longer times than symptomatic ones. Long-term use of silicone-hydrogel lenses can affect tear stability, production, and adequate distribution through blinking. Ocular symptomology correlates with tear stability parameters in both lens wearers and non-wearers.
"Tear Film Changes and Ocular Symptoms Associated with Soft Contact Lens Wear." Vision (Switzerland) 9(2), published 2025-04-01. doi:10.3390/vision9020027. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015879/pdf/
Jeonghyun Esther Kwon, Christie Kang, Amirhossein Moghtader, Sumaiya Shahjahan, Zahra Bibak Bejandi, Ahmad Alzein, Ali R Djalilian
Persistent corneal epithelial defects (PCEDs) are a challenging ocular condition characterized by the failure of complete corneal epithelial healing after an insult or injury, even after 14 days of standard care. There is a lack of therapeutics that target this condition and encourage re-epithelialization of the corneal surface in a timely and efficient manner. This review aims to provide an overview of current standards of management for PCEDs, highlighting novel, emerging treatments in this field. While many of the current non-surgical treatments aim to provide lubrication and mechanical support, novel non-surgical approaches are under development to harness the proliferative and healing properties of human mesenchymal stem cells, platelets, lufepirsen, hyaluronic acid, thymosin β4, p-derived peptide, and insulin-like growth factor for the treatment of PCEDs. Novel surgical treatments focus on corneal neurotization and limbal cell reconstruction using novel scaffold materials and cell sources. This review provides insights into future PCED treatments that build upon current management guidelines.
"Emerging Treatments for Persistent Corneal Epithelial Defects." Vision (Switzerland) 9(2), published 2025-04-01. doi:10.3390/vision9020026. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015846/pdf/
Vision Science is an area of study that focuses on specific aspects of visual perception and is conducted mainly in the restricted and controlled context of laboratories. In so doing, the methodological procedures adopted necessarily reduce the variables of natural perception. For the time being, it is extremely difficult to perform psychophysical, neurophysiological, and phenomenological experiments in open scenery, even though that is our natural visual experience. This study discusses four points whose status in Vision Science is still controversial: the copresence of distinct visual phenomena of primary and secondary processes in natural vision; the role of visual imagination in seeing; the factors ruling the perception of global ambiguity and of enigmatic and emotional atmosphere in the visual experience of a scene; and, if the phenomena of subjective vision are considered, what kind of new laboratories are available for studying visual perception in open scenery. In the framework of experimental phenomenology and the use of pictorial art as a complement and test for perceptual phenomena, a case study from painting showing the copresence of perceptual and mental visual processes is also discussed and analyzed. This has involved measuring color and light in specific zones of the painting chosen for analysis, relative to visual templates, using Natural Color System notation cards.
"μετὰ τὰ ϕυσικά: Vision Far Beyond Physics." Liliana Albertazzi. Vision (Switzerland) 9(2), published 2025-03-26. doi:10.3390/vision9020025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015877/pdf/
Visual conditions significantly influence fear of movement (FOM), a condition that impairs postural control and quality of life (QOL). This study examined how visual conditions influence sway velocity during repeated one-leg standing tasks and explored the potential relationship between postural control, FOM, and QOL in older adults with and without FOM. Thirty-seven older adults with FOM and 37 controls participated in the study. Postural sway velocity was measured across three repeated trials under different visual (eyes-open and eyes-closed) conditions in both the anteroposterior (AP) and mediolateral (ML) directions. There was a significant group × visual condition interaction (F = 7.43, p = 0.01). In the eyes-closed condition, the FOM group exhibited faster ML sway velocity than the control group, with significant differences across all three trials. There was also a significant interaction between sway direction and vision (F = 27.41, p = 0.001). In addition, within the FOM group, FOM showed strong negative correlations with several QOL measures, including social functioning (r = -0.69, p = 0.001) and role limitations due to emotional problems (r = -0.58, p = 0.001). While FOM influenced sway velocity during balance tasks, visual input emerged as a key determinant of postural control. The FOM group demonstrated a heightened reliance on vision, suggesting an increased need for vision-dependent strategies to maintain balance.
{"title":"Impact of Visual Input and Kinesiophobia on Postural Control and Quality of Life in Older Adults During One-Leg Standing Tasks.","authors":"Paul S Sung, Dongchul Lee","doi":"10.3390/vision9010024","DOIUrl":"10.3390/vision9010024","url":null,"abstract":"<p><p>Visual conditions significantly influence fear of movement (FOM), which is a condition that impairs postural control and quality of life (QOL). This study examined how visual conditions influence sway velocity during repeated one-leg standing tasks and explored the potential relationship between postural control, FOM, and QOL in older adults with and without FOM. Thirty-seven older adults with FOM and 37 controls participated in the study. Postural sway velocity was measured across three repeated trials under visual conditions in both anteroposterior (AP) and mediolateral (ML) directions. The groups demonstrated significant interaction under visual conditions (F = 7.43, <i>p</i> = 0.01). In the eyes-closed condition, the FOM group exhibited faster ML sway velocity than the control group, with significant differences across all three trials. There was a significant interaction between sway direction and vision (F = 27.41, <i>p</i> = 0.001). In addition, the FOM demonstrated strong negative correlations with several QOL measures on social functioning (r = -0.69, <i>p</i> = 0.001) and role limitations due to emotional problems (r = -0.58, <i>p</i> = 0.001) in the FOM group. While FOM influenced sway velocity during balance tasks, visual input emerged as a key determinant of postural control. 
The FOM group demonstrated a heightened reliance on vision, suggesting an increased need for vision-dependent strategies to maintain balance.</p>","PeriodicalId":36586,"journal":{"name":"Vision (Switzerland)","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11946431/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143732151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}