AI explainability in oculomics: how it works, its role in establishing trust, and what still needs to be addressed.
Songyang An, Kelvin Teo, Michael V McConnell, John Marshall, Christopher Galloway, David Squirrell
Progress in Retinal and Eye Research, article 101352. Published 2025-03-12. DOI: 10.1016/j.preteyeres.2025.101352
Citations: 0
Abstract
Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that are now capable of predicting a range of systemic diseases from retinal images. Unlike traditional retinal disease detection AI models, which are trained on well-recognised retinal biomarkers, systemic disease detection or "oculomics" models use a range of often poorly characterised retinal biomarkers to arrive at their predictions. As the retinal phenotype that oculomics models use may not be intuitive, clinicians have to rely on the developers' explanations of how these algorithms work in order to understand them. The discipline of understanding how AI algorithms work employs two similar but distinct terms: Explainable AI and Interpretable AI (iAI). Explainable AI describes the holistic functioning of an AI system, including its impact and potential biases. Interpretable AI concentrates solely on examining and understanding the workings of the AI algorithm itself. iAI tools are, therefore, what clinicians must rely on if they are to understand how an algorithm works and whether its predictions are reliable. The iAI tools that developers use can be delineated into two broad categories: intrinsic methods, which improve transparency through architectural changes, and post-hoc methods, which explain trained models via external algorithms. Currently, post-hoc methods, class activation maps in particular, are far more widely used than other techniques, but they have their limitations, especially when applied to oculomics AI models. Aimed at clinicians, we examine how the key iAI methods work, what they are designed to do, and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing iAI techniques with novel approaches could allow AI developers to better explain how their oculomics models work and reassure clinicians that the results issued are reliable.
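For readers unfamiliar with class activation maps, the sketch below illustrates the idea using Grad-CAM, one widely used variant, in PyTorch. It is a minimal, hypothetical example: the ResNet-18 backbone, the choice of hooked layer, and the random tensor standing in for a preprocessed fundus photograph are all assumptions made for illustration, not the pipeline of any oculomics model discussed in the review.

```python
# Minimal, illustrative Grad-CAM sketch. Model, layer choice, and input
# are hypothetical placeholders, not any specific oculomics pipeline.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Capture the feature maps produced by the hooked layer.
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Capture the gradient of the target score w.r.t. those feature maps.
    gradients["feat"] = grad_out[0].detach()

# Hooking the last convolutional block; the layer choice is an assumption.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

# Stand-in for a preprocessed retinal image: a (1, 3, 224, 224) tensor.
# Random data keeps the sketch self-contained and runnable.
retinal_image = torch.randn(1, 3, 224, 224)

logits = model(retinal_image)
target_class = logits.argmax(dim=1).item()
logits[0, target_class].backward()

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum across channels, and keep only positive evidence (ReLU).
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=retinal_image.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # scale to [0, 1]
# `cam` can now be overlaid on the input image as a saliency heat map.
```

Note that the resulting heat map shows where the model attended, not why it reached its prediction; this gap is one reason the abstract flags the limitations of such maps when the retinal biomarkers driving an oculomics prediction are poorly characterised.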
Journal Description
Progress in Retinal and Eye Research is a Reviews-only journal. By invitation, leading experts write on basic and clinical aspects of the eye in a style appealing to molecular biologists, neuroscientists and physiologists, as well as to vision researchers and ophthalmologists.
The journal covers all aspects of eye research, including topics pertaining to the retina and pigment epithelial layer, cornea, tears, lacrimal glands, aqueous humour, iris, ciliary body, trabeculum, lens, vitreous humour and diseases such as dry eye, inflammation, keratoconus, corneal dystrophy, glaucoma and cataract.