Exploring transparency: A comparative analysis of explainable artificial intelligence techniques in retinography images to support the diagnosis of glaucoma.
Cleverson Vieira, Leonardo Rocha, Marcelo Guimarães, Diego Dias
{"title":"Exploring transparency: A comparative analysis of explainable artificial intelligence techniques in retinography images to support the diagnosis of glaucoma.","authors":"Cleverson Vieira, Leonardo Rocha, Marcelo Guimarães, Diego Dias","doi":"10.1016/j.compbiomed.2024.109556","DOIUrl":null,"url":null,"abstract":"<p><p>Machine learning models are widely applied across diverse fields, including nearly all segments of human activity. In healthcare, artificial intelligence techniques have revolutionized disease diagnosis, particularly in image classification. Although these models have achieved significant results, their lack of explainability has limited widespread adoption in clinical practice. In medical environments, understanding AI model decisions is essential not only for healthcare professionals' trust but also for regulatory compliance, patient safety, and accountability in case of failures. Glaucoma, a neurodegenerative eye disease, can lead to irreversible blindness, making early detection crucial for preventing vision loss. Automated glaucoma detection has been a focus of intensive research in computer vision, with numerous studies proposing the use of convolutional neural networks (CNNs) to analyze retinal fundus images and diagnose the disease automatically. However, these models often lack the necessary explainability, which is essential for ophthalmologists to understand and justify their decisions to patients. This paper explores and applies explainable artificial intelligence (XAI) techniques to different CNN architectures for glaucoma classification, comparing which explanation technique offers the best interpretive resources for clinical diagnosis. We propose a new approach, SCIM (SHAP-CAM Interpretable Mapping), which has shown promising results. The experiments were conducted with an ophthalmology specialist who highlighted that CAM-based interpretability, applied to the VGG16 and VGG19 architectures, stands out as the most effective resource for promoting interpretability and supporting diagnosis.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"185 ","pages":"109556"},"PeriodicalIF":7.0000,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in biology and medicine","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1016/j.compbiomed.2024.109556","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOLOGY","Score":null,"Total":0}
引用次数: 0
Abstract
Machine learning models are widely applied across diverse fields, spanning nearly all segments of human activity. In healthcare, artificial intelligence techniques have revolutionized disease diagnosis, particularly in image classification. Although these models have achieved significant results, their lack of explainability has limited widespread adoption in clinical practice. In medical environments, understanding AI model decisions is essential not only for healthcare professionals' trust but also for regulatory compliance, patient safety, and accountability in case of failures. Glaucoma, a neurodegenerative eye disease, can lead to irreversible blindness, making early detection crucial for preventing vision loss. Automated glaucoma detection has been a focus of intensive research in computer vision, with numerous studies proposing convolutional neural networks (CNNs) to analyze retinal fundus images and diagnose the disease automatically. However, these models often lack the necessary explainability, which is essential for ophthalmologists to understand and justify their decisions to patients. This paper explores and applies explainable artificial intelligence (XAI) techniques to different CNN architectures for glaucoma classification, comparing them to determine which explanation technique offers the best interpretive resources for clinical diagnosis. We propose a new approach, SCIM (SHAP-CAM Interpretable Mapping), which has shown promising results. The experiments were conducted with an ophthalmology specialist, who highlighted that CAM-based interpretability, applied to the VGG16 and VGG19 architectures, stands out as the most effective resource for promoting interpretability and supporting diagnosis.
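The abstract does not specify which CAM variant or implementation the authors used, so as a rough illustration only, the following is a minimal Grad-CAM sketch (a common CAM-style technique) over a torchvision VGG16. The file name fundus.png and the use of ImageNet weights, rather than the paper's glaucoma-fine-tuned classifier, are assumptions made here for demonstration; this is not the paper's SCIM method.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained VGG16; in the paper's setting this would instead be a model
# fine-tuned for glaucoma classification on retinal fundus images.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.eval()

# Capture activations and gradients at the last convolutional layer
# (index 28 in VGG16's feature extractor).
activations, gradients = {}, {}
target_layer = model.features[28]
target_layer.register_forward_hook(
    lambda m, inp, out: activations.update(value=out.detach()))
target_layer.register_full_backward_hook(
    lambda m, gin, gout: gradients.update(value=gout[0].detach()))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = Image.open("fundus.png").convert("RGB")  # hypothetical input image
x = preprocess(img).unsqueeze(0)

# Backpropagate the score of the predicted class.
logits = model(x)
model.zero_grad()
logits[0, logits.argmax()].backward()

# Grad-CAM: weight each activation map by its spatially averaged gradient,
# sum across channels, apply ReLU, and normalize to [0, 1].
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # [1, C, 1, 1]
cam = F.relu((weights * activations["value"]).sum(dim=1))     # [1, H, W]
cam = cam / (cam.max() + 1e-8)

# Upsample to the input resolution for overlay on the fundus image.
heatmap = F.interpolate(cam.unsqueeze(0), size=x.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
```

Overlaid on the fundus image, such a heatmap highlights the regions driving the prediction, which for glaucoma would typically be expected to concentrate around the optic disc and cup.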
Journal overview:
Computers in Biology and Medicine is an international forum for sharing groundbreaking advancements in the use of computers in bioscience and medicine. This journal serves as a medium for communicating essential research, instruction, ideas, and information regarding the rapidly evolving field of computer applications in these domains. By encouraging the exchange of knowledge, we aim to facilitate progress and innovation in the utilization of computers in biology and medicine.