Generalizable and explainable deep learning for medical image computing: An overview
Ahmad Chaddad, Yan Hu, Yihang Wu, Binbin Wen, Reem Kateb
Current Opinion in Biomedical Engineering, Volume 33, Article 100567 (2024). DOI: 10.1016/j.cobme.2024.100567
Abstract
Objective
This paper presents an overview of generalizable deep learning (DL) and explainable artificial intelligence (XAI) for medical imaging, addressing the urgent need for transparency and explainability in clinical applications.
Methodology
We evaluate four CNNs on three medical datasets (brain tumor, skin cancer, and chest X-ray) for medical image classification. We then combine ResNet50 with five common XAI techniques to produce explainable model predictions and improve model transparency, and we use a quantitative metric (confidence increase) to evaluate the usefulness of each XAI technique.
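As a rough illustration of how such an XAI pipeline is typically assembled (the paper itself does not include code), the sketch below applies XGradCAM to an ImageNet-pretrained ResNet50 using the open-source pytorch-grad-cam package; the target layer, class index, and dummy input are placeholder assumptions rather than the authors' configuration.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights
from pytorch_grad_cam import XGradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

# A pretrained ResNet50 stands in for a model fine-tuned on a medical dataset.
model = resnet50(weights=ResNet50_Weights.DEFAULT)
model.eval()

# The last convolutional stage is a common target-layer choice for ResNet50.
target_layers = [model.layer4[-1]]

# A normalized batch of shape (N, 3, H, W); a random tensor stands in for a
# preprocessed medical image.
input_tensor = torch.randn(1, 3, 224, 224)

cam = XGradCAM(model=model, target_layers=target_layers)
# Explain the class at index 0 (placeholder; use the predicted class in practice).
grayscale_cam = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(0)])
heatmap = grayscale_cam[0]  # (H, W) saliency map scaled to [0, 1]
```

The same constructor pattern works for the package's GradCAMPlusPlus, EigenGradCAM, and LayerCAM classes, so the five XAI techniques can be swapped in behind a single interface.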
Key findings
The experimental results indicate that ResNet50 achieves acceptable accuracy and F1 scores on all datasets (e.g., 86.31% accuracy on skin cancer). Furthermore, the findings show that while certain XAI methods, such as eXplanation with Gradient-weighted Class activation mapping (XgradCAM), effectively highlight relevant abnormal regions in medical images, others, such as EigenGradCAM, may perform less effectively in specific scenarios. In addition, XgradCAM yields a higher confidence increase (e.g., 0.12 for glioma) than GradCAM++ (0.09) and LayerCAM (0.08).
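The confidence-increase values above compare the model's confidence before and after the input is restricted to the CAM-highlighted regions. The paper's exact formulation is not reproduced here; the sketch below implements one common variant (multiplying the input by the saliency map and measuring the change in the target-class softmax score), reusing `model`, `input_tensor`, and `heatmap` from the previous sketch.

```python
import torch

def confidence_increase(model, input_tensor, grayscale_cam, class_idx):
    """Change in target-class softmax score when the input is masked by the
    CAM. Positive values mean the highlighted regions alone raise the model's
    confidence. Assumes a single-image batch and an (H, W) saliency map."""
    model.eval()
    with torch.no_grad():
        original = torch.softmax(model(input_tensor), dim=1)[0, class_idx]

        # Keep only the regions the CAM marks as relevant.
        cam = torch.as_tensor(grayscale_cam, dtype=input_tensor.dtype)
        masked_input = input_tensor * cam.unsqueeze(0).unsqueeze(0)

        masked = torch.softmax(model(masked_input), dim=1)[0, class_idx]
    return (masked - original).item()

# Example usage: score = confidence_increase(model, input_tensor, heatmap, 0)
```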
Implications
Based on the experimental results and recent advancements, we outline future research directions to enhance the generalizability of DL models in the field of biomedical imaging.