{"title":"Comparative Analysis of Explainable Artificial Intelligence for COVID-19 Diagnosis on CXR Image","authors":"Joe Huei Ong, Kam Meng Goh, Li Li Lim","doi":"10.1109/ICSIPA52582.2021.9576766","DOIUrl":null,"url":null,"abstract":"The COVID-19 outbreak brought a huge impact globally. Early studies show that the COVID-19 is manifested in chest X-rays of infected patients. Hence, these studies attract the attention of the computer vision community in integrating X-ray scans and deep-learning-based solutions to aid the diagnosis of COVID-19 infection. However, at present, efforts and information on implementing explainable artificial intelligence in interpreting deep learning model for COVID-19 recognition are scarce and limited. In this paper, we proposed and compared the LIME and SHAP model to enhance the interpretation of COVID diagnosis through X-ray scans. We first applied SqueezeNet to recognise pneumonia, COVID-19, and normal lung image. Through SqueezeNet, an 84.34% recognition rate success in testing accuracy was obtained. To better understand what the network “sees” a specific task, namely, image classification, Shapley Additive Explanation (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) were implemented to expound and interpret how Squeezenet performs classification. Results show that LIME and SHAP can highlight the area of interest where they can help to increase the transparency and the interpretability of the Squeezenet model.","PeriodicalId":326688,"journal":{"name":"2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSIPA52582.2021.9576766","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 6
Abstract
The COVID-19 outbreak has had a huge impact globally. Early studies show that COVID-19 manifests in the chest X-rays of infected patients. These studies have therefore drawn the attention of the computer vision community to integrating X-ray scans with deep-learning-based solutions to aid the diagnosis of COVID-19 infection. However, at present, efforts and information on applying explainable artificial intelligence to interpret deep learning models for COVID-19 recognition are scarce and limited. In this paper, we proposed and compared the LIME and SHAP models to enhance the interpretation of COVID-19 diagnosis from X-ray scans. We first applied SqueezeNet to recognise pneumonia, COVID-19, and normal lung images, obtaining a testing accuracy of 84.34%. To better understand what the network "sees" in a specific task, namely image classification, Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) were implemented to expound and interpret how SqueezeNet performs classification. Results show that LIME and SHAP can highlight the areas of interest, helping to increase the transparency and the interpretability of the SqueezeNet model.
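To make the pipeline described in the abstract concrete, below is a minimal sketch, not the authors' code, of explaining a SqueezeNet chest X-ray classifier with LIME and SHAP. The three-class head, label order, preprocessing, sample file name, and the choice of SHAP's Partition explainer with an inpainting image masker are all assumptions; the paper does not specify these details.

```python
# Hedged sketch: SqueezeNet CXR classification explained with LIME and SHAP.
# Class order, preprocessing, and the 3-class head are assumptions, not the
# authors' published configuration.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from lime import lime_image
import shap

CLASSES = ["covid19", "pneumonia", "normal"]  # assumed label order

# SqueezeNet with its 1x1-conv classifier head replaced for 3 classes.
model = models.squeezenet1_1(weights="IMAGENET1K_V1")
model.classifier[1] = torch.nn.Conv2d(512, len(CLASSES), kernel_size=1)
model.eval()  # in practice, load fine-tuned weights here

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def predict(images: np.ndarray) -> np.ndarray:
    """Batch of HxWx3 arrays -> class probabilities (the API LIME expects)."""
    images = np.clip(images, 0, 255).astype(np.uint8)
    batch = torch.stack([preprocess(Image.fromarray(img)) for img in images])
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

img = np.array(Image.open("cxr_sample.png").convert("RGB"))  # hypothetical file

# LIME: perturb superpixels and fit a local linear surrogate around one image.
lime_explainer = lime_image.LimeImageExplainer()
explanation = lime_explainer.explain_instance(
    img, predict, top_labels=1, num_samples=1000
)
overlay, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)

# SHAP: attribute the prediction to image regions; the model-agnostic
# Partition explainer with an inpainting masker is one of several options.
masker = shap.maskers.Image("inpaint_telea", img.shape)
shap_explainer = shap.Explainer(predict, masker, output_names=CLASSES)
shap_values = shap_explainer(img[np.newaxis], max_evals=500)
shap.image_plot(shap_values)
```

Both tools wrap the same black-box `predict` function: LIME reports which superpixels support the top class, while SHAP assigns signed per-region contributions, which is how the highlighted areas of interest in the paper's results would be produced.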