Comparative Analysis of Explainable Artificial Intelligence for COVID-19 Diagnosis on CXR Image

Joe Huei Ong, Kam Meng Goh, Li Li Lim
{"title":"Comparative Analysis of Explainable Artificial Intelligence for COVID-19 Diagnosis on CXR Image","authors":"Joe Huei Ong, Kam Meng Goh, Li Li Lim","doi":"10.1109/ICSIPA52582.2021.9576766","DOIUrl":null,"url":null,"abstract":"The COVID-19 outbreak brought a huge impact globally. Early studies show that the COVID-19 is manifested in chest X-rays of infected patients. Hence, these studies attract the attention of the computer vision community in integrating X-ray scans and deep-learning-based solutions to aid the diagnosis of COVID-19 infection. However, at present, efforts and information on implementing explainable artificial intelligence in interpreting deep learning model for COVID-19 recognition are scarce and limited. In this paper, we proposed and compared the LIME and SHAP model to enhance the interpretation of COVID diagnosis through X-ray scans. We first applied SqueezeNet to recognise pneumonia, COVID-19, and normal lung image. Through SqueezeNet, an 84.34% recognition rate success in testing accuracy was obtained. To better understand what the network “sees” a specific task, namely, image classification, Shapley Additive Explanation (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) were implemented to expound and interpret how Squeezenet performs classification. 
Results show that LIME and SHAP can highlight the area of interest where they can help to increase the transparency and the interpretability of the Squeezenet model.","PeriodicalId":326688,"journal":{"name":"2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSIPA52582.2021.9576766","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 6

Abstract

The COVID-19 outbreak has had a huge impact globally. Early studies show that COVID-19 manifests in the chest X-rays of infected patients. These findings have drawn the attention of the computer vision community to combining X-ray scans with deep-learning-based solutions to aid the diagnosis of COVID-19 infection. At present, however, work on applying explainable artificial intelligence to interpret deep learning models for COVID-19 recognition is scarce and limited. In this paper, we apply and compare the LIME and SHAP methods to make COVID-19 diagnosis from X-ray scans more interpretable. We first applied SqueezeNet to classify lung images as pneumonia, COVID-19, or normal, obtaining a testing accuracy of 84.34%. To better understand what the network "sees" in a specific task, namely image classification, Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) were implemented to expound and interpret how SqueezeNet performs classification. Results show that LIME and SHAP can highlight the areas of interest, helping to increase the transparency and interpretability of the SqueezeNet model.
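The pipeline the abstract describes, a CNN classifier followed by a perturbation-based explainer, can be illustrated with a from-scratch sketch of LIME's core idea. Note the assumptions: the paper applies the actual LIME and SHAP libraries to SqueezeNet, whereas the grid segmentation, toy classifier, and plain least-squares surrogate below are simplifications for illustration only.

```python
import numpy as np

def lime_image_sketch(image, predict_fn, grid=4, n_samples=200, seed=0):
    """Toy LIME for a 2-D image: split it into grid x grid superpixels,
    randomly switch subsets off (mask to zero), query the model on each
    perturbed image, and fit a linear surrogate to the model's outputs.
    The surrogate's coefficients approximate per-region importance."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    cell_h, cell_w = h // grid, w // grid
    # each row of Z is one random on/off pattern over the grid cells
    Z = rng.integers(0, 2, size=(n_samples, grid * grid))
    preds = np.empty(n_samples)
    for i, z in enumerate(Z):
        masked = image.copy()
        for c, keep in enumerate(z):
            if not keep:
                r, q = divmod(c, grid)
                masked[r * cell_h:(r + 1) * cell_h,
                       q * cell_w:(q + 1) * cell_w] = 0
        preds[i] = predict_fn(masked)
    # least-squares linear surrogate (with intercept); the per-cell
    # coefficients are the explanation
    coef, *_ = np.linalg.lstsq(np.c_[Z, np.ones(n_samples)],
                               preds, rcond=None)
    return coef[:-1].reshape(grid, grid)

# hypothetical "classifier": its score depends only on the brightness
# of the top-left quadrant, so LIME should single out that region
img = np.zeros((8, 8))
img[:4, :4] = 1.0
predict = lambda x: x[:4, :4].mean()
importance = lime_image_sketch(img, predict)  # high in top-left cells
```

Real LIME differs in the details (quickshift superpixels instead of a grid, locality-weighted sampling, and a sparse linear model), and SHAP assigns Shapley values over similar feature coalitions; both ultimately produce a heatmap over the CXR highlighting the regions that drive the SqueezeNet prediction.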