A comprehensive evaluation of explainable Artificial Intelligence techniques in stroke diagnosis: A systematic review

Cogent Engineering (Impact Factor 2.1, Q2, Engineering, Multidisciplinary)
Publication date: 2023-10-25
DOI: 10.1080/23311916.2023.2273088
Daraje Kaba Gurmessa, Worku Jimma
{"title":"A comprehensive evaluation of explainable Artificial Intelligence techniques in stroke diagnosis: A systematic review","authors":"Daraje Kaba Gurmessa, Worku Jimma","doi":"10.1080/23311916.2023.2273088","DOIUrl":null,"url":null,"abstract":"Stroke presents a formidable global health threat, carrying significant risks and challenges. Timely intervention and improved outcomes hinge on the integration of Explainable Artificial Intelligence (XAI) into medical decision-making. XAI, an evolving field, enhances the transparency of conventional Artificial Intelligence (AI) models. This systematic review addresses key research questions: How is XAI applied in the context of stroke diagnosis? To what extent can XAI elucidate the outputs of machine learning models? Which systematic evaluation methodologies are employed, and what categories of explainable approaches (Model Explanation, Outcome Explanation, Model Inspection) are prevalent We conducted this review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our search encompassed five databases: Google Scholar, PubMed, IEEE Xplore, ScienceDirect, and Scopus, spanning studies published between January 1988 and June 2023. Various combinations of search terms, including “stroke,” “explainable,” “interpretable,” “machine learning,” “artificial intelligence,” and “XAI,” were employed. This study identified 17 primary studies employing explainable machine learning techniques for stroke diagnosis. Among these studies, 94.1% incorporated XAI for model visualization, and 47.06% employed model inspection. It is noteworthy that none of the studies employed evaluation metrics such as D, R, F, or S to assess the performance of their XAI systems. Furthermore, none evaluated human confidence in utilizing XAI for stroke diagnosis. Explainable Artificial Intelligence serves as a vital tool in enhancing trust among both patients and healthcare providers in the diagnostic process. The effective implementation of systematic evaluation metrics is crucial for harnessing the potential of XAI in improving stroke diagnosis.","PeriodicalId":10464,"journal":{"name":"Cogent Engineering","volume":"20 3","pages":"0"},"PeriodicalIF":2.1000,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cogent Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/23311916.2023.2273088","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Stroke presents a formidable global health threat, carrying significant risks and challenges. Timely intervention and improved outcomes hinge on the integration of Explainable Artificial Intelligence (XAI) into medical decision-making. XAI, an evolving field, enhances the transparency of conventional Artificial Intelligence (AI) models. This systematic review addresses key research questions: How is XAI applied in the context of stroke diagnosis? To what extent can XAI elucidate the outputs of machine learning models? Which systematic evaluation methodologies are employed, and what categories of explainable approaches (Model Explanation, Outcome Explanation, Model Inspection) are prevalent? We conducted this review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our search encompassed five databases: Google Scholar, PubMed, IEEE Xplore, ScienceDirect, and Scopus, spanning studies published between January 1988 and June 2023. Various combinations of search terms, including “stroke,” “explainable,” “interpretable,” “machine learning,” “artificial intelligence,” and “XAI,” were employed. This study identified 17 primary studies employing explainable machine learning techniques for stroke diagnosis. Among these studies, 94.1% incorporated XAI for model visualization, and 47.06% employed model inspection. It is noteworthy that none of the studies employed evaluation metrics such as D, R, F, or S to assess the performance of their XAI systems. Furthermore, none evaluated human confidence in utilizing XAI for stroke diagnosis. Explainable Artificial Intelligence serves as a vital tool in enhancing trust among both patients and healthcare providers in the diagnostic process. The effective implementation of systematic evaluation metrics is crucial for harnessing the potential of XAI in improving stroke diagnosis.
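To make the explanation categories named in the abstract concrete, the sketch below illustrates one of them (model inspection) on a hypothetical stroke-classification task. It is not drawn from any of the 17 reviewed studies: the dataset is synthetic, the clinical feature names are invented for illustration, and scikit-learn's permutation_importance is used as a simple, global, model-inspection style explanation.

```python
# Illustrative sketch only: a model-inspection style explanation (feature
# importance) for a hypothetical stroke classifier. The data are synthetic and
# the feature names are invented; this is not the pipeline of any reviewed study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical clinical features for a tabular stroke dataset.
feature_names = ["age", "avg_glucose_level", "bmi", "hypertension",
                 "heart_disease", "smoking_status"]

# Synthetic stand-in for real patient data.
X, y = make_classification(n_samples=1000, n_features=len(feature_names),
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Model inspection: permutation importance estimates how much each feature
# contributes to held-out performance, giving a global view of the model.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

For outcome-level (per-patient) explanations, local attribution tools such as SHAP or LIME would typically be used instead of a single global importance score.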
Source journal
Cogent Engineering (Engineering, Multidisciplinary)
CiteScore: 4.00
Self-citation rate: 5.30%
Annual article output: 213
Average review time: 13 weeks
Journal description: One of the largest multidisciplinary open access engineering journals of peer-reviewed research, Cogent Engineering, part of the Taylor & Francis Group, covers all areas of engineering and technology, from chemical engineering to computer science and from mechanical to materials engineering. Cogent Engineering encourages interdisciplinary research and also accepts negative results, software articles, replication studies, and reviews.
Latest articles from this journal
- Evaluating road work site safety management: A case study of the Amman bus rapid transit project construction
- Toward optimizing scientific workflow using multi-objective optimization in a cloud environment
- Technology capability of Indonesian medium-sized shipyards for ship production using Product-oriented Work Breakdown Structure method (case study on shipbuilding of Mini LNG vessel)
- NREL Phase VI wind turbine blade tip with S809 airfoil profile winglet design and performance analysis using computational fluid dynamics
- Revisiting multi-domain empirical modelling of light-emitting diode luminaire