Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review

Cancer Innovation Pub Date: 2024-07-03 DOI: 10.1002/cai2.136
Amirehsan Ghasemi, Soheil Hashtarkhani, David L. Schwartz, Arash Shaban-Nejad
{"title":"可解释人工智能在乳腺癌检测和风险预测中的应用:系统性范围审查","authors":"Amirehsan Ghasemi,&nbsp;Soheil Hashtarkhani,&nbsp;David L. Schwartz,&nbsp;Arash Shaban-Nejad","doi":"10.1002/cai2.136","DOIUrl":null,"url":null,"abstract":"<p>With the advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, decision-making by such algorithms is not trustworthy for clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Explore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned from January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods in breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the top model-agnostic XAI technique in breast cancer research in terms of usage, explaining the model prediction results, diagnosis and classification of biomarkers, and prognosis and survival analysis. Additionally, the SHAP model primarily explained tree-based ensemble machine learning models. The most common reason is that SHAP is model agnostic, which makes it both popular and useful for explaining any model prediction. Additionally, it is relatively easy to implement effectively and completely suits performant models, such as tree-based models. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.</p>","PeriodicalId":100212,"journal":{"name":"Cancer Innovation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cai2.136","citationCount":"0","resultStr":"{\"title\":\"Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review\",\"authors\":\"Amirehsan Ghasemi,&nbsp;Soheil Hashtarkhani,&nbsp;David L. Schwartz,&nbsp;Arash Shaban-Nejad\",\"doi\":\"10.1002/cai2.136\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>With the advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, decision-making by such algorithms is not trustworthy for clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Explore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned from January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods in breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. 
The results revealed that SHapley Additive exPlanations (SHAP) is the top model-agnostic XAI technique in breast cancer research in terms of usage, explaining the model prediction results, diagnosis and classification of biomarkers, and prognosis and survival analysis. Additionally, the SHAP model primarily explained tree-based ensemble machine learning models. The most common reason is that SHAP is model agnostic, which makes it both popular and useful for explaining any model prediction. Additionally, it is relatively easy to implement effectively and completely suits performant models, such as tree-based models. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.</p>\",\"PeriodicalId\":100212,\"journal\":{\"name\":\"Cancer Innovation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cai2.136\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cancer Innovation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cai2.136\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cancer Innovation","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cai2.136","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, because many of these algorithms exhibit nonlinear and complex behavior, clinicians often cannot trust their decisions and regard them as a black-box process. The scientific community has therefore introduced explainable artificial intelligence (XAI) to remedy this problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. Using a systematic search strategy, we comprehensively searched Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations). The search spanned January 2017 to July 2023 and focused on peer-reviewed studies applying XAI methods to breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most widely used model-agnostic XAI technique in breast cancer research, applied to explaining model predictions, diagnosing and classifying biomarkers, and analyzing prognosis and survival. SHAP was most often used to explain tree-based ensemble machine learning models. The most common reason is that SHAP is model-agnostic, which makes it both popular and broadly applicable to explaining any model's predictions; in addition, it is relatively easy to implement and is well suited to high-performing models such as tree-based ensembles. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
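
To make the review's central finding concrete, the sketch below shows the pattern it reports as most common: SHAP explaining a tree-based ensemble on breast cancer data, with both a global and a per-case explanation. This is a minimal illustration, not the method of any reviewed study; the dataset (scikit-learn's bundled Wisconsin breast cancer data), the random forest model, and all hyperparameters are illustrative assumptions, and a recent version of the `shap` package is assumed.

```python
# Minimal sketch: SHAP explaining a tree-based ensemble on breast cancer
# data. Dataset, model choice, and hyperparameters are illustrative
# assumptions, not taken from any study in the review.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Wisconsin breast cancer dataset bundled with scikit-learn
# (target 0 = malignant, 1 = benign).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A tree-based ensemble: the model family the review found SHAP
# most often used to explain.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles,
# although SHAP as a framework is model-agnostic.
explainer = shap.TreeExplainer(model)
explanation = explainer(X_test)  # shape: (samples, features, classes)

# Global explanation: which features drive predictions across the
# test set, shown here for the benign class (index 1).
shap.plots.beeswarm(explanation[:, :, 1])

# Local explanation: feature contributions for a single case.
shap.plots.waterfall(explanation[0, :, 1])
```

The two plots correspond to the two uses the abstract names: the beeswarm gives a global view of which features drive predictions overall, while the waterfall breaks down one individual prediction, the per-patient transparency that black-box models lack on their own.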