Estimation and Conformity Evaluation of Multi-Class Counterfactual Explanations for Chronic Disease Prevention.

IEEE Journal of Biomedical and Health Informatics · IF 6.7 · CAS Medicine Zone 2 · JCR Q1 (Computer Science, Information Systems) · Pub Date: 2024-11-06 · DOI: 10.1109/JBHI.2024.3492730
Marta Lenatti, Alberto Carlevaro, Aziz Guergachi, Karim Keshavjee, Maurizio Mongelli, Alessia Paglialonga
{"title":"慢性病预防的多类反事实解释的估算和一致性评价。","authors":"Marta Lenatti, Alberto Carlevaro, Aziz Guergachi, Karim Keshavjee, Maurizio Mongelli, Alessia Paglialonga","doi":"10.1109/JBHI.2024.3492730","DOIUrl":null,"url":null,"abstract":"<p><p>Recent advances in Artificial Intelligence (AI) in healthcare are driving research into solutions that can provide personalized guidance. For these solutions to be used as clinical decision support tools, the results provided must be interpretable and consistent with medical knowledge. To this end, this study explores the use of explainable AI to characterize the risk of developing cardiovascular disease in patients diagnosed with chronic obstructive pulmonary disease. A dataset of 9613 records from patients diagnosed with chronic obstructive pulmonary disease was classified into three categories of cardiovascular risk (low, moderate, and high), as estimated by the Framingham Risk Score. Counterfactual explanations were generated with two different methods, MUlti Counterfactuals via Halton sampling (MUCH) and Diverse Counterfactual Explanation (DiCE). An error control mechanism is introduced in the preliminary classification phase to reduce classification errors and obtain meaningful and representative explanations. Furthermore, the concept of counterfactual conformity is introduced as a new way to validate single counterfactual explanations in terms of their conformity, based on proximity with respect to the factual observation and plausibility. The results indicate that explanations generated with MUCH are generally more plausible (lower implausibility) and more distinguishable (higher discriminative power) from the original class than those generated with DiCE, whereas DiCE shows better availability, proximity and sparsity. Furthermore, filtering the counterfactual explanations by eliminating the non-conformal ones results in an additional improvement in quality. The results of this study suggest that combining counterfactual explanations generation with conformity evaluation is worth further validation and expert assessment to enable future development of support tools that provide personalized recommendations for reducing individual risk by targeting specific subsets of biomarkers.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Estimation and Conformity Evaluation of Multi-Class Counterfactual Explanations for Chronic Disease Prevention.\",\"authors\":\"Marta Lenatti, Alberto Carlevaro, Aziz Guergachi, Karim Keshavjee, Maurizio Mongelli, Alessia Paglialonga\",\"doi\":\"10.1109/JBHI.2024.3492730\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recent advances in Artificial Intelligence (AI) in healthcare are driving research into solutions that can provide personalized guidance. For these solutions to be used as clinical decision support tools, the results provided must be interpretable and consistent with medical knowledge. To this end, this study explores the use of explainable AI to characterize the risk of developing cardiovascular disease in patients diagnosed with chronic obstructive pulmonary disease. 
A dataset of 9613 records from patients diagnosed with chronic obstructive pulmonary disease was classified into three categories of cardiovascular risk (low, moderate, and high), as estimated by the Framingham Risk Score. Counterfactual explanations were generated with two different methods, MUlti Counterfactuals via Halton sampling (MUCH) and Diverse Counterfactual Explanation (DiCE). An error control mechanism is introduced in the preliminary classification phase to reduce classification errors and obtain meaningful and representative explanations. Furthermore, the concept of counterfactual conformity is introduced as a new way to validate single counterfactual explanations in terms of their conformity, based on proximity with respect to the factual observation and plausibility. The results indicate that explanations generated with MUCH are generally more plausible (lower implausibility) and more distinguishable (higher discriminative power) from the original class than those generated with DiCE, whereas DiCE shows better availability, proximity and sparsity. Furthermore, filtering the counterfactual explanations by eliminating the non-conformal ones results in an additional improvement in quality. The results of this study suggest that combining counterfactual explanations generation with conformity evaluation is worth further validation and expert assessment to enable future development of support tools that provide personalized recommendations for reducing individual risk by targeting specific subsets of biomarkers.</p>\",\"PeriodicalId\":13073,\"journal\":{\"name\":\"IEEE Journal of Biomedical and Health Informatics\",\"volume\":\"PP \",\"pages\":\"\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Journal of Biomedical and Health Informatics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1109/JBHI.2024.3492730\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Biomedical and Health Informatics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/JBHI.2024.3492730","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Recent advances in Artificial Intelligence (AI) in healthcare are driving research into solutions that can provide personalized guidance. For these solutions to be used as clinical decision support tools, the results provided must be interpretable and consistent with medical knowledge. To this end, this study explores the use of explainable AI to characterize the risk of developing cardiovascular disease in patients diagnosed with chronic obstructive pulmonary disease. A dataset of 9613 records from patients diagnosed with chronic obstructive pulmonary disease was classified into three categories of cardiovascular risk (low, moderate, and high), as estimated by the Framingham Risk Score. Counterfactual explanations were generated with two different methods, MUlti Counterfactuals via Halton sampling (MUCH) and Diverse Counterfactual Explanation (DiCE). An error control mechanism is introduced in the preliminary classification phase to reduce classification errors and obtain meaningful and representative explanations. Furthermore, the concept of counterfactual conformity is introduced as a new way to validate single counterfactual explanations in terms of their conformity, based on proximity with respect to the factual observation and plausibility. The results indicate that explanations generated with MUCH are generally more plausible (lower implausibility) and more distinguishable (higher discriminative power) from the original class than those generated with DiCE, whereas DiCE shows better availability, proximity and sparsity. Furthermore, filtering the counterfactual explanations by eliminating the non-conformal ones results in an additional improvement in quality. The results of this study suggest that combining counterfactual explanations generation with conformity evaluation is worth further validation and expert assessment to enable future development of support tools that provide personalized recommendations for reducing individual risk by targeting specific subsets of biomarkers.
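
The abstract names two counterfactual generators, MUCH and DiCE; only DiCE corresponds to a publicly available package (dice-ml). The snippet below is a minimal, illustrative sketch of that part of the pipeline under stated assumptions: the dataset, the biomarker features (age, systolic_bp, total_cholesterol, hdl), the three-class risk label (0 = low, 1 = moderate, 2 = high), and the closing proximity/range filter are all hypothetical stand-ins, and the filter is only a toy proxy for the paper's conformity criterion, not its actual definition.

```python
# Illustrative sketch: multi-class counterfactual generation with dice-ml plus a toy
# proximity/plausibility filter. Feature names, data, and thresholds are hypothetical;
# this is NOT the paper's MUCH method or its conformity definition.
import numpy as np
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(40, 85, n),                 # hypothetical biomarkers
    "systolic_bp": rng.normal(135, 18, n),
    "total_cholesterol": rng.normal(5.2, 1.0, n),
    "hdl": rng.normal(1.3, 0.3, n),
})
# Hypothetical 3-class cardiovascular risk label (0 = low, 1 = moderate, 2 = high)
score = (0.03 * df["age"] + 0.02 * df["systolic_bp"]
         + 0.5 * df["total_cholesterol"] - 1.5 * df["hdl"])
df["risk"] = pd.cut(score, bins=3, labels=False)

features = [c for c in df.columns if c != "risk"]
clf = RandomForestClassifier(random_state=0).fit(df[features], df["risk"])

# DiCE setup: data interface, model wrapper, and explainer
data = dice_ml.Data(dataframe=df, continuous_features=features, outcome_name="risk")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Explain one high-risk record: request counterfactuals reaching the low-risk class
query = df[df["risk"] == 2][features].iloc[[0]]
result = explainer.generate_counterfactuals(query, total_CFs=5, desired_class=0)
cfs = result.cf_examples_list[0].final_cfs_df[features]

# Toy "conformity" filter: keep counterfactuals close to the factual observation
# (range-normalized L1 proximity) and inside observed feature ranges (plausibility proxy)
ranges = df[features].max() - df[features].min()
proximity = (cfs - query.values).abs().div(ranges).mean(axis=1)
in_range = cfs.apply(lambda col: col.between(df[col.name].min(), df[col.name].max())).all(axis=1)
conformal_cfs = cfs[(proximity < 0.25) & in_range]
print(conformal_cfs)
```

In the study itself, the classifier, feature set, error-control stage, and conformity thresholds come from the COPD cohort and the Framingham-based risk labels described above; the sketch only shows the general shape of generating and then filtering multi-class counterfactuals.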

Source journal
IEEE Journal of Biomedical and Health Informatics
Categories: Computer Science, Information Systems; Computer Science, Interdisciplinary Applications
CiteScore: 13.60
Self-citation rate: 6.50%
Publication volume: 1151
Journal description: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.
Latest articles in this journal
Machine Learning Identification and Classification of Mitosis and Migration of Cancer Cells in a Lab-on-CMOS Capacitance Sensing platform.
Biomedical Information Integration via Adaptive Large Language Model Construction.
BloodPatrol: Revolutionizing Blood Cancer Diagnosis - Advanced Real-Time Detection Leveraging Deep Learning & Cloud Technologies.
EEG Detection and Prediction of Freezing of Gait in Parkinson's Disease Based on Spatiotemporal Coherent Modes.
Functional Data Analysis of Hand Rotation for Open Surgical Suturing Skill Assessment.