Interpretability in HealthCare: A Comparative Study of Local Machine Learning Interpretability Techniques

Radwa El Shawi, Youssef Mohamed, M. Al-Mallah, S. Sakr
{"title":"医疗保健中的可解释性:本地机器学习可解释性技术的比较研究","authors":"Radwa El Shawi, Youssef Mohamed, M. Al-Mallah, S. Sakr","doi":"10.1109/CBMS.2019.00065","DOIUrl":null,"url":null,"abstract":"Although complex machine learning models (e.g., Random Forest, Neural Networks) are commonly outperforming the traditional simple interpretable models (e.g., Linear Regression, Decision Tree), in the healthcare domain, clinicians find it hard to understand and trust these complex models due to the lack of intuition and explanation of their predictions. With the new General Data Protection Regulation (GDPR), the importance for plausibility and verifiability of the predictions made by machine learning models has become essential. To tackle this challenge, recently, several machine learning interpretability techniques have been developed and introduced. In general, the main aim of these interpretability techniques is to shed light and provide insights into the predictions process of the machine learning models and explain how the model predictions have resulted. However, in practice, assessing the quality of the explanations provided by the various interpretability techniques is still questionable. In this paper, we present a comprehensive experimental evaluation of three recent and popular local model agnostic interpretability techniques, namely, LIME, SHAP and Anchors on different types of real-world healthcare data. Our experimental evaluation covers different aspects for its comparison including identity, stability, separability, similarity, execution time and bias detection. The results of our experiments show that LIME achieves the lowest performance for the identity metric and the highest performance for the separability metric across all datasets included in this study. On average, SHAP has the smallest average time to output explanation across all datasets included in this study. For detecting the bias, SHAP enables the participants to better detect the bias.","PeriodicalId":311634,"journal":{"name":"2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS)","volume":"32 3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"94","resultStr":"{\"title\":\"Interpretability in HealthCare A Comparative Study of Local Machine Learning Interpretability Techniques\",\"authors\":\"Radwa El Shawi, Youssef Mohamed, M. Al-Mallah, S. Sakr\",\"doi\":\"10.1109/CBMS.2019.00065\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although complex machine learning models (e.g., Random Forest, Neural Networks) are commonly outperforming the traditional simple interpretable models (e.g., Linear Regression, Decision Tree), in the healthcare domain, clinicians find it hard to understand and trust these complex models due to the lack of intuition and explanation of their predictions. With the new General Data Protection Regulation (GDPR), the importance for plausibility and verifiability of the predictions made by machine learning models has become essential. To tackle this challenge, recently, several machine learning interpretability techniques have been developed and introduced. In general, the main aim of these interpretability techniques is to shed light and provide insights into the predictions process of the machine learning models and explain how the model predictions have resulted. 
However, in practice, assessing the quality of the explanations provided by the various interpretability techniques is still questionable. In this paper, we present a comprehensive experimental evaluation of three recent and popular local model agnostic interpretability techniques, namely, LIME, SHAP and Anchors on different types of real-world healthcare data. Our experimental evaluation covers different aspects for its comparison including identity, stability, separability, similarity, execution time and bias detection. The results of our experiments show that LIME achieves the lowest performance for the identity metric and the highest performance for the separability metric across all datasets included in this study. On average, SHAP has the smallest average time to output explanation across all datasets included in this study. For detecting the bias, SHAP enables the participants to better detect the bias.\",\"PeriodicalId\":311634,\"journal\":{\"name\":\"2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS)\",\"volume\":\"32 3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"94\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CBMS.2019.00065\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CBMS.2019.00065","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 94

Abstract

Although complex machine learning models (e.g., Random Forest, Neural Networks) commonly outperform traditional, simple, interpretable models (e.g., Linear Regression, Decision Tree), clinicians in the healthcare domain find these complex models hard to understand and trust because their predictions come with little intuition or explanation. With the new General Data Protection Regulation (GDPR), the plausibility and verifiability of predictions made by machine learning models have become essential. To tackle this challenge, several machine learning interpretability techniques have recently been developed and introduced. In general, these techniques aim to shed light on the prediction process of machine learning models and to explain how individual predictions are produced. In practice, however, assessing the quality of the explanations provided by the various interpretability techniques remains an open question. In this paper, we present a comprehensive experimental evaluation of three recent and popular local, model-agnostic interpretability techniques, namely LIME, SHAP, and Anchors, on different types of real-world healthcare data. Our evaluation compares the techniques along several dimensions: identity, stability, separability, similarity, execution time, and bias detection. The results of our experiments show that LIME achieves the lowest performance on the identity metric and the highest performance on the separability metric across all datasets included in this study. On average, SHAP has the shortest time to output an explanation across all datasets included in this study. For bias detection, SHAP enables participants to detect bias more effectively.
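For readers unfamiliar with the techniques being compared, the sketch below shows how a local explanation is typically obtained from LIME and from SHAP for a single record, using a scikit-learn classifier as the black-box model. The synthetic dataset, feature names, and all parameter choices are illustrative assumptions, not taken from the paper; the model-agnostic KernelExplainer is used here because the paper compares model-agnostic techniques, though the authors' exact configuration is in the full text. Anchors, which outputs if-then rules rather than attribution vectors, is omitted from the sketch for brevity.

```python
# A minimal sketch of producing local explanations with LIME and SHAP.
# The synthetic "healthcare" data and all parameter choices below are
# illustrative assumptions; the paper's actual datasets and settings differ.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for a real clinical dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME: fits a sparse local surrogate model around one instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["healthy", "disease"]
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4
)
print("LIME:", lime_exp.as_list())

# SHAP: additive feature attributions via the model-agnostic
# KernelExplainer, summarizing the training data as background.
background = shap.sample(X_train, 50, random_state=0)
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[0])
print("SHAP:", shap_values)
```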
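The paper's exact metric definitions are given in the full text; the following is a hedged sketch of one plausible reading of two of them: identity (identical instances should receive identical explanations) and separability (non-identical instances should receive different explanations). The helper `explain` is a hypothetical stand-in for any explainer (LIME, SHAP, or Anchors) that returns a feature-attribution vector for an instance.

```python
# Hedged sketch of two evaluation criteria as read from the abstract:
# identity and separability. `explain` is a hypothetical callable
# mapping an instance to a feature-attribution vector.
import numpy as np

def identity_score(explain, X, n_repeats=2):
    """Fraction of instances whose repeated explanations are identical."""
    hits = 0
    for x in X:
        first = explain(x)
        if all(np.allclose(first, explain(x)) for _ in range(n_repeats - 1)):
            hits += 1
    return hits / len(X)

def separability_score(explain, X):
    """Fraction of distinct instance pairs with distinct explanations."""
    expl = [explain(x) for x in X]
    pairs = hits = 0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if not np.allclose(X[i], X[j]):
                pairs += 1
                if not np.allclose(expl[i], expl[j]):
                    hits += 1
    return hits / pairs if pairs else 1.0
```

Under this reading, an explainer that relies on random perturbation sampling, as LIME does, can return different explanations on repeated runs of the same instance, which is consistent with the abstract's finding that LIME scores lowest on the identity metric.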