A Study of LIME and SHAP Model Explainers for Autonomous Disease Predictions

Sannidhi Rao, S. Mehta, Shreya Kulkarni, Harshal Dalvi, Neha Katre, M. Narvekar
{"title":"A Study of LIME and SHAP Model Explainers for Autonomous Disease Predictions","authors":"Sannidhi Rao, S. Mehta, Shreya Kulkarni, Harshal Dalvi, Neha Katre, M. Narvekar","doi":"10.1109/IBSSC56953.2022.10037324","DOIUrl":null,"url":null,"abstract":"Autonomous disease prediction systems are the new normal in the health industry today. These systems are used for decision support for medical practitioners and work based on users' health details input. These systems are based on Machine Learning models for generating predictions but at the same time are not capable to explain the rationale behind their prediction as the data size grows exponentially, resulting in the lack of user trust and transparency in the decision-making abilities of these systems. Explainable AI (XAI) can help users understand and interpret such autonomous predictions helping to restore the users' trust as well as making the decision-making process of such systems transparent. The addition of the XAI layer on top of the Machine Learning models in an autonomous system can also work as a decision support system for medical practitioners to aid the diagnosis process. In this research paper, we have analyzed the two most popular model explainers Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) for their applicability in autonomous disease prediction.","PeriodicalId":426897,"journal":{"name":"2022 IEEE Bombay Section Signature Conference (IBSSC)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Bombay Section Signature Conference (IBSSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IBSSC56953.2022.10037324","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Autonomous disease prediction systems are the new normal in the health industry today. These systems provide decision support for medical practitioners and work based on the health details that users input. They rely on Machine Learning models to generate predictions but are not capable of explaining the rationale behind their predictions as data size grows exponentially, resulting in a lack of user trust and transparency in the decision-making abilities of these systems. Explainable AI (XAI) can help users understand and interpret such autonomous predictions, helping to restore users' trust and making the decision-making process of such systems transparent. Adding an XAI layer on top of the Machine Learning models in an autonomous system can also serve as a decision support system for medical practitioners to aid the diagnosis process. In this research paper, we analyze the two most popular model explainers, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for their applicability in autonomous disease prediction.
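The abstract does not include implementation details, so the following is only a minimal sketch of how the two explainers it studies are commonly applied on top of a trained disease classifier. The breast-cancer dataset, the RandomForestClassifier, and all parameter values below are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (assumed setup, not from the paper): explain one prediction
# of a tabular disease classifier with both LIME and SHAP.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an ordinary "black-box" classifier on a medical tabular dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# LIME: fit a local surrogate model around a single patient record.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=data.target_names, mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top features driving this one prediction

# SHAP: Shapley-value attributions for the same record.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])
print(shap_values)  # per-feature contributions toward each class
```

Both explainers operate on the trained model without modifying it: LIME perturbs the input and fits a local surrogate, while SHAP attributes the prediction to features via Shapley values. This is the sense in which the paper describes XAI as a layer added on top of the Machine Learning model.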