An Explainable AI powered Early Warning System to address Patient Readmission Risk

Tanusree De, Ahmeduvesh Mevawala, Ramyasri Nemani
{"title":"An Explainable AI powered Early Warning System to address Patient Readmission Risk","authors":"Tanusree De, Ahmeduvesh Mevawala, Ramyasri Nemani","doi":"10.1109/nbec53282.2021.9618766","DOIUrl":null,"url":null,"abstract":"Hospital readmission is undesirable for all the involved parties, the patient, the hospital and the insurer. Readmission put patients at-risk for hospital acquired infections, medical errors and unfavorable outcomes. For hospitals, it leads to a gradual increase in operating expenses. For payers, readmission means additional cost. So, predicting the possibility of patient readmission is very critical and highly relevant for all the parties involved. There are powerful machine learning algorithms, like Random Forest, XGBoost, Neural Net that can be used to develop the predictive model to predict the probability of patient readmission. However, these models are all black box; they can give the prediction with high accuracy; however, they do not explain how they arrived at the prediction. Herein comes the role of Explainable AI. In this paper, we have developed a novel model-specific local explanation methodology to derive explanation at an individual patient level, considering the inner learning process of a Random-Forest model. 
The derived explanations from proposed methodology are human-interpretable irrespective of the complexities of the underlying Random-Forest model and the explanations provide guidance to the doctor for prescribing the necessary remedy to the patient to prevent him/her from readmission within thirty days of discharge.","PeriodicalId":297399,"journal":{"name":"2021 IEEE National Biomedical Engineering Conference (NBEC)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE National Biomedical Engineering Conference (NBEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/nbec53282.2021.9618766","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Hospital readmission is undesirable for all parties involved: the patient, the hospital, and the insurer. Readmission puts patients at risk of hospital-acquired infections, medical errors, and unfavorable outcomes; for hospitals it gradually increases operating expenses, and for payers it means additional cost. Predicting the likelihood of patient readmission is therefore critical and highly relevant for everyone involved. Powerful machine learning algorithms such as Random Forest, XGBoost, and neural networks can be used to build models that predict the probability of patient readmission. However, these models are black boxes: they can make predictions with high accuracy, but they do not explain how they arrived at them. This is where Explainable AI comes in. In this paper, we develop a novel model-specific local explanation methodology that derives explanations at the individual patient level by considering the inner learning process of a Random Forest model. The derived explanations are human-interpretable regardless of the complexity of the underlying Random Forest, and they guide the doctor in prescribing the necessary remedies to prevent the patient from being readmitted within thirty days of discharge.
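The idea of deriving a patient-level explanation from the inner learning process of a Random Forest can be illustrated with a minimal sketch. The paper's exact algorithm is not reproduced here; the code below is a hypothetical decomposition in the same spirit, which walks each tree's decision path and attributes the change in predicted readmission probability at every split to the feature tested there. All feature names and data are synthetic placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "num_prior_admissions", "length_of_stay",
                 "num_medications", "hba1c"]
X = rng.normal(size=(500, 5))
# Synthetic label: readmission risk driven by prior admissions and stay length.
y = ((X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=500)) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_explanation(model, x):
    """Decompose the forest's predicted readmission probability for one
    patient into per-feature contributions by walking each tree's
    decision path from root to leaf."""
    contrib = np.zeros(x.shape[0])
    bias = 0.0
    for tree in model.estimators_:
        t = tree.tree_
        node = 0
        # Class-1 probability at the root acts as the tree's baseline.
        prev = t.value[node][0][1] / t.value[node][0].sum()
        bias += prev
        while t.children_left[node] != -1:  # -1 marks a leaf node
            feat = t.feature[node]
            node = (t.children_left[node] if x[feat] <= t.threshold[node]
                    else t.children_right[node])
            cur = t.value[node][0][1] / t.value[node][0].sum()
            # Credit the probability shift at this split to its feature.
            contrib[feat] += cur - prev
            prev = cur
    n = len(model.estimators_)
    return bias / n, contrib / n

patient = X[0]
bias, contrib = local_explanation(model, patient)
prob = model.predict_proba(patient.reshape(1, -1))[0, 1]
# Baseline plus contributions reconstructs the forest's prediction exactly.
assert abs(bias + contrib.sum() - prob) < 1e-6
for name, c in sorted(zip(feature_names, contrib), key=lambda p: -abs(p[1])):
    print(f"{name}: {c:+.3f}")
```

Because the per-split deltas telescope along each path, the baseline plus the feature contributions sums exactly to the forest's predicted probability, so the ranked list above can be read as "which features pushed this patient's risk up or down" in the same human-interpretable style the paper aims for.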