A Study of LIME and SHAP Model Explainers for Autonomous Disease Predictions

Sannidhi Rao, S. Mehta, Shreya Kulkarni, Harshal Dalvi, Neha Katre, M. Narvekar

2022 IEEE Bombay Section Signature Conference (IBSSC), 8 December 2022. DOI: 10.1109/IBSSC56953.2022.10037324
{"title":"自主疾病预测的LIME和SHAP模型解释器研究","authors":"Sannidhi Rao, S. Mehta, Shreya Kulkarni, Harshal Dalvi, Neha Katre, M. Narvekar","doi":"10.1109/IBSSC56953.2022.10037324","DOIUrl":null,"url":null,"abstract":"Autonomous disease prediction systems are the new normal in the health industry today. These systems are used for decision support for medical practitioners and work based on users' health details input. These systems are based on Machine Learning models for generating predictions but at the same time are not capable to explain the rationale behind their prediction as the data size grows exponentially, resulting in the lack of user trust and transparency in the decision-making abilities of these systems. Explainable AI (XAI) can help users understand and interpret such autonomous predictions helping to restore the users' trust as well as making the decision-making process of such systems transparent. The addition of the XAI layer on top of the Machine Learning models in an autonomous system can also work as a decision support system for medical practitioners to aid the diagnosis process. In this research paper, we have analyzed the two most popular model explainers Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) for their applicability in autonomous disease prediction.","PeriodicalId":426897,"journal":{"name":"2022 IEEE Bombay Section Signature Conference (IBSSC)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A Study of LIME and SHAP Model Explainers for Autonomous Disease Predictions\",\"authors\":\"Sannidhi Rao, S. Mehta, Shreya Kulkarni, Harshal Dalvi, Neha Katre, M. Narvekar\",\"doi\":\"10.1109/IBSSC56953.2022.10037324\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Autonomous disease prediction systems are the new normal in the health industry today. These systems are used for decision support for medical practitioners and work based on users' health details input. These systems are based on Machine Learning models for generating predictions but at the same time are not capable to explain the rationale behind their prediction as the data size grows exponentially, resulting in the lack of user trust and transparency in the decision-making abilities of these systems. Explainable AI (XAI) can help users understand and interpret such autonomous predictions helping to restore the users' trust as well as making the decision-making process of such systems transparent. The addition of the XAI layer on top of the Machine Learning models in an autonomous system can also work as a decision support system for medical practitioners to aid the diagnosis process. 
In this research paper, we have analyzed the two most popular model explainers Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) for their applicability in autonomous disease prediction.\",\"PeriodicalId\":426897,\"journal\":{\"name\":\"2022 IEEE Bombay Section Signature Conference (IBSSC)\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Bombay Section Signature Conference (IBSSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IBSSC56953.2022.10037324\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Bombay Section Signature Conference (IBSSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IBSSC56953.2022.10037324","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Study of LIME and SHAP Model Explainers for Autonomous Disease Predictions
Autonomous disease prediction systems are becoming the norm in the health industry today. These systems provide decision support to medical practitioners and operate on the health details that users supply. They rely on machine learning models to generate predictions but cannot explain the rationale behind those predictions, particularly as data sizes grow, which undermines user trust and the transparency of their decision-making. Explainable AI (XAI) can help users understand and interpret such autonomous predictions, restoring user trust and making the decision-making process of these systems transparent. Adding an XAI layer on top of the machine learning models in an autonomous system can also serve as decision support for medical practitioners, aiding the diagnosis process. In this paper, we analyze the two most popular model explainers, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for their applicability in autonomous disease prediction.
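
To make the two explainers concrete, below is a minimal illustrative sketch (ours, not the paper's code) of how LIME and SHAP are typically applied to a tabular disease classifier. It assumes the scikit-learn, lime, and shap packages; the breast-cancer dataset, the random-forest model, and all parameter choices are stand-ins chosen for illustration, not the paper's experimental setup.

```python
# Illustrative sketch only (not the paper's code): explaining a tabular
# disease-prediction classifier with LIME and SHAP. Assumes scikit-learn,
# lime, and shap are installed; the breast-cancer dataset stands in for
# patient health records.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME: fit a local surrogate around one patient record and list the
# features that most influenced that single prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print("LIME (one instance):", lime_exp.as_list())

# SHAP: Shapley-value attributions; TreeExplainer is the fast, exact
# variant for tree ensembles. The output shape varies across shap
# versions: a list of per-class arrays or one (samples, features,
# classes) array, so handle both and take the positive class.
sv = shap.TreeExplainer(model).shap_values(X_test)
sv_pos = sv[1] if isinstance(sv, list) else sv[:, :, 1]

# Mean |SHAP value| per feature gives a global importance ranking.
top = np.argsort(np.abs(sv_pos).mean(axis=0))[::-1][:5]
print("SHAP (global top-5):", [data.feature_names[i] for i in top])
```

The sketch also shows the practical difference between the two: LIME answers the local question of why one patient received a given prediction, while averaging absolute SHAP values across patients additionally yields a consistent global ranking of the features the model relies on.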