{"title":"使用MIMIC-III数据预测肺炎死亡率风险的可解释机器学习模型","authors":"James Sanii, Wai-Yip Chan","doi":"10.1109/ISCMI56532.2022.10068438","DOIUrl":null,"url":null,"abstract":"To gain trust, machine learning (ML) models used in high stake applications such as clinical decision support need to provide explainable behaviours and outputs. To assess whether interpretable explanations can be obtained without sacrificing prediction performance, we compare using “black box” versus “glass box” models for predicting the mortality risk of patients diagnosed with pneumonia, using data in the MIMIC-III dataset. We examine five types of black box models: random forest (RF), support vector machine (SVM), gradient boosting classifier (GBC), AdaBoost (ADA), and multilayer perceptron (MLP), and three types of glassbox models: K-nearest neighbor (KNN), explainable boosting machine (EBM), and generalized additive models (GAM). When trained using 417 features, a black box RF model performs best with AUC of 0.896. With the feature set size reduced to 19, an EBM model performs the best with AUC 0.872. Both models exceed the AUC of 0.661, the best previously reported for the task. Our results suggest that ML models with inbuilt explainability may provide prediction power as attractive as black box models.","PeriodicalId":340397,"journal":{"name":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable Machine Learning Models for Pneumonia Mortality Risk Prediction Using MIMIC-III Data\",\"authors\":\"James Sanii, Wai-Yip Chan\",\"doi\":\"10.1109/ISCMI56532.2022.10068438\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To gain trust, machine learning (ML) models used in high stake applications such as clinical decision support need to provide explainable behaviours and outputs. To assess whether interpretable explanations can be obtained without sacrificing prediction performance, we compare using “black box” versus “glass box” models for predicting the mortality risk of patients diagnosed with pneumonia, using data in the MIMIC-III dataset. We examine five types of black box models: random forest (RF), support vector machine (SVM), gradient boosting classifier (GBC), AdaBoost (ADA), and multilayer perceptron (MLP), and three types of glassbox models: K-nearest neighbor (KNN), explainable boosting machine (EBM), and generalized additive models (GAM). When trained using 417 features, a black box RF model performs best with AUC of 0.896. With the feature set size reduced to 19, an EBM model performs the best with AUC 0.872. Both models exceed the AUC of 0.661, the best previously reported for the task. 
Our results suggest that ML models with inbuilt explainability may provide prediction power as attractive as black box models.\",\"PeriodicalId\":340397,\"journal\":{\"name\":\"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)\",\"volume\":\"83 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCMI56532.2022.10068438\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCMI56532.2022.10068438","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Explainable Machine Learning Models for Pneumonia Mortality Risk Prediction Using MIMIC-III Data
To gain trust, machine learning (ML) models used in high-stakes applications such as clinical decision support need to provide explainable behaviours and outputs. To assess whether interpretable explanations can be obtained without sacrificing prediction performance, we compare "black box" against "glass box" models for predicting the mortality risk of patients diagnosed with pneumonia, using data from the MIMIC-III dataset. We examine five types of black box model: random forest (RF), support vector machine (SVM), gradient boosting classifier (GBC), AdaBoost (ADA), and multilayer perceptron (MLP), and three types of glass box model: K-nearest neighbor (KNN), explainable boosting machine (EBM), and generalized additive model (GAM). When trained on 417 features, a black box RF model performs best, with an AUC of 0.896. With the feature set reduced to 19 features, an EBM model performs best, with an AUC of 0.872. Both models exceed the AUC of 0.661, the best previously reported for this task. Our results suggest that ML models with built-in explainability may provide predictive power as attractive as that of black box models.
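As an illustrative sketch of the black box versus glass box comparison the abstract describes (not the authors' actual pipeline), the snippet below fits a random forest and an Explainable Boosting Machine from the `interpret` library and reports test AUC for each. The synthetic data, class balance, and model settings are assumptions for demonstration; MIMIC-III itself requires credentialed access and cannot be reproduced here.

```python
# Hedged sketch: compare a "black box" RF against a "glass box" EBM on AUC.
# Synthetic data stands in for the 19-feature reduced MIMIC-III set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier  # pip install interpret

# Imbalanced binary outcome, loosely mimicking a mortality-risk label.
X, y = make_classification(n_samples=2000, n_features=19, n_informative=10,
                           weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
ebm = ExplainableBoostingClassifier(random_state=0).fit(X_tr, y_tr)

for name, model in [("RF (black box)", rf), ("EBM (glass box)", ebm)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")

# Unlike the RF, the fitted EBM is directly inspectable:
# ebm.explain_global() returns its additive per-feature shape functions.
```

The design point this illustrates is the one the abstract argues: because the EBM exposes sklearn-compatible fit/predict_proba methods, it can be dropped into the same evaluation loop as a black box model, so the interpretability comparison costs nothing in tooling.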