Traumatic brain injury (TBI) requires timely and reliable severity assessment to support critical clinical decision-making. This study proposes an interpretable machine learning framework for TBI severity prediction using two datasets: the public HPTBI dataset and a newly developed 103_TBI dataset comprising 504 patients. After data preprocessing and feature selection, ensemble learning models, particularly Random Forest and XGBoost, achieved accuracies exceeding 94%. To enhance transparency and clinical trust, we introduce a dual-layer interpretability strategy that integrates post-hoc explanation techniques (SHAP, LIME, PFI, PDP, and counterfactual analysis) with a knowledge-graph-based evaluation of feature interactions. The attribution methods show high agreement and consistently identify key clinical predictors such as the Glasgow Coma Scale (GCS), midline shift, and pulse rate. These insights align closely with expert judgment, supporting the clinical credibility of the model explanations. Additionally, the knowledge graph reveals multivariate relationships critical to outcome determination. By integrating predictive models with clinical interpretability techniques, the proposed framework offers reliable clinical support to assist neurotrauma triage and expert validation. This work therefore demonstrates the potential of integrating explainable AI with domain knowledge to advance TBI severity prediction.
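The pipeline the abstract describes (train an ensemble classifier, then attribute its predictions to clinical features) can be illustrated with a minimal sketch. This is not the study's code or data: the synthetic features (GCS, midline shift, pulse rate), label rule, and model settings are all illustrative assumptions, and permutation feature importance (PFI) stands in for the full set of attribution methods named above.

```python
# Hedged sketch: Random Forest on synthetic TBI-like data, explained
# with permutation feature importance (PFI). All feature names, data,
# and thresholds here are illustrative assumptions, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
gcs = rng.integers(3, 16, n)                      # Glasgow Coma Scale, 3-15
midline_shift = rng.normal(2.0, 2.0, n).clip(0)   # mm, non-negative
pulse = rng.normal(85.0, 15.0, n)                 # beats per minute

X = np.column_stack([gcs, midline_shift, pulse])
# Toy severity label: low GCS or large midline shift -> severe (1)
y = ((gcs < 9) | (midline_shift > 5.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# PFI: drop in held-out accuracy when each feature is shuffled
pfi = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["GCS", "midline_shift", "pulse"], pfi.importances_mean):
    print(f"{name}: {imp:.3f}")
print(f"accuracy: {acc:.2f}")
```

Because the toy label ignores pulse rate, PFI assigns it near-zero importance while GCS and midline shift dominate, mirroring how such attributions can be checked against clinical expectations.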
