Introduction: Predicting and interpreting crash severity is essential for developing cost-effective safety measures. Machine learning (ML) models for crash severity have attracted much attention recently due to their promising predictive performance. However, the limited interpretability of ML techniques is a common critique. Additionally, the inherent imbalance in crash datasets, mainly due to the scarcity of fatal injury (FI) crashes, presents challenges for both classifiers and interpreters. Method: Motivated by these research needs, innovative resampling techniques and ML methods are introduced and compared to model a Washington State dataset comprising traffic crashes from 2014 to 2018. Results: Compared with traditional resampling methods, the random forest model trained on datasets synthesized by deep-learning resampling techniques demonstrates significantly improved sensitivity and G-mean performance. Furthermore, the interpretable ML approach SHapley Additive exPlanations (SHAP) is employed to quantify the individual and interaction effects of risk factors based on the predicted results. Significant risk factors are identified, including airbag, crash type, posted speed limit, and grade percentage. It is observed that roadways in rural (urban) areas had positive (negative) effects on crash severity. Compared with non-FI (nFI) crashes, speed limits exerted a stronger influence on FI crashes. Drivers involved in rear-/front-end crashes under the influence of alcohol were more likely to be associated with FI crashes. Practical Applications: These findings hold significant implications for the development of precise crash modification factors by transportation departments dealing with imbalanced traffic crash data.
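To make the modeling pipeline in the abstract concrete, the sketch below pairs a traditional resampling baseline (simple random oversampling of the minority class) with a random forest classifier and computes the sensitivity and G-mean metrics mentioned above. This is an illustrative assumption-laden example, not the paper's code: the dataset is synthetic, and all parameter values are placeholders; the deep-learning resampling and SHAP steps of the paper are omitted here.

```python
# Illustrative sketch (not the paper's implementation): random forest on an
# imbalanced binary dataset with random oversampling, evaluated by
# sensitivity and G-mean. All data and parameters are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for a crash dataset: ~5% minority ("FI") class.
X, y = make_classification(n_samples=4000, n_features=10, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Traditional resampling baseline: duplicate minority rows until the
# training classes are balanced.
rng = np.random.default_rng(0)
minority = np.where(y_tr == 1)[0]
extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)              # recall on the minority (FI) class
specificity = tn / (tn + fp)              # recall on the majority (nFI) class
g_mean = (sensitivity * specificity) ** 0.5
print(f"sensitivity={sensitivity:.2f}, G-mean={g_mean:.2f}")
```

The G-mean (geometric mean of sensitivity and specificity) is a standard summary for imbalanced classification because it penalizes models that achieve high accuracy by ignoring the minority class.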