Objective
To evaluate the performance of a deep learning (DL) model for assessing radiographic knee osteoarthritis with Kellgren-Lawrence (KL) grades on an external dataset, against human readers of varying experience.
Materials and methods
Two hundred and eight knee anteroposterior conventional radiographs (CRs) were included in this retrospective study. Four readers (three radiologists and one orthopedic surgeon) assessed the KL grades, and a consensus grade was derived as the mean of their readings. The DL model was trained on all CRs from the Multicenter Osteoarthritis Study (MOST), validated on the Osteoarthritis Initiative (OAI) dataset, and then tested on our external dataset. Agreement between graders was assessed with Cohen's quadratic kappa (κ) with 95% confidence intervals. Diagnostic performance was measured using confusion matrices and receiver operating characteristic (ROC) analyses.
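The inter-rater statistic described above can be sketched in a few lines. This is a minimal illustration, not the study's code: the grade lists are invented example data, and scikit-learn's `cohen_kappa_score` with quadratic weights is assumed to match the quadratically weighted kappa used in the study.

```python
# Minimal sketch: quadratically weighted Cohen's kappa between two graders
# assigning KL grades (0-4). The grade lists are illustrative, NOT study data.
from sklearn.metrics import cohen_kappa_score

reader_a = [0, 1, 2, 3, 4, 2, 1, 0, 3, 4]  # hypothetical reader
reader_b = [0, 1, 2, 4, 4, 2, 0, 0, 3, 3]  # hypothetical second reader

# Quadratic weights penalize disagreements by the squared grade distance,
# which suits ordinal scales such as KL grading.
kappa = cohen_kappa_score(reader_a, reader_b, weights="quadratic")
print(round(kappa, 3))
```

Confidence intervals for kappa (the bracketed ranges reported in the Results) would require an additional bootstrap or analytic variance step not shown here.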
Results
The multiclass (KL grades 0–4) diagnostic performance of the DL model varied across grades: sensitivities ranged from 0.372 to 1.000, specificities from 0.691 to 0.974, positive predictive values (PPVs) from 0.227 to 0.879, negative predictive values (NPVs) from 0.622 to 1.000, and areas under the ROC curve (AUCs) from 0.786 to 0.983. The overall balanced accuracy was 0.693, AUC 0.886, and kappa 0.820. With dichotomous KL grading (KL 0–1 vs. KL 2–4), performance was superior, with an overall balanced accuracy of 0.902 and AUC of 0.967. Substantial agreement between the DL model and each reader was found: κ was 0.737 [0.685–0.790] for the radiology resident, 0.761 [0.707–0.816] for the musculoskeletal radiology fellow, 0.802 [0.761–0.843] for the senior musculoskeletal radiologist, and 0.818 [0.775–0.860] for the orthopedic surgeon.
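The dichotomization step above (collapsing KL 0–1 vs. KL 2–4, the conventional cut-off for radiographic osteoarthritis) can be sketched as follows. The labels are invented for illustration, and scikit-learn's `balanced_accuracy_score` is assumed to correspond to the balanced accuracy reported in the study.

```python
# Minimal sketch: collapse multiclass KL grades to a binary OA label
# (KL 0-1 = no OA, KL 2-4 = OA) and compute balanced accuracy.
# Labels are illustrative, NOT study data.
from sklearn.metrics import balanced_accuracy_score

true_kl = [0, 1, 2, 3, 4, 2, 1, 0, 3, 4]  # hypothetical consensus grades
pred_kl = [0, 1, 2, 4, 4, 2, 0, 1, 3, 1]  # hypothetical model predictions

true_bin = [int(g >= 2) for g in true_kl]  # 1 = radiographic OA (KL >= 2)
pred_bin = [int(g >= 2) for g in pred_kl]

# Balanced accuracy = mean of sensitivity and specificity, so it is
# robust to the class imbalance typical of OA cohorts.
bacc = balanced_accuracy_score(true_bin, pred_bin)
print(round(bacc, 3))
```

Note that off-by-one multiclass errors within the same side of the cut-off (e.g. predicting KL 4 for a true KL 3) do not count as errors after dichotomization, which is one reason the binary metrics exceed the multiclass ones.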
Conclusion
On an external dataset, our DL model graded knee osteoarthritis with diagnostic performance comparable to that of experienced human readers.