Decision curve analysis (DCA) bridges the gap between statistical accuracy and clinical usefulness, a distinction frequently overlooked in diagnostic research. Using a simulated cohort representing a real-world diagnostic scenario, this tutorial demonstrates how predictors with similar ROC-based performance can yield markedly different net benefit profiles when evaluated through DCA. Three tools were compared: a strong predictor (composite clinical score), a moderate biomarker (leukocytes), and a weak marker with a modest AUC but limited practical value (serum sodium). Whereas ROC curves portray discrimination alone, decision curves situate performance within real clinical trade-offs, making explicit when a model adds value beyond default strategies such as treating all or treating none. The tutorial provides a step-by-step framework for interpretation, clarifies frequent misconceptions (thresholds, prevalence effects, calibration), and illustrates how DCA incorporates the consequences of decisions rather than just their statistical accuracy. Rather than adding "just another metric", DCA reframes evaluation around a practical question: does using this model improve decisions across clinically reasonable thresholds?
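The net benefit computation at the heart of DCA can be sketched in a few lines. The sketch below is illustrative only and not the tutorial's own code: it applies the standard net benefit formula, NB = TP/n − (FP/n)·(p_t/(1 − p_t)), where p_t is the probability threshold, and compares a model against the treat-all default. The simulated data here is a hypothetical toy example, not the tutorial's cohort.

```python
import numpy as np


def net_benefit(y_true, y_prob, threshold):
    """Net benefit of a risk model at probability threshold p_t.

    NB = TP/n - FP/n * (p_t / (1 - p_t)); the second term weights
    false positives by the odds implied by the chosen threshold.
    """
    n = len(y_true)
    predicted_pos = y_prob >= threshold
    tp = np.sum(predicted_pos & (y_true == 1))
    fp = np.sum(predicted_pos & (y_true == 0))
    return tp / n - fp / n * (threshold / (1 - threshold))


def net_benefit_treat_all(y_true, threshold):
    """Net benefit of the 'treat everyone' default strategy."""
    prevalence = np.mean(y_true)
    return prevalence - (1 - prevalence) * (threshold / (1 - threshold))


# Hypothetical toy cohort: 30% disease prevalence, noisy predicted risks
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, size=1000)
p = np.clip(0.3 + 0.4 * (y - 0.3) + rng.normal(0, 0.15, 1000), 0.01, 0.99)

# Evaluate across a range of clinically reasonable thresholds
for pt in (0.1, 0.2, 0.3):
    print(f"p_t={pt}: model NB={net_benefit(y, p, pt):.3f}, "
          f"treat-all NB={net_benefit_treat_all(y, pt):.3f}")
```

A decision curve is simply this computation repeated over a grid of thresholds and plotted, with the treat-all and treat-none (NB = 0) lines as the default strategies the model must beat.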