Background: There is increasing interest in the public reporting of health provider benchmarking within clinical registries to identify underperforming sites (also known as outliers). Research into the optimal methods and ideal conditions for outlier detection is therefore important.

Objective: The aim of this study was to assess the accuracy of benchmarking and outlier classification methods across a range of clinical registry sizes and case volume minimums.

Method: Clinical registry datasets were parametrically simulated, varying the numbers of sites, clinicians, patients and outcome events, the case volume minimum, and the outcome prevalence. Two benchmarking models (unadjusted and risk-adjusted with logistic regression) and two outlier classification techniques (confidence intervals and control limits) were applied to each simulated dataset. The accuracy of outlier flagging was assessed using the area under the receiver operating characteristic curve (ROC AUC).

Results: Risk-adjusted benchmarking outperformed unadjusted benchmarking across the registry sizes evaluated, providing up to a 20% increase in ROC AUC. The numbers of sites and clinicians had little effect on performance, while accuracy increased with the number of patients per site and with outcome prevalence. A threshold of 100 to 150 outcome events per site was needed to reach a ROC AUC above 80%.

Conclusion: Benchmarking hospitals on low-prevalence outcomes to detect outliers may be inappropriate, especially for clinical registries with low patient volumes.

Implications for health information management practice: Clinical registries should consider their patient volumes and outcome prevalence before commencing benchmarking analyses, to determine whether acceptable accuracy can be achieved in their setting.
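The pipeline described in the Method section can be illustrated with a minimal sketch: simulate per-site binary outcome counts, benchmark sites against the pooled rate with funnel-plot-style control limits (the unadjusted model; the study's risk-adjusted logistic regression and its actual simulation parameters are not reproduced here), then score outlier flagging with a ROC AUC. All numbers below (site count, prevalence, effect size, 3-sigma limit) are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sites = 50              # assumed registry size, for illustration only
patients_per_site = 200   # assumed site case volume
base_rate = 0.10          # assumed outcome prevalence
n_outliers = 5            # sites simulated as truly underperforming

# True site status: outlier sites get double the event probability
is_outlier = np.zeros(n_sites, dtype=bool)
is_outlier[:n_outliers] = True

events = np.array([
    rng.binomial(patients_per_site, base_rate * (2.0 if o else 1.0))
    for o in is_outlier
])
rates = events / patients_per_site

# Unadjusted benchmarking with funnel-plot-style control limits:
# z-score of each site's rate against the overall pooled rate
pooled = events.sum() / (n_sites * patients_per_site)
se = np.sqrt(pooled * (1 - pooled) / patients_per_site)
z = (rates - pooled) / se
flagged = z > 3.0         # 3-sigma upper control limit (assumed threshold)

# ROC AUC of the continuous z-score against true outlier status,
# via the rank-sum (Mann-Whitney) formulation
ranks = z.argsort().argsort() + 1
auc = (ranks[is_outlier].sum() - n_outliers * (n_outliers + 1) / 2) / (
    n_outliers * (n_sites - n_outliers)
)
print(f"flagged {flagged.sum()} sites, ROC AUC = {auc:.2f}")
```

Repeating this over grids of site counts, case volumes, and prevalences, and swapping in a risk-adjusted model, is the kind of experiment the study reports; with fewer events per site, the z-scores become noisier and the AUC degrades, which is consistent with the 100-to-150-events threshold found.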
