Background and objective: Missing data and unmeasured confounding are key challenges for external comparator studies. This work evaluates bias and other performance characteristics as a function of missingness and unmeasured confounding, using two case studies and simulations.
Methods: Two case studies were constructed by combining the treatment arms of two randomised controlled trials, in multiple myeloma and metastatic hormone-sensitive prostate cancer, with an external real-world data source that exhibited substantial missingness. Overall survival was the main endpoint. The effects of missing data and unmeasured confounding were assessed in the case studies by comparing treatment effects estimated against the external comparator with those from the randomised controlled trials. Building on the two case studies, simulations broadened these settings by varying the underlying hazard ratio, the sample size, the ratio of sample sizes between the experimental arm and the external comparator, the number of missing covariates and the percentage of missingness. This allowed bias and other performance metrics to be quantified as a function of these factors.
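As a minimal sketch of such a simulation (assuming, for illustration only, an exponential data-generating model, a binary unmeasured confounder unevenly distributed between arms and a Cox model fitted with the lifelines library; the function simulate_trial and all effect sizes are hypothetical and do not reflect the authors' actual code), the following Python snippet quantifies the empirical bias in the log hazard ratio when the confounder is omitted from the analysis:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2024)

def simulate_trial(n_exp, n_ec, log_hr, conf_effect, n_reps=200):
    """Return the empirical bias of the estimated log hazard ratio when an
    unmeasured confounder Z, more prevalent in the external comparator, is
    omitted from the Cox model. All parameters are illustrative assumptions."""
    estimates = []
    for _ in range(n_reps):
        n = n_exp + n_ec
        arm = np.repeat([1, 0], [n_exp, n_ec])       # 1 = experimental arm, 0 = external comparator
        # Unmeasured confounder with higher prevalence in the external comparator
        z = rng.binomial(1, np.where(arm == 1, 0.3, 0.5))
        # Exponential survival times under a proportional-hazards model
        hazard = 0.05 * np.exp(log_hr * arm + conf_effect * z)
        t = rng.exponential(1.0 / hazard)
        censor = rng.exponential(40.0, size=n)       # independent censoring times
        df = pd.DataFrame({
            "time": np.minimum(t, censor),
            "event": (t <= censor).astype(int),
            "arm": arm,                              # Z is deliberately NOT included: unmeasured
        })
        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        estimates.append(cph.params_["arm"])
    return np.mean(estimates) - log_hr               # bias on the log hazard ratio scale

# Example setting: true HR = 0.7, 1:2 allocation, moderate confounder effect
print(simulate_trial(n_exp=150, n_ec=300, log_hr=np.log(0.7), conf_effect=0.5))

Varying n_exp, n_ec, log_hr and conf_effect in such a sketch mirrors, in simplified form, how bias can be mapped across sample sizes, allocation ratios and degrees of confounding.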
Results: For the multiple myeloma external comparator study, results were in line with the randomised controlled trial despite missingness and potential unmeasured confounding. For the metastatic hormone-sensitive prostate cancer case study, missing data reduced the analysable sample size, rendering the results inconclusive; missing data in important eligibility criteria imposed further limitations. The simulations provided a quantitative understanding of the effects of missing data and unmeasured confounding across the varied settings.
Conclusions: This exploratory study confirmed the strengths and limitations of external comparators by quantifying the impact of missing data and unmeasured confounding in case studies and simulations. In particular, missing data in key eligibility criteria limited the ability to accurately derive the external comparator target analysis population, while the simulations demonstrated the magnitude of bias to be expected across settings.