Approaches to comparing the safety and efficacy of interventions include analyzing data from randomized controlled trials (RCTs), registries and observational databases (ODBs). RCTs are regarded as the gold standard, but data from such trials are sometimes unavailable because a disease is uncommon, because the intervention is uncommon, because of structural limitations, or because randomization cannot be done for practical or (seemingly) ethical reasons. There are many examples of an unproved intervention being so widely believed to be effective that clinical trialists and potential subjects decline randomization. Often, when an RCT is finally done, the intervention is proved ineffective or even harmful. These situations are termed medical reversals and are not uncommon [1,2]. There is also the dilemma of how to proceed when seemingly similar RCTs report discordant conclusions.
Data from high-quality registries, especially ODBs, can be used when data from RCTs are unavailable, but these sources also have limitations. Biases and confounding co-variates may be unknown, difficult or impossible to identify, and/or difficult to adjust for adequately. However, ODBs sometimes include large numbers of diverse subjects and often give answers more useful to clinicians than RCTs. Side-by-side comparisons suggest that analyses from high-quality ODBs often reach conclusions similar to those from high-quality RCTs. Meta-analyses combining data from RCTs, registries and ODBs are sometimes appropriate. We suggest increased use of registries and ODBs to compare the efficacy of interventions.