Combination treatments have become increasingly important in drug development across therapeutic areas to improve treatment response, minimize the development of resistance, and/or minimize adverse events. Pre-clinical in-vitro combination experiments aim to explore the potential of such drug combinations during drug discovery by comparing the observed effect of the combination with the expected treatment effect under the assumption of no interaction (i.e., the null model). This tutorial will address important design aspects of such experiments to allow proper statistical evaluation. Additionally, it will highlight the Biochemically Intuitive Generalized Loewe methodology (BIGL R package, available on CRAN) for statistically detecting deviations from the expectation under different null models. A clear advantage of the methodology is that it quantifies effect sizes, together with confidence intervals, while controlling the directional false coverage rate. Finally, a case study will showcase the workflow for analyzing combination experiments.
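To illustrate the null-model comparison the abstract describes, the sketch below computes the expected combination effect under classical Loewe additivity: the effect E at dose pair (d1, d2) solving d1/D1(E) + d2/D2(E) = 1, where Di(E) is the monotherapy dose of drug i producing effect E. This is a minimal illustration, not the BIGL implementation; it assumes 4-parameter Hill marginal curves, and all parameter values are hypothetical.

```python
def hill(d, b, m, ec50, h):
    # 4-parameter Hill curve: baseline b, maximal effect m, potency ec50, slope h
    return b + (m - b) * d**h / (ec50**h + d**h)

def inverse_hill(e, b, m, ec50, h):
    # Monotherapy dose producing effect e (requires b < e < m)
    return ec50 * ((e - b) / (m - e)) ** (1.0 / h)

def loewe_expected(d1, d2, p1, p2, lo, hi, tol=1e-9):
    # Solve d1/D1(E) + d2/D2(E) = 1 for E by bisection on (lo, hi).
    # For increasing-effect curves, the occupancy index decreases in E.
    def index(e):
        return d1 / inverse_hill(e, *p1) + d2 / inverse_hill(e, *p2)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if index(mid) > 1.0:   # index too large: expected effect lies above mid
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical marginal parameters (baseline 0, maximal effect 100)
p1 = (0.0, 100.0, 1.0, 1.0)   # drug 1: EC50 = 1, Hill slope 1
p2 = (0.0, 100.0, 2.0, 1.0)   # drug 2: EC50 = 2, Hill slope 1
e_expected = loewe_expected(1.0, 2.0, p1, p2, lo=1e-6, hi=100.0 - 1e-6)
```

Here each monotherapy dose alone yields an effect of 50, and the Loewe-expected combination effect is 200/3 ≈ 66.7; an observed effect significantly above this expectation would suggest synergy under this null model.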
Time-to-event estimands are central to many oncology clinical trials. The estimands framework (addendum to the ICH E9 guideline) calls for precisely defining the treatment effect of interest to align with the clinical question of interest and requires predefining the handling of intercurrent events (ICEs) that occur after treatment initiation and "affect either the interpretation or the existence of the measurements associated with the clinical question of interest." We discuss a practical problem in clinical trial design and execution: in some clinical contexts it is not feasible to systematically follow patients to an event of interest. Loss to follow-up in the presence of intercurrent events can affect the meaning and interpretation of the study results. We provide recommendations for trial design, stressing the need for close alignment between the clinical question of interest and the study design, the impact on data collection, and other practical implications. When patients cannot be systematically followed, compromise may be necessary to select the best available estimand that can be feasibly estimated under the circumstances. We discuss the use of sensitivity and supplementary analyses to examine assumptions of interest.
It is unclear how sceptical priors affect adaptive trials. We assessed the influence of priors expressing a spectrum of scepticism on the performance of several Bayesian, multi-stage, adaptive clinical trial designs using binary outcomes under different clinical scenarios. Simulations were conducted using fixed stopping rules and stopping rules calibrated to keep type 1 error rates at approximately 5%. We assessed total sample sizes, event rates, event counts, probabilities of conclusiveness and of selecting the best arm, root mean squared errors (RMSEs) of the estimated treatment effect in the selected arms, and ideal design percentages (IDPs; a metric combining arm selection probabilities, power, and the consequences of selecting inferior arms), with RMSEs and IDPs estimated in conclusive trials only and after selecting the control arm in inconclusive trials. With fixed stopping rules, increasingly sceptical priors led to larger sample sizes, more events, higher IDPs in simulations ending in superiority, and lower RMSEs; they also led to lower probabilities of conclusiveness and of selecting the best arm, and lower IDPs when selecting controls in inconclusive simulations. With calibrated stopping rules, the effects of increased scepticism on sample sizes and event counts were attenuated, and increased scepticism raised the probabilities of conclusiveness and of selecting the best arm, as well as IDPs when selecting controls in inconclusive simulations, without substantially increasing sample sizes. Results from trial designs with gentle adaptation and non-informative priors resembled those from designs with more aggressive adaptation using weakly to moderately sceptical priors. In conclusion, the use of somewhat sceptical priors in adaptive trial designs with binary outcomes seems reasonable when multiple performance metrics are considered simultaneously.
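The kind of simulation the abstract describes can be sketched in a minimal two-arm, beta-binomial form: each interim analysis updates Beta posteriors for the per-arm event rates, and the trial stops once the posterior probability of superiority crosses a fixed threshold. This is an illustrative simplification, not the designs actually studied; the 30% prior anchor, batch size, 0.99 threshold, and "lower event rate is better" convention are all assumptions made for the sketch. Scepticism is expressed here by shrinking both arms toward a common event rate with a chosen number of pseudo-observations.

```python
import random

random.seed(42)

def prob_superior(events, n, a0, b0, draws=4000):
    # Posterior P(p_treatment < p_control) via Monte Carlo draws from the
    # Beta posteriors (lower event rate = better; a0, b0 = shared prior).
    wins = 0
    for _ in range(draws):
        p_ctrl = random.betavariate(a0 + events[0], b0 + n[0] - events[0])
        p_trt = random.betavariate(a0 + events[1], b0 + n[1] - events[1])
        wins += p_trt < p_ctrl
    return wins / draws

def run_trial(p_true, prior_strength, batch=100, max_n=1000, threshold=0.99):
    # Sceptical prior: Beta centred on an assumed 30% event rate, with
    # prior_strength pseudo-observations (larger = more sceptical).
    a0, b0 = 0.3 * prior_strength + 1, 0.7 * prior_strength + 1
    events, n = [0, 0], [0, 0]
    while sum(n) < max_n:
        for arm in (0, 1):   # enrol one batch per arm, then analyse
            events[arm] += sum(random.random() < p_true[arm] for _ in range(batch))
            n[arm] += batch
        p_sup = prob_superior(events, n, a0, b0)
        if p_sup > threshold or p_sup < 1 - threshold:
            return sum(n), p_sup   # conclusive (benefit or harm)
    return sum(n), p_sup           # inconclusive at maximum size

# Hypothetical scenario: control 30% vs treatment 20% event rate
size_flat, _ = run_trial((0.30, 0.20), prior_strength=0)    # near-flat prior
size_scep, _ = run_trial((0.30, 0.20), prior_strength=100)  # sceptical prior
```

With fixed stopping rules, repeating such runs over many replications would show the pattern reported above: stronger shrinkage toward no difference delays threshold crossing and thus increases sample sizes on average.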