A method for ensuring a consistent dose-response relationship between an entire population and one region in multiregional dose-response studies using MCP-Mod
Shuhei Kaneko
Pub Date: 2023-10-31 | DOI: 10.1080/19466315.2023.2277175
Using Randomization Tests to Address Disruptions in Clinical Trials: A Report from the NISS Ingram Olkin Forum Series on Unplanned Clinical Trial Disruptions
Diane Uschner, Oleksandr Sverdlov, Kerstine Carter, Jonathan Chipman, Olga Kuznetsova, Jone Renteria, Adam Lane, Chris Barker, Nancy Geller, Michael Proschan, Martin Posch, Sergey Tarima, Frank Bretz, William F. Rosenberger
Pub Date: 2023-10-18 | DOI: 10.1080/19466315.2023.2257894

Abstract: Recent examples of unplanned external events include the global COVID-19 pandemic, the war in Ukraine, and, most recently, Hurricane Ian in Puerto Rico. Disruptions due to unplanned external events can lead to violations of the assumptions made in clinical trials. In certain situations, randomization tests can provide non-parametric inference that is robust to such violations. The ICH E9 (R1) Addendum on estimands and sensitivity analyses provides a guideline for aligning trial objectives with strategies to address disruptions. In this paper, we embed randomization tests within the estimand framework to allow for inference following disruptions in clinical trials in a way that reflects recent literature. A stylized clinical trial is presented to illustrate the method, and a simulation study highlights situations in which a randomization test conducted under the intention-to-treat principle can provide unbiased results.

Disclaimer: As a service to authors and researchers we are providing this version of an accepted manuscript (AM). Copyediting, typesetting, and review of the resulting proofs will be undertaken on this manuscript before final publication of the Version of Record (VoR). During production and pre-press, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal relate to these versions also.

Funding: The author(s) reported there is no funding associated with the work featured in this article.
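The core machinery the paper builds on, a randomization (permutation) test, can be sketched in a few lines: re-randomize the treatment labels and locate the observed test statistic in the re-randomization distribution. This is a minimal illustrative sketch only; the complete re-shuffling, the difference-in-means statistic, and the add-one p-value correction are our simplifying choices, and the paper's estimand-aligned versions are more involved.

```python
import random

def mean_diff(y, a):
    """Difference in mean outcome, treatment (a == 1) minus control (a == 0)."""
    trt = [yi for yi, ai in zip(y, a) if ai == 1]
    ctl = [yi for yi, ai in zip(y, a) if ai == 0]
    return sum(trt) / len(trt) - sum(ctl) / len(ctl)

def randomization_test(y, a, n_perm=2000, seed=0):
    """Two-sided randomization test: re-randomize treatment labels and count
    how often the re-randomized statistic is at least as extreme as observed.
    The add-one correction keeps the Monte Carlo p-value strictly positive."""
    rng = random.Random(seed)
    observed = abs(mean_diff(y, a))
    labels = list(a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if abs(mean_diff(y, labels)) >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)
```

For a trial randomized by permuted blocks or minimization, the reference set should re-run the actual allocation procedure rather than shuffle labels freely; free shuffling is used here only to keep the sketch short.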
A randomization-based theory for preliminary testing of covariate balance in controlled trials
Anqi Zhao, Peng Ding
Pub Date: 2023-10-13 | DOI: 10.1080/19466315.2023.2267774

Abstract: Randomized trials balance all covariates on average and are the gold standard for estimating treatment effects. Chance imbalances nevertheless exist in realized treatment allocations and raise an important question: what should we do if the treatment groups differ with respect to some important baseline characteristics? A common strategy is to conduct a preliminary test of the balance of baseline covariates after randomization, and to invoke covariate adjustment for subsequent inference if and only if the realized allocation fails some prespecified criterion. Although this practice is intuitive and popular among practitioners, the existing literature has so far evaluated its properties only under strong parametric model assumptions in theory and simulation, yielding results of limited generality. To fill this gap, we examine two strategies for conducting preliminary test-based covariate adjustment by regression, and evaluate the validity and efficiency of the resulting inferences from the randomization-based perspective. The main result is twofold. First, the preliminary-test estimator based on the analysis of covariance can be even less efficient than the unadjusted difference in means, and risks anticonservative confidence intervals based on the normal approximation even with the robust standard error. Second, the preliminary-test estimator based on the fully interacted specification is less efficient than its counterpart under the always-adjust strategy, and yields overconservative confidence intervals based on the normal approximation. In addition, although the Fisher randomization test is still finite-sample exact for testing the sharp null hypothesis of no treatment effect on any individual, it is no longer valid for testing the weak null hypothesis of zero average treatment effect in large samples, even with properly studentized test statistics. These undesirable properties are due to the asymptotic non-normality of the preliminary-test estimators. Based on theory and simulation, we echo the existing literature and do not recommend the preliminary-test procedure for covariate adjustment in randomized trials.

Keywords: causal inference; design-based inference; efficiency; Fisher randomization test; regression adjustment; rerandomization
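The preliminary-test strategy the paper evaluates can be sketched as follows: run a balance test on a baseline covariate, then adjust (here via the classical pooled-slope ANCOVA estimator) only if the test flags imbalance. A single covariate and a fixed z-threshold are our simplifying assumptions; the sketch illustrates the procedure being critiqued, not a recommendation.

```python
import statistics

def ancova_adjusted_diff(y, a, x):
    """Classical ANCOVA estimate: the difference in outcome means minus the
    pooled within-group slope of y on x times the imbalance in x."""
    num = den = 0.0
    group_means = {}
    for g in (0, 1):
        xg = [xi for xi, ai in zip(x, a) if ai == g]
        yg = [yi for yi, ai in zip(y, a) if ai == g]
        mx, my = statistics.mean(xg), statistics.mean(yg)
        group_means[g] = (mx, my)
        num += sum((xi - mx) * (yi - my) for xi, yi in zip(xg, yg))
        den += sum((xi - mx) ** 2 for xi in xg)
    beta = num / den
    return (group_means[1][1] - group_means[0][1]) - beta * (group_means[1][0] - group_means[0][0])

def pretest_estimator(y, a, x, z_crit=1.96):
    """Adjust for x only when a two-sample z-test flags baseline imbalance;
    otherwise report the unadjusted difference in means."""
    x1 = [xi for xi, ai in zip(x, a) if ai == 1]
    x0 = [xi for xi, ai in zip(x, a) if ai == 0]
    se = (statistics.variance(x1) / len(x1) + statistics.variance(x0) / len(x0)) ** 0.5
    z = (statistics.mean(x1) - statistics.mean(x0)) / se
    if abs(z) > z_crit:
        return ancova_adjusted_diff(y, a, x)
    y1 = [yi for yi, ai in zip(y, a) if ai == 1]
    y0 = [yi for yi, ai in zip(y, a) if ai == 0]
    return statistics.mean(y1) - statistics.mean(y0)
```

The paper's point is precisely that the sampling distribution of this data-dependent switch between two estimators is asymptotically non-normal, which invalidates the usual normal-approximation intervals.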
Covariate-adaptive biased coin randomization for master protocols with multiple interventions and biomarker-stratified allocation
Tianhao Song, Lisa M. LaVange, Anastasia Ivanova
Pub Date: 2023-10-09 | DOI: 10.1080/19466315.2023.2268313

Abstract: In a multi-arm trial with a predefined subgroup for each intervention to target, it is often desirable to enrich assignment to an intervention by enrolling more biomarker-positive participants in that intervention. We describe how to implement a biased coin design to achieve desired allocation ratios among interventions and between the numbers of biomarker-positive and biomarker-negative participants assigned to each intervention. We illustrate the proposed method with the randomization algorithm implemented in the Precision Interventions for Severe and/or Exacerbation-prone Asthma (PrecISE) trial.

Keywords: covariate-adaptive randomization; enrichment; biomarker-positive subgroup; biased coin design
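The basic building block of such designs, Efron's biased coin for 1:1 allocation within a single stratum, is easy to sketch; the stratified, multi-intervention scheme used in PrecISE is described in the paper and not reproduced here.

```python
import random

def efron_biased_coin(n0, n1, p=2/3, rng=None):
    """Efron's biased coin for 1:1 allocation between two arms with current
    sizes n0 and n1: when the arms are unbalanced, assign the
    under-represented arm with probability p; otherwise toss fairly."""
    rng = rng or random.Random()
    if n0 == n1:
        return 0 if rng.random() < 0.5 else 1
    under = 0 if n0 < n1 else 1
    return under if rng.random() < p else 1 - under
```

The bias p trades off balance against predictability: p = 1 is deterministic alternation once imbalanced, while p = 1/2 recovers complete randomization; p = 2/3 is Efron's classic compromise.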
Non-concurrent controls in platform trials: can we borrow their concurrent observation data?
Ziren Jiang, Cindy Lu, Jialing Liu, Satrajit Roychoudhury, Daniel Meyer, Bo Huang, Haitao Chu
Pub Date: 2023-10-05 | DOI: 10.1080/19466315.2023.2267502

Abstract: Adaptive platform trials (APTs) offer an innovative approach to studying multiple therapeutic interventions more efficiently through flexible features such as adding and dropping interventions as evidence emerges, creating a seamless process that avoids enrollment disruption. The benefits and practical challenges of implementing APTs have been widely discussed in the literature; however, less consideration has been given to how to use non-concurrent control (NCC) data (i.e., the data generated by patients recruited to the control arm before a new treatment is added) when the outcome of interest is a time-to-event endpoint. Including the NCC data can increase the power of the trial. However, because the standard of care changes over time, completely borrowing the NCC survival data may bias the estimation. In this paper, we propose an alternative approach that borrows the concurrent observation part of the NCC data by left truncation, using a simple decision-making flowchart, which can reduce the bias due to the change in standard of care under certain assumptions. The restricted mean survival time (RMST), estimated by the Kaplan-Meier method, is then used to compare the treatment versus the pooled control group. We present two simulation studies to illustrate the performance of the decision-making flowchart method under different scenarios. We encourage researchers and drug developers to apply and validate this simple approach in practice.

Keywords: platform trial; non-concurrent control; restricted mean survival time; Kaplan-Meier method; master protocol
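The comparison metric the authors use, RMST estimated from a Kaplan-Meier curve, is the area under the survival curve up to a horizon tau and can be computed with a short routine. This sketch handles only the simplest case (right censoring, no careful handling of event/censoring ties, no left truncation); the left-truncation adjustment for borrowing concurrent-observation NCC data is the paper's contribution and is not reproduced here.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.  times: follow-up times; events:
    1 = event, 0 = censored.  Returns the step points (t, S(t)) at event
    times.  Ties between events and censorings are not handled carefully."""
    at_risk = len(times)
    surv, steps = 1.0, []
    for t, d in sorted(zip(times, events)):
        if d == 1:
            surv *= 1.0 - 1.0 / at_risk
            steps.append((t, surv))
        at_risk -= 1
    return steps

def rmst(steps, tau):
    """Restricted mean survival time up to tau: the area under the
    Kaplan-Meier step function on [0, tau]."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in steps:
        if t > tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    return area + prev_s * (tau - prev_t)
```

With no censoring, the RMST at a tau beyond the last event reduces to the sample mean of the event times, which makes the routine easy to sanity-check.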
Generalized Likelihood Ratios for Designing Dose Optimization Studies of Targeted Therapies
Zhiwei Zhang, Yan Li
Pub Date: 2023-10-05 | DOI: 10.1080/19466315.2023.2267494

Abstract: Dose optimization studies of new therapeutic agents aim to identify one or more promising doses for further evaluation in subsequent studies. Traditionally, dose optimization has focused on finding the maximum tolerated dose (MTD), assuming that drug activity and efficacy generally increase with increasing dose. For modern targeted agents, the dose-activity relationship is often non-monotone, such that activity starts to plateau or even decline before reaching the MTD. Finding the optimal biological dose (OBD) for a targeted agent requires considering both toxicity and activity in dose optimization. This article proposes a new design for finding the OBD that uses generalized likelihood ratios (GLRs) to measure statistical evidence on key scientific questions regarding toxicity and activity. The GLR-based design requires no parametric modeling assumptions, assuming only that the dose-toxicity relationship is monotone and that the dose-activity relationship follows a two-sided isotonic regression model. Compared with existing designs that operate under similar assumptions, the GLR-based design is more general and flexible, and performs competitively in simulation experiments where drug activity starts to plateau or decline before reaching the MTD.

Keywords: dose finding; dose transition rule; isotonic regression; law of likelihood; monotonicity; optimal biological dose
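The isotonic-regression machinery such designs rest on is usually solved with the pool-adjacent-violators algorithm (PAVA): scan the dose-level estimates and merge adjacent doses whenever monotonicity is violated, replacing them with their weighted mean. This sketch shows only that standard solver; the GLR evidence measures and dose-transition rules themselves are in the paper.

```python
def pava(y, w=None):
    """Pool-adjacent-violators algorithm: weighted least-squares fit of a
    non-decreasing sequence to y (e.g., toxicity rates across doses).
    Returns one fitted value per input."""
    w = list(w) if w else [1.0] * len(y)
    blocks = []  # each block: [pooled mean, total weight, number of inputs]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit
```

A non-increasing fit (the other side of a two-sided, unimodal dose-activity shape) can be obtained by running the same routine on the reversed or negated sequence.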
Selected Articles from the Nonclinical Biostatistics Conference 2021
John Kolassa, Eve Pickering
Pub Date: 2023-10-02 | DOI: 10.1080/19466315.2023.2260231

We are pleased to present a special section of Statistics in Biopharmaceutical Research, consisting of three papers developed from material presented at the Nonclinical Biostatistics Conference of 2021 (NCB21). We are delighted to call your attention to this exciting work; our summary here expands that of Kolassa and Pickering (2022).
Randomization-Based Inference for Clinical Trials with Missing Outcome Data
Nicole Heussen, Ralf-Dieter Hilgers, William F. Rosenberger, Xiao Tan, Diane Uschner
Pub Date: 2023-09-27 | DOI: 10.1080/19466315.2023.2250119

Abstract: Randomization-based inference is a natural way to analyze data from a clinical trial, but the presence of missing outcome data is problematic: if the incomplete observations are removed, the randomization distribution is destroyed and randomization tests lose their validity. In this paper we describe two approaches to imputing values for missing data that preserve the randomization distribution. We then compare these methods to the population-based and parametric imputation approaches in standard use, comparing error rates under both homogeneous and heterogeneous population models. We also describe randomization-based analogs of standard missing-data mechanisms and a randomization-based procedure to determine whether data are missing completely at random. We conclude that randomization-based methods are a reasonable approach to missing data and perform comparably to population-based methods.

Keywords: conditional reference set; missing completely at random; missing at random; randomization test
Pub Date : 2023-09-25DOI: 10.1080/19466315.2023.2261672
Victoria P. Johnson, Michael Gekhtman, Olga M. Kuznetsova
AbstractRandomization procedures that enforce balance in prognostic factors, most commonly stratified randomization, are often employed in clinical trials. When the number of factors or factor levels is large, dynamic allocation procedures, such as the Pocock and Simon’s covariate-adaptive randomization (minimization) are preferred. In their ground-breaking work Ye and Shao (2020) identified two classes of covariate-adaptive randomization procedures. They have demonstrated theoretically that for these classes, when the model is misspecified, the robust score test (Lin and Wei, 1989) as well as the unstratified log-rank test used for analysis of time-to-event endpoints, are valid or conservative (Ye and Shao, 2020). This fact, however, was not established for minimization other than through simulations of survival endpoints. In this paper, we point out that the results of Ye and Shao can be expanded to a more general class of randomization procedures. We show, in part theoretically, in part through simulations of the within-strata imbalances, that minimization belongs to this class. Along the way we describe the asymptotic correlation matrix of the normalized within-stratum imbalances following minimization with equal prevalence of all strata. We expand the robust tests proposed by Ye and Shao for stratified randomization to minimization and examine their performance through simulations.Keywords: minimizationType I errorrobust survival analysis testsDisclaimerAs a service to authors and researchers we are providing this version of an accepted manuscript (AM). Copyediting, typesetting, and review of the resulting proofs will be undertaken on this manuscript before final publication of the Version of Record (VoR). During production and pre-press, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal relate to these versions also. 
Acknowledgements: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors would like to thank the anonymous reviewers whose recommendations substantially improved the paper.
Funding: The author(s) reported there is no funding associated with the work featured in this article.
{"title":"Validity of tests for time-to-event endpoints in studies with the Pocock and Simon covariate-adaptive randomization","authors":"Victoria P. Johnson, Michael Gekhtman, Olga M. Kuznetsova","doi":"10.1080/19466315.2023.2261672","DOIUrl":"https://doi.org/10.1080/19466315.2023.2261672","url":null,"abstract":"","PeriodicalId":51280,"journal":{"name":"Statistics in Biopharmaceutical Research","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135770077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
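The Pocock and Simon minimization procedure referenced in the abstract above can be sketched in a few lines. This is an illustrative, hypothetical implementation, not the authors' code: the function name, the range-based imbalance measure, and the biased-coin probability `p_best` are all assumptions. For each incoming patient, every candidate arm is scored by the total within-level imbalance that would result from assigning the patient there, and the imbalance-minimizing arm is chosen with high probability.

```python
import random

def pocock_simon_assign(patient_levels, counts, n_arms=2, p_best=0.8, rng=random):
    """Assign one patient by minimization (illustrative sketch).

    patient_levels: list of (factor, level) pairs describing the patient.
    counts: dict mapping (factor, level, arm) -> current count; updated in place.
    Returns the chosen arm index.
    """
    imbalances = []
    for arm in range(n_arms):
        total = 0
        for factor, level in patient_levels:
            # hypothetical per-arm counts if this patient went to `arm`
            hypo = [counts.get((factor, level, a), 0) + (1 if a == arm else 0)
                    for a in range(n_arms)]
            total += max(hypo) - min(hypo)  # range as the imbalance measure
        imbalances.append(total)
    best = min(range(n_arms), key=lambda a: imbalances[a])
    # biased coin: favor the imbalance-minimizing arm with probability p_best
    if rng.random() < p_best:
        arm = best
    else:
        arm = rng.choice([a for a in range(n_arms) if a != best])
    for factor, level in patient_levels:
        counts[(factor, level, arm)] = counts.get((factor, level, arm), 0) + 1
    return arm
```

Setting `p_best` below 1 keeps the allocation non-deterministic, which is generally preferred to deterministic minimization.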
Pub Date : 2023-09-20DOI: 10.1080/19466315.2023.2260776
Tianyu Zhan, Haoda Fu, Jian Kang
Abstract: In modern statistics, interest has shifted from pursuing the uniformly minimum variance unbiased estimator to reducing mean squared error (MSE) or residual squared error. Shrinkage-based estimation and regression methods offer better prediction accuracy and improved interpretation. However, the characterization of such optimal statistics in terms of minimizing MSE remains open and challenging in many problems, for example, estimating the treatment effect in adaptive clinical trials with pre-planned modifications to design aspects based on accumulated data. From an alternative perspective, we propose a deep neural network-based automatic method to construct an improved estimator from existing ones. Theoretical properties are studied to provide guidance on the applicability of our estimator to seek potential improvement. Simulation studies demonstrate that the proposed method has considerable finite-sample efficiency gains compared to several common estimators. In the Adaptive COVID-19 Treatment Trial (ACTT), a motivating example, our ensemble estimator contributes to a more ethical and efficient adaptive clinical trial with fewer patients enrolled. The proposed framework can be applied to a wide range of statistical problems and can serve as a reference measure to guide statistical research.
Keywords: deep learning; efficiency; improved statistics
Disclaimer: As a service to authors and researchers we are providing this version of an accepted manuscript (AM). Copyediting, typesetting, and review of the resulting proofs will be undertaken on this manuscript before final publication of the Version of Record (VoR). During production and pre-press, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal relate to these versions also. Supplemental Materials: Supplementary Materials including Appendices, Tables and Figures referenced in this article are available online.
The R code and a help file to replicate results in the main article are available at https://github.com/tian-yu-zhan/DNN_Point_Estimation. This manuscript was supported by AbbVie Inc. AbbVie participated in the review and approval of the content. Tianyu Zhan is employed by AbbVie Inc., Haoda Fu is employed by Eli Lilly and Company, and Jian Kang is Professor in the Department of Biostatistics at the University of Michigan, Ann Arbor. Kang's research was partially supported by NIH R01 GM124061 and R01 MH105561. All authors may own AbbVie stock.
Conflict of Interest: No potential competing interest was reported by the authors.
Acknowledgements: The authors thank the editorial board and reviewers for their constructive comments.
Funding: The author(s) reported there is no funding associated with the work featured in this article.
{"title":"Deep Neural Networks Guided Ensemble Learning for Point Estimation","authors":"Tianyu Zhan, Haoda Fu, Jian Kang","doi":"10.1080/19466315.2023.2260776","DOIUrl":"https://doi.org/10.1080/19466315.2023.2260776","url":null,"abstract":"","PeriodicalId":51280,"journal":{"name":"Statistics in Biopharmaceutical Research","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136263995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
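The abstract above describes learning a combiner of existing estimators to reduce MSE. The paper's combiner is a deep neural network; as a minimal, hypothetical stand-in, the sketch below learns a single linear weight combining two base estimators (sample mean and sample median) of a normal location parameter by minimizing simulated MSE. The function name, base estimators, and the linear form are illustrative assumptions, not the authors' method.

```python
import numpy as np

def learn_ensemble_weight(n=20, reps=5000, seed=0):
    """Fit a weight w for the combination w*mean + (1-w)*median
    by least squares on simulated data, i.e. minimizing simulated MSE."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=reps)                 # random true locations
    x = theta[:, None] + rng.normal(size=(reps, n))  # samples of size n
    est1 = x.mean(axis=1)                         # base estimator 1: mean
    est2 = np.median(x, axis=1)                   # base estimator 2: median
    # closed-form least-squares weight: minimize mean((w*d - r)^2)
    d = est1 - est2
    r = theta - est2
    w = np.sum(d * r) / np.sum(d * d)
    mse_combo = np.mean((w * est1 + (1 - w) * est2 - theta) ** 2)
    mse_mean = np.mean((est1 - theta) ** 2)
    mse_median = np.mean((est2 - theta) ** 2)
    return w, mse_combo, mse_mean, mse_median
```

On the simulated training set the fitted combination cannot do worse than either base estimator, since the mean and the median correspond to w = 1 and w = 0 within the family being optimized.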