Escalation with overdose control (EWOC) is a commonly used Bayesian adaptive design that controls the risk of overdosing while estimating the maximum tolerated dose (MTD) in cancer Phase I clinical trials. In 2010, Chen and colleagues proposed a novel toxicity scoring system that fully utilizes patients' toxicity information by replacing the binary indicator of dose-limiting toxicity (DLT) with a normalized equivalent toxicity score (NETS) in the range 0 to 1. Later, in 2015, the escalation with overdose and underdose control (EWOUC) design added underdose control to EWOC to guarantee patients a minimum therapeutic effect of the drug in Phase I/II clinical trials. In this paper, the EWOUC-NETS design is developed by integrating the advantages of EWOUC and NETS in a Bayesian context. Moreover, both toxicity and efficacy are treated as continuous variables to maximize trial efficiency. Dose escalation decisions are based on the posterior distributions of the toxicity and efficacy outcomes, which are recursively updated with accumulating data. We compare the operating characteristics of EWOUC-NETS and existing methods through simulation studies under five scenarios. The results show that the EWOUC-NETS design, by treating toxicity and efficacy outcomes as continuous variables, increases accuracy in identifying the optimized utility dose (OUD) and provides better therapeutic effects.
Jieqi Tu, Zhengjia Chen. "Bayesian dose escalation with overdose and underdose control utilizing all toxicities in Phase I/II clinical trials." Biometrical Journal, DOI 10.1002/bimj.202200189, published 2023-12-04.
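The core NETS idea above, replacing the binary DLT indicator with a fractional score in [0, 1], can be illustrated with a quasi-Bernoulli Beta update plus an overdose-control rule. This is a minimal sketch, not the authors' EWOUC-NETS model: the Beta prior, target score, and feasibility bound below are illustrative assumptions.

```python
from scipy import stats

def update_quasi_beta(a, b, scores):
    """Quasi-Bernoulli update: a fractional toxicity score s in [0, 1]
    adds s to the toxicity pseudo-count and 1 - s to its complement."""
    for s in scores:
        a += s
        b += 1.0 - s
    return a, b

def may_escalate(a, b, target=0.33, feasibility=0.25):
    """Overdose control: escalate only if the posterior probability that
    the mean toxicity score exceeds the target stays below the bound."""
    p_over = 1.0 - stats.beta.cdf(target, a, b)
    return p_over < feasibility
```

Starting from a Beta(1, 1) prior, three patients with scores 0.1, 0.2, and 0.05 yield a Beta(1.35, 3.65) posterior on the mean toxicity score, on which the escalation decision is based.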
Luana Boumendil, Morgane Fontaine, Vincent Lévy, Kim Pacchiardi, Raphaël Itzykson, Lucie Biard
Drug combinations have attracted increasing interest in recent years for the treatment of complex diseases such as cancer, as they can reduce the risk of drug resistance. Moreover, in oncology, combining drugs may help tackle tumor heterogeneity. Identifying potent combinations can be an arduous task, since exploring the full dose–response matrix of candidate combinations over a large number of drugs is costly and sometimes unfeasible: the quantity of available biological material is limited and may vary across patients. Our objective was to develop a rank-based screening approach for drug combinations in the setting of limited biological resources. A hierarchical Bayesian 4-parameter log-logistic (4PLL) model was used to estimate dose–response curves of candidate combinations based on a parsimonious experimental design. We computed various activity ranking metrics, such as the area under the dose–response curve and the Bliss synergy score, and used the posterior distributions of ranks and the surface under the cumulative ranking curve to obtain a comprehensive final ranking of combinations. In simulations, the proposed method achieved good operating characteristics for identifying the most promising treatments in various scenarios with limited sample sizes and interpatient variability. We illustrate the proposed approach on real data from a combination screening experiment in acute myeloid leukemia.
"Drug combinations screening using a Bayesian ranking approach based on dose–response models." Biometrical Journal, DOI 10.1002/bimj.202200332, published 2023-11-20. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/bimj.202200332
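One of the ranking metrics mentioned above, the area under the dose–response curve, can be sketched as follows. The viability parameterization, parameter names, and example combinations are illustrative assumptions, and the hierarchical Bayesian fit is replaced here by fixed curve parameters.

```python
import numpy as np

def four_pll(dose, lower, upper, ec50, hill):
    """4-parameter log-logistic curve: viability falls from `upper`
    to `lower` as dose passes ec50; `hill` controls the steepness."""
    return lower + (upper - lower) / (1.0 + (dose / ec50) ** hill)

def auc_activity(doses, params):
    """Area under the viability curve on a log10-dose grid (trapezoid
    rule); a lower AUC means a more active combination."""
    resp = four_pll(doses, *params)
    x = np.log10(doses)
    return float(np.sum(0.5 * (resp[1:] + resp[:-1]) * np.diff(x)))

# rank hypothetical combinations by activity (most active first)
doses = np.logspace(-2, 2, 50)
combos = {"A+B": (0.1, 1.0, 0.5, 2.0), "A+C": (0.1, 1.0, 5.0, 2.0)}
ranking = sorted(combos, key=lambda k: auc_activity(doses, combos[k]))
```

The more potent combination (smaller EC50) loses viability earlier along the dose axis, so its curve encloses a smaller area and it ranks first.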
Alexandra Lavalley-Morelle, Nathan Peiffer-Smadja, Simon B. Gressens, Bérénice Souhail, Alexandre Lahens, Agathe Bounhiol, François-Xavier Lescure, France Mentré, Jimmy Mullaert
During the coronavirus disease 2019 (COVID-19) pandemic, several clinical prognostic scores were proposed and evaluated in hospitalized patients, relying on variables available at admission. However, capturing data collected during the longitudinal follow-up of patients throughout hospitalization may improve the prediction accuracy of a clinical outcome. To answer this question, 327 patients diagnosed with COVID-19 and hospitalized in an academic French hospital between January and July 2020 were included in the analysis. Up to 59 biomarkers were measured from admission to death or hospital discharge. We consider a joint model combining multiple linear or nonlinear mixed-effects models for biomarker evolution with a competing-risks model involving subdistribution hazard functions for the risks of death and discharge. The links are modeled by shared random effects, and biomarkers are selected mainly on the significance of the link between the longitudinal and survival parts. Three biomarkers are retained: blood neutrophil count, arterial pH, and C-reactive protein. The predictive performance of the model is evaluated with the time-dependent area under the curve (AUC) for different landmark and horizon times and compared with that of a baseline model using only information available at admission. The joint modeling approach improves predictions when sufficient information is available. For a landmark of 6 days and a horizon of 30 days, we obtain AUCs [95% CI] of 0.73 [0.65, 0.81] and 0.81 [0.73, 0.89] for the baseline and joint models, respectively (p = 0.04). Statistical inference is validated through a simulation study.
"Multivariate joint model under competing risks to predict death of hospitalized patients for SARS-CoV-2 infection." Biometrical Journal, DOI 10.1002/bimj.202300049, published 2023-11-01.
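A landmark/horizon time-dependent AUC of the kind reported above can be computed, in its simplest form, as the concordance between risk scores of patients who die within the horizon and those who survive past it. This sketch omits censoring and the discharge competing risk, which the paper's estimator handles.

```python
import numpy as np

def landmark_auc(risk, time, died, landmark, horizon):
    """Time-dependent AUC: among subjects still in follow-up at the
    landmark, compare risk scores of deaths within the horizon (cases)
    against subjects event-free past the horizon (controls)."""
    risk, time, died = map(np.asarray, (risk, time, died))
    at_risk = time > landmark
    case = at_risk & died & (time <= landmark + horizon)
    ctrl = at_risk & (time > landmark + horizon)
    r_case, r_ctrl = risk[case], risk[ctrl]
    # proportion of concordant case/control pairs (ties count 0.5)
    wins = (r_case[:, None] > r_ctrl[None, :]).sum()
    ties = (r_case[:, None] == r_ctrl[None, :]).sum()
    return (wins + 0.5 * ties) / (len(r_case) * len(r_ctrl))
```

With perfectly discriminating scores every case outranks every control and the AUC is 1; the baseline-versus-joint comparison in the abstract repeats this computation with the two models' predicted risks.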
Receptor occupancy in targeted tissues measures the proportion of receptors occupied by a drug at equilibrium and is sometimes used as a surrogate of drug efficacy to inform dose selection in clinical trials. We propose to incorporate data on receptor occupancy from a phase I study in healthy volunteers into a phase II proof-of-concept study in patients, with the objective of using all the available evidence to make informed decisions. A minimal physiologically based pharmacokinetic model is used to describe receptor occupancy in healthy volunteers and to predict it in the patients of a phase II proof-of-concept study, taking into account the variability of the population parameters and the specific differences between the pathological condition and healthy volunteers. Then, given an estimated relationship between receptor occupancy and the clinical endpoint, informative prior distributions are derived for the clinical endpoint in both the treatment and control arms of the phase II study. These distributions are incorporated into a Bayesian dynamic borrowing design to supplement concurrent phase II trial data. A simulation study in immuno-inflammation demonstrates that the proposed design increases the power of the study while maintaining the type I error at acceptable levels for realistic values of the clinical endpoint.
Fulvio Di Stefano, Christelle Rodrigues, Stephanie Galtier, Sandrine Guilleminot, Veronique Robert, Mauro Gasparini, Gaelle Saint-Hilary. "Incorporation of healthy volunteers data on receptor occupancy into a phase II proof-of-concept trial using a Bayesian dynamic borrowing design." Biometrical Journal, DOI 10.1002/bimj.202200305, published 2023-10-27.
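The borrowing step can be sketched with a two-component robust mixture prior under a conjugate normal model with known variance. The component means, standard deviations, and weights below are illustrative assumptions, not the receptor-occupancy-derived prior of the paper.

```python
import math

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def mixture_posterior(ybar, se, components):
    """components: list of (weight, mean, sd) normal prior components.
    Returns the updated (weight, mean, sd) components after observing a
    sample mean `ybar` with standard error `se` (conjugate update)."""
    post = []
    for w, m, s in components:
        # component weight is rescaled by its marginal likelihood of the data
        marg = normal_pdf(ybar, m, math.sqrt(s * s + se * se))
        prec = 1.0 / (s * s) + 1.0 / (se * se)
        mean = (m / (s * s) + ybar / (se * se)) / prec
        post.append((w * marg, mean, math.sqrt(1.0 / prec)))
    total = sum(w for w, _, _ in post)
    return [(w / total, m, s) for w, m, s in post]
```

The "dynamic" behavior comes from the weight update: when the trial data agree with the informative component the prior is borrowed heavily, and under prior-data conflict the weight shifts to the vague component, limiting the borrowing.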
Erick Orozco-Acosta, Andrea Riebler, Aritz Adin, Maria D. Ugarte
Short-term disease forecasting at specific discrete spatial resolutions has become a high-impact decision-support tool in health planning. However, when the number of areas is very large, obtaining predictions can be computationally intensive or even unfeasible with standard spatiotemporal models. The purpose of this paper is to provide a method for short-term prediction in high-dimensional areal data based on a newly proposed "divide-and-conquer" approach. We assess the predictive performance of this method and of classical spatiotemporal models in a validation study using cancer mortality data for the 7907 municipalities of continental Spain. The new proposal outperforms traditional models in terms of mean absolute error, root mean square error, and interval score when forecasting cancer mortality 1, 2, and 3 years ahead. Models are implemented in a fully Bayesian framework using the integrated nested Laplace approximation (INLA) technique.
"A scalable approach for short-term disease forecasting in high spatial resolution areal data." Biometrical Journal, DOI 10.1002/bimj.202300096, published 2023-10-27. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/bimj.202300096
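The divide-and-conquer idea, partitioning the areas into blocks, fitting each block separately, and stitching the predictions back together, can be sketched as follows. A per-area log-linear trend stands in for the Bayesian spatiotemporal model fitted with INLA, which this sketch does not attempt.

```python
import numpy as np

def divide_and_conquer_forecast(rates, n_blocks, horizon=1):
    """rates: (n_areas, n_years) array. Split the areas into blocks, fit
    each block independently (here: a least-squares log-linear trend per
    area), and assemble the forecasts for `horizon` years ahead."""
    n_areas, n_years = rates.shape
    t = np.arange(n_years)
    out = np.empty((n_areas, horizon))
    for block in np.array_split(np.arange(n_areas), n_blocks):
        y = np.log(rates[block] + 1e-9)              # (block, years)
        slope, intercept = np.polyfit(t, y.T, 1)     # one fit per area
        future = np.arange(n_years, n_years + horizon)
        out[block] = np.exp(intercept[:, None] + slope[:, None] * future)
    return out
```

Because each block is fitted independently, the blocks can be processed in parallel and memory use scales with the block size rather than the full number of areas, which is what makes the approach feasible for thousands of municipalities.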
P-values derived from continuously distributed test statistics are typically uniformly distributed on (0,1) under least favorable parameter configurations (LFCs) in the null hypothesis. Conservativeness of a p-value P (meaning that, under the null hypothesis, P is stochastically larger than uniform on (0,1)) can occur if the test statistic from which P is derived is discrete, or if the true parameter value under the null is not an LFC. To deal with both sources of conservativeness, we present two approaches utilizing randomized p-values. We illustrate their effectiveness for testing a composite null hypothesis under a binomial model. We also give an example of how the proposed p-values can be used to test a composite null in group testing designs. We find that the proposed randomized p-values are less conservative than nonrandomized p-values under the null hypothesis, but stochastically no smaller under the alternative. The problem of establishing the validity of randomized p-values has received attention in previous literature. We show that our proposed randomized p-values are valid under various discrete statistical models in which the distribution of the corresponding test statistic belongs to an exponential family. The behavior of the power function of tests based on the proposed randomized p-values as a function of the sample size is also investigated. Simulations and a real-data example are used to compare the different p-values considered.
Daniel Ochieng, Anh-Tuan Hoang, Thorsten Dickhaus. "Multiple testing of composite null hypotheses for discrete data using randomized p-values." Biometrical Journal, DOI 10.1002/bimj.202300077, published 2023-10-19. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/bimj.202300077
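The classical construction behind randomized p-values for a discrete statistic replaces the usual tail probability Pr(T ≥ t) with Pr(T > t) + U·Pr(T = t), which is exactly uniform at the LFC. A one-sided binomial example (H0: p ≤ p0) is shown below; it is deliberately simpler than the composite-null versions studied in the paper.

```python
import random
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def binom_sf(k, n, p):
    """P(X > k) for X ~ Binomial(n, p)."""
    return sum(binom_pmf(j, n, p) for j in range(k + 1, n + 1))

def randomized_p(k, n, p0, u=None):
    """Randomized p-value for H0: p <= p0 given X = k successes out of n.
    With U ~ Uniform(0,1), P = Pr(X > k) + U * Pr(X = k) is exactly
    Uniform(0,1) at p = p0 (the LFC), removing the conservativeness that
    the discreteness of X induces in the usual p-value Pr(X >= k)."""
    if u is None:
        u = random.random()
    return binom_sf(k, n, p0) + u * binom_pmf(k, n, p0)
```

Setting u = 1 recovers the nonrandomized p-value Pr(X ≥ k), so the randomized version is never larger than it.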
Lina L. Hernandez-Velasco, Carlos A. Abanto-Valle, Dipak K. Dey, Luis M. Castro
Human immunodeficiency virus (HIV) dynamics have been the focus of epidemiological and biostatistical research during the past decades, with the aim of understanding the progression of acquired immunodeficiency syndrome (AIDS) in the population. Although there are several approaches for modeling HIV dynamics, one of the most popular is based on Gaussian mixed-effects models because of their simplicity of implementation and interpretation. However, in some situations, Gaussian mixed-effects models cannot (a) capture the serial correlation present in longitudinal data, (b) deal properly with missing observations, or (c) accommodate the skewness and heavy tails frequently present in patients' profiles. For those cases, mixed-effects state-space models (MESSM) are a powerful tool for modeling correlated observations, including HIV dynamics, because of the flexibility with which they model both the unobserved states and the observations. Accordingly, our proposal considers an MESSM in which the observation error follows a skew-t distribution. This approach is more flexible and can accommodate data sets exhibiting skewness and heavy tails. Under the Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is implemented. To evaluate the properties of the proposed models, we carried out simulation studies, including missing data in the generated data sets. Finally, we illustrate the approach with an application to the AIDS Clinical Trials Group Study 315 (ACTG-315) data set.
"A Bayesian approach for mixed effects state-space models under skewness and heavy tails." Biometrical Journal, DOI 10.1002/bimj.202100302, published 2023-10-18.
Tim P. Morris, Ian R. White, Suzie Cro, Jonathan W. Bartlett, James R. Carpenter, Tra My Pham
For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but only rarely appropriate to use.
"Comment on Oberman & Vink: Should we fix or simulate the complete data in simulation studies evaluating missing data methods?" Biometrical Journal, DOI 10.1002/bimj.202300085, published 2023-10-12.
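The distinction at issue can be made concrete: fixing the complete data and resampling only the missingness indicators removes the sampling variability of the complete data itself from the simulation. A toy sketch (complete-case mean of MCAR Gaussian data; all names and settings are illustrative):

```python
import random
import statistics

def make_complete(n, rng):
    return [rng.gauss(0, 1) for _ in range(n)]

def apply_mcar(data, p_miss, rng):
    # keep each value with probability 1 - p_miss (complete-case analysis)
    return [x for x in data if rng.random() > p_miss]

def run_study(n_rep, n, p_miss, fix_complete, seed=1):
    """Return per-repetition estimates, either fixing the complete data
    across repetitions or redrawing it every repetition."""
    rng = random.Random(seed)
    fixed = make_complete(n, rng) if fix_complete else None
    estimates = []
    for _ in range(n_rep):
        complete = fixed if fix_complete else make_complete(n, rng)
        observed = apply_mcar(complete, p_miss, rng)
        estimates.append(statistics.fmean(observed))
    return estimates
```

With the complete data fixed, the only between-repetition variability comes from which values go missing, so the empirical variance of the estimates understates the variability a method would face in repeated sampling, which is one reason the fixed-data design is only rarely appropriate.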
An information-theoretic approach to clinical trial design has been shown to bring several advantages when tackling the problem of balancing statistical power against the expected number of successes (ENS). In particular, the built-in parameter of the weight function allows the desired trade-off between statistical power and the number of treated patients to be found in the context of small-population Phase II clinical trials. In real clinical trials, however, randomized designs are preferred. The goal of this research is to introduce randomization into a deterministic entropy-based sequential trial procedure generalized to the multiarm setting. Several methods of randomizing an entropy-based design are investigated in terms of statistical power and ENS. Specifically, four design types are considered: (a) deterministic procedures, (b) naive randomization using the inverse of the entropy criterion as weights, (c) block randomization, and (d) a randomized penalty parameter. The randomized entropy-based designs are compared to the randomized Gittins index (GI) and fixed randomization (FR). A comprehensive simulation study leads to the following conclusion on block randomization: for both entropy-based and GI-based block randomization designs, the degree of randomization induced by forward-looking procedures is insufficient to achieve adequate statistical power. We therefore propose an adjustment to the forward-looking procedure that improves power at almost no cost in terms of ENS. In addition, the properties of randomization procedures based on a randomly drawn penalty parameter are thoroughly investigated.
Ksenia Kasianova, Mark Kelbert, Pavel Mozgunov. "Response-adaptive randomization for multiarm clinical trials using context-dependent information measures." Biometrical Journal, DOI 10.1002/bimj.202200301, published 2023-10-10. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/bimj.202200301
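Randomized response-adaptive allocation of the general kind compared above can be sketched with posterior-mean weights and a tuning parameter that interpolates between fixed randomization and near-deterministic selection of the best arm. This is a generic stand-in with illustrative names; it implements neither the entropy-based criterion nor the Gittins index of the paper.

```python
import random

def allocation_probs(successes, failures, kappa=1.0):
    """Posterior-mean-based allocation weights for binary outcomes with a
    Beta(1, 1) prior per arm. kappa = 0 gives fixed equal randomization;
    large kappa concentrates allocation on the empirically best arm
    (playing the role of the criterion's built-in penalty parameter)."""
    means = [(s + 1) / (s + f + 2) for s, f in zip(successes, failures)]
    weights = [m ** kappa for m in means]
    total = sum(weights)
    return [w / total for w in weights]

def randomize(successes, failures, kappa, rng=random):
    """Draw the arm for the next patient from the allocation weights."""
    probs = allocation_probs(successes, failures, kappa)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

The tension the abstract describes is visible here: a small kappa keeps allocation close to equal (good power, fewer successes), while a large kappa funnels patients to the apparently best arm (more successes, degraded power).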