Jingqi Duan, Corinne D Engelman, Qiongshi Lu, Hyunseung Kang
In Alzheimer's disease (AD) research, many observational studies have shown that the effect of sleep quality, a modifiable risk factor, on cognitive decline is heterogeneous, with some adults experiencing faster rates of cognitive decline than others. However, these effects are likely confounded by unmeasured confounders, and their sensitivity to unmeasured confounding may itself be heterogeneous, with one subgroup's treatment effect more sensitive than another's. Unfortunately, compared to the overall treatment effect, there has been limited investigation of the sensitivity of heterogeneous treatment effects to unmeasured confounding. This paper presents and compares methods for sensitivity analysis of heterogeneous effects in observational studies based on Rosenbaum's model for sensitivity analysis. We show that, unlike the sensitivity analysis of the overall treatment effect, the sensitivity of heterogeneous treatment effects depends on the variation in effect sizes across subgroups and on the correction for multiple testing. The data analysis further supports our findings: the overall effect of sleep disturbances on cognitive decline is significant (p-value = 5.55 × 10⁻⁵), the effect is more severe among males (p-value = 2.00 × 10⁻⁴), and it is insensitive to a moderate degree of unmeasured confounding. Finally, we offer easy-to-use R software to carry out the sensitivity analyses for heterogeneous treatment effects.
{"title":"Comparison of Methods for Sensitivity Analysis of Heterogeneous Treatment Effects in Observational Studies and Application to Alzheimer's Disease and Cognitive Decline.","authors":"Jingqi Duan, Corinne D Engelman, Qiongshi Lu, Hyunseung Kang","doi":"10.1002/sim.70446","DOIUrl":"10.1002/sim.70446","url":null,"abstract":"<p><p>In Alzheimer's disease (AD) research, many observational studies have shown that the effect of sleeping quality, a modifiable risk factor, on cognitive decline is heterogeneous, where some adults experience faster rates of cognitive decline compared to others. However, these effects are likely confounded by unmeasured confounders, and the sensitivity of these effects to unmeasured confounders may be heterogeneous, where one subgroup's treatment effect is more sensitive than that of another subgroup. Unfortunately, compared to the overall treatment effect, there are limited investigations about the sensitivity of heterogeneous treatment effects to unmeasured confounding. The paper presents and compares methods for sensitivity analysis of heterogeneous effects in observational studies based on Rosenbaum's model for sensitivity analysis. We show that, unlike the sensitivity analysis of the overall treatment effect, the sensitivity of heterogeneous treatment effects depends on the variation in the effect sizes across subgroups and the correction for multiple testing. The data analysis further supports our findings where the overall effect of sleep disturbances on cognitive decline is significant ( <math> <semantics><mrow><mi>p</mi></mrow> <annotation>$$ p $$</annotation></semantics> </math> -value = <math> <semantics><mrow><mn>5</mn> <mo>.</mo> <mn>55</mn> <mo>×</mo> <mn>1</mn> <msup><mrow><mn>0</mn></mrow> <mrow><mo>-</mo> <mn>5</mn></mrow> </msup> </mrow> <annotation>$$ 5.55times 1{0}^{-5} $$</annotation></semantics> </math> ). Also, the effect is more severe among males ( <math> <semantics><mrow><mi>p</mi></mrow> <annotation>$$ p $$</annotation></semantics> </math> -value = <math> <semantics><mrow><mn>2</mn> <mo>.</mo> <mn>00</mn> <mo>×</mo> <mn>1</mn> <msup><mrow><mn>0</mn></mrow> <mrow><mo>-</mo> <mn>4</mn></mrow> </msup> </mrow> <annotation>$$ 2.00times 1{0}^{-4} $$</annotation></semantics> </math> ) and insensitive to a moderate degree of unmeasured confounding. Finally, we offer an easy-to-use R software to carry out the sensitivity analyses for heterogeneous treatment effects.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70446"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12995544/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147475652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Systematic review and meta-analysis are widely accepted approaches for evaluating treatment effectiveness. Meta-analysis generally addresses the statistical aspects of systematic reviews, such as pooling of treatment effect sizes, assessment of heterogeneity, and statistical inference. To complement treatment effectiveness, cost-effectiveness analysis is often conducted to encompass both clinical and economic perspectives. However, few statistical methods have been proposed for meta-analysis of cost-effectiveness, and none is widely used. In fact, meta-analysis is currently not encouraged for cost-effectiveness due to methodological and statistical complexities. In this paper, we propose simple meta-analytic methods for cost-effectiveness, which may serve as a starting point for future work. We illustrate the methods using two examples from systematic reviews on wound interventions and mental illness.
{"title":"Meta-Analysis of Cost-Effectiveness.","authors":"Heejung Bang, Hongwei Zhao","doi":"10.1002/sim.70352","DOIUrl":"10.1002/sim.70352","url":null,"abstract":"<p><p>Systematic review and meta-analysis are widely accepted approaches for evaluating treatment effectiveness. Meta-analysis generally addresses statistical aspects of systematic reviews, such as the pooling of treatment effect sizes, assessment of heterogeneity, and statistical inference. To complement treatment effectiveness, cost-effectiveness is often conducted to encompass both clinical and economic perspectives. However, there are few statistical methods proposed for meta-analyses of cost-effectiveness, and none is used widely. In fact, meta-analysis is currently not encouraged for cost-effectiveness due to methodological and statistical complexities. In this paper, we propose simple meta-analytic methods for cost-effectiveness, which may serve as a starting point for future work. We illustrate the methods using two examples from systematic reviews on wound interventions and mental illness.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70352"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12999371/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147481631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-proportional hazards are frequently expected in clinical trials with time-to-event endpoints (e.g., in cardiology and oncology). In this context, the hazard ratio is of questionable relevance as a measure of the treatment effect and can be misleading. Hence, alternative methods comparing restricted mean survival times are increasingly promoted. Specific challenges arise when planning clinical trials for comparing restricted mean survival times, as several nuisance parameter estimates are needed to calculate the sample size. Precise estimates might be difficult to obtain at the planning stage, leading to underpowered trials. One way of dealing with this uncertainty is to apply adaptive group sequential study designs with the option to adapt the sample size during an ongoing trial. Within this work, we consider such sample size adaptations, with a specific focus on delayed treatment effects. We compare the performance of an adaptive design with the restricted mean survival time as the primary endpoint against designs with other commonly chosen endpoints in this scenario by means of an extensive simulation study. With our proposed method, adaptive designs with the restricted mean survival time as the primary endpoint are now fully worked out. The combination test that we describe can also be useful for adaptations other than the sample size.
{"title":"Sample Size Recalculation in Adaptive Group Sequential Study Designs for Comparing Restricted Mean Survival Times.","authors":"Carolin Herrmann, Paul Blanche","doi":"10.1002/sim.70490","DOIUrl":"10.1002/sim.70490","url":null,"abstract":"<p><p>Non-proportional hazards cases are frequently expected in clinical trials with time-to-event endpoints (e.g., cardiology, oncology). The relevance of hazard ratios to quantify the treatment effect is questionable and potentially misleading in this context. Hence, alternative methods comparing restricted mean survival times are increasingly promoted. Specific challenges arise when planning clinical trials for comparing restricted mean survival times, as several nuisance parameter estimates are needed for calculating the sample size. Precise estimates might be difficult to obtain at the planning stage and might lead to underpowered trials. One way of dealing with this insecurity is to apply adaptive group sequential study designs with the option to adapt the sample size during an ongoing trial. Within this work, we consider such sample size adaptations, with a specific focus on the context of delayed treatment effects. We compare the performance of an adaptive design with the restricted mean survival time as the primary endpoint with other commonly chosen endpoints in this scenario by means of an extensive simulation study. With our proposed method, adaptive designs with the restricted mean survival time as the primary endpoint are now thoroughly explained. The combination test that we describe can also be useful for other adaptations than sample sizes.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70490"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12999550/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147481640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster-randomized trials (CRTs) are experimental designs where groups or clusters of participants, rather than the individual participants themselves, are randomized to intervention groups. Analyzing CRTs requires distinguishing between treatment effects at the cluster level and the individual level, which in turn requires a clear definition of the estimands under a causal inference framework. For analyzing survival outcomes, it is common to assess the treatment effect by comparing survival functions or restricted mean survival times (RMSTs) between treatment groups. In this article, we formally characterize cluster-level and individual-level treatment effect estimands with right-censored survival outcomes in CRTs and propose doubly robust estimators targeting these estimands. Under censoring that depends on baseline covariates, our estimators ensure consistency when either the censoring model or the outcome model is correctly specified, but not necessarily both. We explore different modeling options for the censoring and outcome models to estimate the censoring and survival distributions, and investigate a deletion-based jackknife method for variance and interval estimation. Extensive simulations demonstrate that the proposed methods perform adequately in finite samples. Finally, we illustrate our method by analyzing a completed CRT with survival endpoints.
{"title":"Estimands and Doubly Robust Estimation for Cluster-Randomized Trials With Survival Outcomes.","authors":"Xi Fang, Bingkai Wang, Liangyuan Hu, Fan Li","doi":"10.1002/sim.70457","DOIUrl":"10.1002/sim.70457","url":null,"abstract":"<p><p>Cluster-randomized trials (CRTs) are experimental designs where groups or clusters of participants, rather than the individual participants themselves, are randomized to intervention groups. Analyzing CRT requires distinguishing between treatment effects at the cluster level and the individual level, which requires a clear definition of the estimands under a causal inference framework. For analyzing survival outcomes, it is common to assess the treatment effect by comparing survival functions or restricted mean survival times (RMSTs) between treatment groups. In this article, we formally characterize cluster-level and individual-level treatment effect estimands with right-censored survival outcomes in CRTs and propose doubly robust estimators for targeting such estimands. Under censoring dependent on baseline covariates, our estimators ensure consistency when either the censoring model or the outcome model is correctly specified, but not necessarily both. We explore different modeling options for the censoring and outcome models to estimate the censoring and survival distributions, and investigate a deletion-based jackknife method for variance and interval estimation. Extensive simulations demonstrate that the proposed methods perform adequately in finite samples. Finally, we illustrate our method by analyzing a completed CRT with survival endpoints.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70457"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147318344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The robust Wald confidence interval (CI) for the Cox model is commonly used when the model may be misspecified or when weights are applied. However, it can perform poorly when there are few events in one or both treatment groups, as may occur when the event of interest is rare or when the experimental arm is highly efficacious. For instance, if we artificially remove events (assuming more events are unfavorable) from the experimental group, the resulting upper confidence limit may increase. This is clearly counter-intuitive, as a small number of events in the experimental arm represents stronger evidence of efficacy. It is well known that, when the sample size is small to moderate, likelihood CIs are better than Wald CIs in that their actual coverage probabilities more closely match nominal levels. However, a robust version of the likelihood CI for the Cox model remains an open problem. For example, in the SAS procedure PHREG, the likelihood CI provided in the output is still the regular version, even when the robust option is specified; this is undesirable, as a user may mistakenly assume that the CI is the robust version. In this article, we demonstrate that the likelihood ratio test statistic of the Cox model converges to a weighted chi-square distribution when the model is misspecified. The robust likelihood CI is then obtained by inverting the robust likelihood ratio test. The proposed CIs are evaluated through simulation studies and illustrated using real data from an HIV prevention trial. A companion R package "CoxLikelihood" is available for download on CRAN.
{"title":"Likelihood Confidence Intervals for Misspecified Cox Models.","authors":"Yongwu Shao, Xu Guo","doi":"10.1002/sim.70472","DOIUrl":"10.1002/sim.70472","url":null,"abstract":"<p><p>The robust Wald confidence interval (CI) for the Cox model is commonly used when the model may be misspecified or when weights are applied. However, it can perform poorly when there are few events in one or both treatment groups, as may occur when the event of interest is rare or when the experimental arm is highly efficacious. For instance, if we artificially remove events (assuming more events are unfavorable) from the experimental group, the resulting upper CI may increase. This is clearly counter-intuitive as a small number of events in the experimental arm represent stronger evidence for efficacy. It is well known that, when the sample size is small to moderate, likelihood CIs are better than Wald CIs in terms of actual coverage probabilities closely matching nominal levels. However, a robust version of the likelihood CI for the Cox model remains an open problem. For example, in the SAS procedure PHREG, the likelihood CI provided in the outputs is still the regular version, even when the robust option is specified. This is obviously undesirable as a user may mistakenly assume that the CI is the robust version. In this article we demonstrate that the likelihood ratio test statistic of the Cox model converges to a weighted chi-square distribution when the model is misspecified. The robust likelihood CI is then obtained by inverting the robust likelihood ratio test. The proposed CIs are evaluated through simulation studies and illustrated using real data from an HIV prevention trial. A companion R package \"CoxLikelihood\" is available for download on CRAN.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70472"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147327181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yushu Zou, Liangyuan Hu, Amanda Ricciuto, Mark Deneau, Kuan Liu
Causal inference relies on the untestable assumption of no unmeasured confounding to ensure the causal parameter of interest is identifiable. Sensitivity analysis quantifies the impact of unmeasured confounding on causal estimates. Among sensitivity analysis methods proposed in the literature, the latent confounder approach is favored for its intuitive interpretation via bias parameters that specify the relationship between the observed and unobserved variables, while the sensitivity function approach directly characterizes the net causal effect of the unmeasured confounding without explicitly introducing latent variables into the causal models. In this paper, we developed and extended these two sensitivity analysis approaches, namely the Bayesian sensitivity analysis with latent confounding variables and the Bayesian sensitivity function approach, for the estimation of time-varying treatment effects with longitudinal observational data subject to time-varying unmeasured confounding. We investigated the performance of these methods in a series of simulation studies and applied them to a multicenter pediatric disease registry to provide practical guidance on their implementation.
{"title":"Bayesian Sensitivity Analysis for Causal Estimation With Time-Varying Unmeasured Confounding.","authors":"Yushu Zou, Liangyuan Hu, Amanda Ricciuto, Mark Deneau, Kuan Liu","doi":"10.1002/sim.70481","DOIUrl":"10.1002/sim.70481","url":null,"abstract":"<p><p>Causal inference relies on the untestable assumption of no unmeasured confounding to ensure the causal parameter of interest is identifiable. Sensitivity analysis quantifies the unmeasured confounding's impact on causal estimates. Among sensitivity analysis methods proposed in the literature, the latent confounder approach is favored for its intuitive interpretation via the use of bias parameters to specify the relationship between the observed and unobserved variables, and the sensitivity function approach directly characterizes the net causal effect of the unmeasured confounding without explicitly introducing latent variables to the causal models. In this paper, we developed and extended these two sensitivity analysis approaches, namely the Bayesian sensitivity analysis with latent confounding variables and the Bayesian sensitivity function approach for the estimation of time-varying treatment effects with longitudinal observational data subjected to time-varying unmeasured confounding. We investigated the performance of these methods in a series of simulation studies and applied them to a multicenter pediatric disease registry to provide practical guidance on their implementation.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70481"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12975701/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147435661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although randomized controlled trials (RCTs) are the gold standard for evaluating the efficacy and safety of treatments, they are challenged by cost, duration, enrollment, and ethical concerns. A possible solution is to incorporate external control data as a hybrid control group, for which various statistical methods are available. However, only a few of them account for confounding bias due to unknown or unmeasured covariates between the internal and external control data. Moreover, the magnitude of this potential bias cannot be assessed with most existing methods without extensive simulations. Here, we propose a novel method for estimating the confounding effects of unmeasured covariates based on model-based regression standardization, inverse probability weighting, and augmented inverse probability weighting for continuous or binary outcomes. We also propose an estimator that dynamically borrows external data via a weighted mean, adjusting the weights according to the estimated confounding effect of unmeasured covariates. In the proposed method, the expected amount of bias can be controlled within a prespecified "bias-tolerance cap," which may facilitate a better discussion among stakeholders at the planning phase about whether an effect estimate that utilizes external control data would carry unacceptable bias. Simulations showed that the proposed method keeps the bias within the tolerance cap, regardless of the magnitude of confounding by unmeasured covariates, while greatly improving power and efficiency when confounding is absent. Finally, we illustrate an application of our proposed method to an actual RCT and external control datasets for patients with advanced pancreatic cancer.
{"title":"Dynamic Borrowing With a Bias-Tolerance Cap in Augmented Randomized Controlled Trials.","authors":"Kota Sawada, Shogo Nomura, Tomohiro Shinozaki","doi":"10.1002/sim.70473","DOIUrl":"10.1002/sim.70473","url":null,"abstract":"<p><p>Although randomized controlled trials (RCTs) are the gold standard for evaluating the efficacy and safety of treatments, they are challenged by cost, duration, enrollment, or ethical concerns. A possible solution is to incorporate external control data as a hybrid control group, for which various statistical methods are available. However, only a few of them account for confounding bias due to unknown/unmeasured covariates between the internal and external control data. Moreover, the amount of this potential bias cannot be measured using most existing methods without extensive simulations. Here, we propose a novel method for estimating the confounding effects of unmeasured covariates based on model-based regression standardization, inverse probability weighting, and augmented inverse probability weighting for continuous or binary outcomes. We also propose an estimator that dynamically borrows external data and uses a weighted mean, adjusting weights according to the estimated confounding effect of unmeasured covariates. In the proposed method, the expected amount of bias can be controlled within a prespecified \"bias-tolerance cap,\" which may facilitate a better discussion among stakeholders about whether an effect estimate has unacceptable bias by utilizing external control data in a planning phase. Simulations showed that the proposed method regulates bias within the tolerance cap, regardless of the magnitude of confounding by unmeasured covariates, while greatly improving power and efficiency when confounding is absent. Finally, we illustrate an applicational example of our proposed method to an actual RCT and the external control datasets for patients with advanced pancreatic cancer.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70473"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12982164/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147444989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tim P Morris, Alex Ocampo, Jesper Madsen, Hege Michiels, Sanne Roels
This commentary offers perspectives on delivering "rigorous causal inference on meaningful estimands" that differ from the opinions recently shared by Fleming et al. We (1) depict a more robust pathway for achieving this aim that incorporates clinical, causal, and statistical reasoning, (2) suggest a tangibility criterion to judge the practical usefulness of an intercurrent event strategy, (3) illustrate the utility of causal inference methods in providing robust estimates when the clinical objective aligns with a hypothetical strategy, and (4) advocate for careful consideration of the trade-offs between an estimand's relevance and the required assumptions.
{"title":"A Causal Perspective on \"Appropriate Implementation of ICH E9(R1) Addendum Strategies\" (Comment on Fleming et al.).","authors":"Tim P Morris, Alex Ocampo, Jesper Madsen, Hege Michiels, Sanne Roels","doi":"10.1002/sim.70455","DOIUrl":"https://doi.org/10.1002/sim.70455","url":null,"abstract":"<p><p>This commentary offers perspectives on delivering \"rigorous causal inference on meaningful estimands\" that differ from the opinions recently shared by Fleming et al. We (1) depict a more robust pathway for achieving this aim that incorporates clinical, causal and statistical reasoning, (2) suggest a tangibility criterion to judge the practical usefulness of an intercurrent event strategy, (3) illustrate the utility of causal inference methods in providing robust estimates when the clinical objective aligns with a hypothetical strategy, and (4) advocate for careful consideration of the tradeoffs between an estimand's relevance and the required assumptions.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70455"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147475657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kim May Lee, Babak Choodari-Oskooei, Michael J Grayling, Peter Jacko, Peter K Kimani, Aritra Mukherjee, Philip Pallmann, Tom Parke, David S Robertson, Ziyan Wang, Christina Yap, Thomas Jaki
The adoption of complex innovative clinical trial designs has steadily increased in recent years. These are trial designs with one or more unconventional features, often resulting in multiple stages, that aim to improve on conventional single-stage, fixed-setting designs in terms of efficiency, for example, by reducing the required sample size or the time to establish findings about an intervention. The motivation for these designs may not be difficult to follow, but their set-up and implementation are usually more challenging, and their statistical properties can be difficult to compute. Clinical trial simulation (CTS), which uses software to generate artificial data for learning, can be conducted to identify the (optimal) setting of a clinical trial, evaluate the design's statistical properties under hypothetical scenarios for sensitivity analysis, and compare different design set-ups and data analysis strategies, all of which contribute to a better understanding of the value of unconventional features before implementing the design in an actual clinical trial. Existing literature on simulation primarily focuses on the evaluation of statistical analysis methods, with less attention to the detailed specification and planning of CTS. This tutorial presents a new framework, called OCTAVE, for outlining the details of CTS, provides practical recommendations for implementation, and addresses key computational considerations. The target audience is trial statisticians involved in designing and analyzing clinical trials. This tutorial covers a range of complex innovative designs, without the expectation that readers are familiar with the mentioned examples.
{"title":"Clinical Trial Simulation: Planning With the OCTAVE Framework, Implementation and Validation Principles.","authors":"Kim May Lee, Babak Choodari-Oskooei, Michael J Grayling, Peter Jacko, Peter K Kimani, Aritra Mukherjee, Philip Pallmann, Tom Parke, David S Robertson, Ziyan Wang, Christina Yap, Thomas Jaki","doi":"10.1002/sim.70449","DOIUrl":"10.1002/sim.70449","url":null,"abstract":"<p><p>The adoption of complex innovative clinical trial designs has steadily increased in recent years. These are trial designs that have one or more unconventional features-often resulting in multiple stages-with the goal of improving on conventional single-stage, fixed-setting designs in terms of efficiency, for example, by reducing the required sample size or the time to establish findings about an intervention. The motivation for these designs may not be difficult to follow, but their set-up and implementation is usually more challenging. Statistical properties of these designs can also be difficult to compute. Clinical trial simulation (CTS), which uses software to generate artificial data for learning, can be conducted to identify the (optimal) setting of a clinical trial, evaluate the design's statistical properties under some hypothetical scenarios for sensitivity analysis, and compare different design set-ups and data analysis strategies, all of which contribute to a better understanding of the value of unconventional features before implementing the design in an actual clinical trial. Existing literature on simulation primarily focuses on the evaluation of statistical analysis methods, with less attention on the detailed specification and planning of CTS. This tutorial presents a new framework, called OCTAVE, for outlining the details of CTS, provides practical recommendations for their implementation, and addresses key computational considerations. The target audience is trial statisticians who are involved in designing and analyzing clinical trials. This tutorial covers a range of complex innovative designs, without the expectation that readers are familiar with the mentioned examples.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70449"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12989786/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147463905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate prediction of breast cancer patients' life expectancy is essential for treatment decisions. This study develops a novel model estimation and variable selection method for the partially linear additive quantile regression model when survival times are subject to right censoring. Rather than using synthetic data points or weighting schemes to tackle censoring, as most existing methods do, we use an adapted loss function to handle censoring. Moreover, we adopt B-splines to approximate the nonparametric additive components. To further improve prediction accuracy, we use the group smoothly clipped absolute deviation (SCAD) penalty to select significant variables in the nonparametric additive components. To implement the proposed method, we develop an effective block-wise majorize-minimize (MM) algorithm, and we establish the asymptotic properties of the resulting estimators. Numerical simulations show that the finite-sample performance of the proposed method outperforms alternative methods. Finally, we apply our method to the personalized treatment of female patients with malignant metastatic breast cancer, using the Surveillance, Epidemiology, and End Results (SEER) research data.
{"title":"Partially Linear Additive Quantile Regression: Theory and Applications to Breast Cancer Patients' Survival.","authors":"Xinyi Zhao, Maozai Tian","doi":"10.1002/sim.70463","DOIUrl":"10.1002/sim.70463","url":null,"abstract":"<p><p>Accurate prediction of the breast cancer patient's life expectancy is essential for treatment decisions. This study aims to develop a novel model estimation and variable selection method for the partially linear additive quantile regression model when the survival times are subject to right censoring. Rather than most of the existing methods using the formulation of synthetic data points or weighting schemes to tackle censoring, we use an adapted loss function to solve censoring. Moreover, we adopt the B-spline to approximate the nonparametric additive components. To further improve the prediction accuracy, we use the group smoothly clipped absolute deviation (SCAD) penalty to select significant variables in the nonparametric additive components. To implement the proposed method, we develop an effective block-wise majorize-minimize (MM) algorithm. Furthermore, we establish the asymptotic properties for the resultant estimators. Numerical simulations illustrate that the finite sample performance of the proposed method outperforms alternative methods. Finally, we apply our method for the personalized treatment of female malignant metastatic breast cancer patients, using the Surveillance, Epidemiology, and End Results (SEER) research data.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 6-7","pages":"e70463"},"PeriodicalIF":1.8,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147318380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}