Visualizing hypothesis tests in survival analysis under anticipated delayed effects.
Pub Date: 2024-05-06 | DOI: 10.1002/pst.2393
José L Jiménez, Isobel Barrott, Francesca Gasperoni, Dominic Magirr
What can be considered an appropriate statistical method for the primary analysis of a randomized clinical trial (RCT) with a time-to-event endpoint when we anticipate non-proportional hazards owing to a delayed effect? This question has been the subject of much recent debate. The standard approach is a log-rank test and/or a Cox proportional hazards model. Alternative methods have been explored in the statistical literature, such as weighted log-rank tests and tests based on the Restricted Mean Survival Time (RMST). While weighted log-rank tests can achieve high power compared to the standard log-rank test, some choices of weights may lead to type I error inflation under particular conditions. In addition, they are not linked to a mathematically unambiguous summary measure. Test statistics based on the RMST, on the other hand, allow one to investigate the average difference between two survival curves up to a pre-specified time point τ, a mathematically unambiguous summary measure. However, by emphasizing differences prior to τ, such test statistics may not fully capture the benefit of a new treatment in terms of long-term survival. In this article, we introduce a graphical approach for direct comparison of weighted log-rank tests and tests based on the RMST. This new perspective allows a more informed choice of the analysis method, going beyond power and type I error comparison.
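The contrast the abstract draws can be made concrete numerically. Below is a minimal, self-contained sketch (not the authors' implementation): it simulates a delayed effect beginning at t = 6, then computes a Fleming-Harrington G(0,1) weighted log-rank statistic, which up-weights late differences, and the Kaplan-Meier-based RMST difference at an illustrative τ = 18. All sample sizes, rates, and the choice of τ are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Control: exponential(mean 12). Treatment: same hazard before t=6, halved after.
t_ctrl = rng.exponential(12, n)
u = rng.exponential(12, n)
t_trt = np.where(u < 6, u, 6 + 2 * (u - 6))     # delayed effect kicks in at t=6
cens = rng.uniform(18, 30, 2 * n)
raw = np.concatenate([t_ctrl, t_trt])
time = np.minimum(raw, cens)
event = (raw <= cens).astype(int)
group = np.repeat([0, 1], n)

def weighted_logrank(time, event, group, rho=0, gamma=1):
    # Fleming-Harrington G(rho,gamma) weights: w(t) = S(t-)^rho * (1 - S(t-))^gamma,
    # with S the left-continuous pooled Kaplan-Meier estimate.
    U, V, S = 0.0, 0.0, 1.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n1, n0 = at_risk[group == 1].sum(), at_risk[group == 0].sum()
        n_tot = n1 + n0
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = S**rho * (1 - S)**gamma
        U += w * (d1 - d * n1 / n_tot)           # observed minus expected, weighted
        if n_tot > 1:                            # hypergeometric variance term
            V += w**2 * d * (n1 / n_tot) * (n0 / n_tot) * (n_tot - d) / (n_tot - 1)
        S *= 1 - d / n_tot                       # update pooled KM after time t
    return U / np.sqrt(V)

def rmst(time, event, tau):
    # Area under the Kaplan-Meier curve up to tau
    S, last_t, area = 1.0, 0.0, 0.0
    for t in np.unique(time[event == 1]):
        if t > tau:
            break
        area += S * (t - last_t)
        S *= 1 - ((time == t) & (event == 1)).sum() / (time >= t).sum()
        last_t = t
    return area + S * (tau - last_t)

z = weighted_logrank(time, event, group, rho=0, gamma=1)   # late-emphasis weights
d_rmst = (rmst(time[group == 1], event[group == 1], 18)
          - rmst(time[group == 0], event[group == 0], 18))
print(f"FH(0,1) z = {z:.2f}, RMST difference at tau=18: {d_rmst:.2f}")
```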
{"title":"Visualizing hypothesis tests in survival analysis under anticipated delayed effects.","authors":"José L Jiménez, Isobel Barrott, Francesca Gasperoni, Dominic Magirr","doi":"10.1002/pst.2393","DOIUrl":"https://doi.org/10.1002/pst.2393","url":null,"abstract":"<p><p>What can be considered an appropriate statistical method for the primary analysis of a randomized clinical trial (RCT) with a time-to-event endpoint when we anticipate non-proportional hazards owing to a delayed effect? This question has been the subject of much recent debate. The standard approach is a log-rank test and/or a Cox proportional hazards model. Alternative methods have been explored in the statistical literature, such as weighted log-rank tests and tests based on the Restricted Mean Survival Time (RMST). While weighted log-rank tests can achieve high power compared to the standard log-rank test, some choices of weights may lead to type-I error inflation under particular conditions. In addition, they are not linked to a mathematically unambiguous summary measure. Test statistics based on the RMST, on the other hand, allow one to investigate the average difference between two survival curves up to a pre-specified time point <math> <semantics><mrow><mi>τ</mi></mrow> <annotation>$$ tau $$</annotation></semantics> </math> -a mathematically unambiguous summary measure. However, by emphasizing differences prior to <math> <semantics><mrow><mi>τ</mi></mrow> <annotation>$$ tau $$</annotation></semantics> </math> , such test statistics may not fully capture the benefit of a new treatment in terms of long-term survival. In this article, we introduce a graphical approach for direct comparison of weighted log-rank tests and tests based on the RMST. This new perspective allows a more informed choice of the analysis method, going beyond power and type I error comparison.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140859909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sample size calculation for mixture model based on geometric average hazard ratio and its applications to nonproportional hazard.
Pub Date: 2024-05-01 | Epub Date: 2023-12-28 | DOI: 10.1002/pst.2353 | Pages: 325-338
Zixing Wang, Qingyang Zhang, Allen Xue, James Whitmore
With the advent of cancer immunotherapy, special features such as delayed treatment effects, cure rates, diminishing treatment effects, and crossing survival curves are often observed in survival analysis. They violate the proportional hazards assumption and pose a unique challenge for conventional trial design and analysis strategies. Many methods, such as the cure rate model, have been developed based on mixture models to incorporate some of these features. In this work, we extend the mixture model to deal with multiple non-proportional patterns and develop its geometric average hazard ratio (gAHR) to quantify the treatment effect. We further derive a sample size and power formula based on the non-centrality parameter of the log-rank test and conduct a thorough analysis of the impact of each parameter on performance. Simulation studies showed a clear advantage of our new method over proportional-hazards-based calculations across different non-proportional hazards scenarios. Moreover, the mixture modeling of two real trials demonstrates how to use prior information on the survival distribution among patients with different biomarker status and early efficacy results in practice. By comparison with a simulation-based design, the new method provided a more efficient way to compute the power and sample size with high accuracy of estimation. Overall, both the theoretical derivation and the empirical studies demonstrate the promise of the proposed method in powering future innovative trial designs.
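As a rough illustration of how a single averaged hazard ratio can drive an event count, the sketch below applies the classical Schoenfeld approximation with a geometric average hazard ratio plugged in; the paper's own formula is derived from the log-rank non-centrality parameter and may differ in detail. The mixture weights and hazard ratios are invented.

```python
import numpy as np
from scipy.stats import norm

def required_events(gahr, alpha=0.05, power=0.9, alloc=0.5):
    # Schoenfeld-style approximation: D = (z_{1-a/2} + z_power)^2 / (p(1-p) log(HR)^2),
    # here with the gAHR standing in for a constant hazard ratio.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (alloc * (1 - alloc) * np.log(gahr) ** 2)

# Hypothetical mixture: a responder stratum (HR 0.6) and a non-responder
# stratum (HR 1.0), averaged geometrically with weights 0.7 / 0.3.
gahr = np.exp(0.7 * np.log(0.6) + 0.3 * np.log(1.0))
print(f"gAHR = {gahr:.3f}, required events ~ {required_events(gahr):.0f}")
```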
{"title":"Sample size calculation for mixture model based on geometric average hazard ratio and its applications to nonproportional hazard.","authors":"Zixing Wang, Qingyang Zhang, Allen Xue, James Whitmore","doi":"10.1002/pst.2353","DOIUrl":"10.1002/pst.2353","url":null,"abstract":"<p><p>With the advent of cancer immunotherapy, some special features including delayed treatment effect, cure rate, diminishing treatment effect and crossing survival are often observed in survival analysis. They violate the proportional hazard model assumption and pose a unique challenge for the conventional trial design and analysis strategies. Many methods like cure rate model have been developed based on mixture model to incorporate some of these features. In this work, we extend the mixture model to deal with multiple non-proportional patterns and develop its geometric average hazard ratio (gAHR) to quantify the treatment effect. We further derive a sample size and power formula based on the non-centrality parameter of the log-rank test and conduct a thorough analysis of the impact of each parameter on performance. Simulation studies showed a clear advantage of our new method over the proportional hazard based calculation across different non-proportional hazard scenarios. Moreover, the mixture modeling of two real trials demonstrates how to use the prior information on the survival distribution among patients with different biomarker and early efficacy results in practice. By comparison with a simulation-based design, the new method provided a more efficient way to compute the power and sample size with high accuracy of estimation. Overall, both theoretical derivation and empirical studies demonstrate the promise of the proposed method in powering future innovative trial designs.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"325-338"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139049061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Going beyond probability of success: Opportunities for statisticians to influence quantitative decision-making at the portfolio level.
Pub Date: 2024-05-01 | DOI: 10.1002/pst.2361 | Pages: 429-438
Stig-Johan Wiklund, Katharine Thorn, Heiko Götte, Kimberley Hacquoil, Gaëlle Saint-Hilary, Alex Carlton
The pharmaceutical industry is plagued by long, costly development and high risk. A company's effective management and optimisation of its portfolio of projects is therefore critical for success. Project metrics such as the probability of success enable modelling of a company's pipeline while accounting for the high uncertainty inherent in the industry. Making portfolio decisions inherently involves managing risk, and statisticians are ideally positioned to champion not only the derivation of metrics for individual projects but also quantitative decision-making at a broader portfolio level. This article examines existing portfolio decision-making approaches and suggests opportunities for statisticians to add value by introducing probabilistic thinking, quantitative decision-making, and increasingly advanced methodologies.
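A toy Monte Carlo illustration (not from the article) of what probabilistic thinking at the portfolio level can look like: with per-project probabilities of success and values, simulation yields a full distribution of portfolio outcomes rather than a single expected value. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = np.array([0.6, 0.35, 0.2, 0.15, 0.5])    # per-project probability of success
value = np.array([300, 800, 1500, 2000, 400])  # value if successful (e.g., $M)
sims = rng.random((100_000, pos.size)) < pos   # independent success draws
portfolio = sims @ value                       # portfolio value per simulation
print(f"Expected portfolio value: {portfolio.mean():.0f}")
print(f"P(at least one success): {sims.any(axis=1).mean():.2f}")
print(f"5th percentile of portfolio value: {np.percentile(portfolio, 5):.0f}")
```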
{"title":"Going beyond probability of success: Opportunities for statisticians to influence quantitative decision-making at the portfolio level.","authors":"Stig-Johan Wiklund, Katharine Thorn, Heiko Götte, Kimberley Hacquoil, Gaëlle Saint-Hilary, Alex Carlton","doi":"10.1002/pst.2361","DOIUrl":"10.1002/pst.2361","url":null,"abstract":"<p><p>The pharmaceutical industry is plagued with long, costly development and high risk. Therefore, a company's effective management and optimisation of a portfolio of projects is critical for success. Project metrics such as the probability of success enable modelling of a company's pipeline accounting for the high uncertainty inherent within the industry. Making portfolio decisions inherently involves managing risk, and statisticians are ideally positioned to champion not only the derivation of metrics for individual projects, but also advocate decision-making at a broader portfolio level. This article aims to examine the existing different portfolio decision-making approaches and to suggest opportunities for statisticians to add value in terms of introducing probabilistic thinking, quantitative decision-making, and increasingly advanced methodologies.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"429-438"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139425264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of hypothetical strategies in acute pain.
Pub Date: 2024-05-01 | Epub Date: 2024-01-11 | DOI: 10.1002/pst.2359 | Pages: 399-407
Jinglin Zhong, David Petullo
Since the publication of ICH E9 (R1), "Addendum to statistical principles for clinical trials: on choosing appropriate estimands and defining sensitivity analyses in clinical trials," there has been much debate about the hypothetical strategy for handling intercurrent events. Arguments against the hypothetical strategy are twofold: (1) the clinical question has limited clinical/regulatory interest; (2) the estimation may require strong statistical assumptions. In this article, we provide an example of a hypothetical strategy for handling the use of rescue medication in the acute pain setting. We argue that the treatment effect attributable to the drug alone is the clinical question of interest and is important to regulators. The hypothetical strategy is important when developing non-opioid treatments because it estimates the treatment effect due to treatment during the pre-specified evaluation period, whereas the treatment policy strategy does not. Two widely acceptable and non-controversial clinical inputs are required to construct a reasonable estimator. More importantly, this estimator does not rely on additional strong statistical assumptions and is considered reasonable for regulatory decision making. We point out examples where estimators for a hypothetical strategy can be constructed without any strong additional statistical assumptions beyond acceptable clinical inputs. We also showcase a new way to obtain estimates based on disease-specific clinical knowledge instead of strong statistical assumptions. In the example presented, we clearly demonstrate the advantages of the hypothetical strategy compared with alternative strategies, including the treatment policy strategy and a composite variable strategy.
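To make the distinction between estimand strategies tangible, here is a heavily simplified simulation (not the authors' estimator): a treatment-policy analysis uses observed post-rescue pain scores as-is, while one naive realization of a hypothetical strategy replaces post-rescue observations with the pre-rescue trajectory extrapolated to the evaluation time. Scores, slopes, and rescue rates are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n, t_eval = 200, 8                             # patients per arm, evaluation time (h)
slope = {"drug": -0.5, "placebo": -0.2}        # true pain decline per hour (0-10 NRS)
rescue_p = {"drug": 0.2, "placebo": 0.5}       # rescue use is more common on placebo
for arm in ("drug", "placebo"):
    base = rng.normal(7, 1, n)                 # baseline pain score
    rescue_time = np.where(rng.random(n) < rescue_p[arm],
                           rng.uniform(2, 6, n), np.inf)
    # Treatment policy: observed score at t_eval includes the extra drop from rescue
    obs = base + slope[arm] * t_eval - 2.5 * (rescue_time < t_eval)
    # Hypothetical (naive realization): extrapolate the pre-rescue trajectory
    hyp = base + slope[arm] * t_eval
    print(f"{arm:8s} treatment-policy mean: {obs.mean():.2f}   "
          f"hypothetical mean: {hyp.mean():.2f}")
```

Because rescue medication lowers scores in both arms (and is used more often on placebo), the treatment-policy contrast between arms is attenuated relative to the hypothetical contrast, which targets the effect of the drug alone.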
{"title":"Application of hypothetical strategies in acute pain.","authors":"Jinglin Zhong, David Petullo","doi":"10.1002/pst.2359","DOIUrl":"10.1002/pst.2359","url":null,"abstract":"<p><p>Since the publication of ICH E9 (R1), \"Addendum to statistical principles for clinical trials: on choosing appropriate estimands and defining sensitivity analyses in clinical trials,\" there has been a lot of debate about the hypothetical strategy for handling intercurrent events. Arguments against the hypothetical strategy are twofold: (1) the clinical question has limited clinical/regulatory interest; (2) the estimation may need strong statistical assumptions. In this article, we provide an example of a hypothetical strategy handling use of rescue medications in the acute pain setting. We argue that the treatment effect of a drug that is attributable to the treatment alone is the clinical question of interest and is important to regulators. The hypothetical strategy is important when developing non-opioid treatment as it estimates the treatment effect due to treatment during the pre-specified evaluation period whereas the treatment policy strategy does not. Two widely acceptable and non-controversial clinical inputs are required to construct a reasonable estimator. More importantly, this estimator does not rely on additional strong statistical assumptions and is considered reasonable for regulatory decision making. In this article, we point out examples where estimators for a hypothetical strategy can be constructed without any strong additional statistical assumptions besides acceptable clinical inputs. We also showcase a new way to obtain estimation based on disease specific clinical knowledge instead of strong statistical assumptions. In the example presented, we clearly demonstrate the advantages of the hypothetical strategy compared to alternative strategies including the treatment policy strategy and a composite variable strategy.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"399-407"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139425263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An evolutionary algorithm for the direct optimization of covariate balance between nonrandomized populations.
Pub Date: 2024-05-01 | Epub Date: 2023-12-18 | DOI: 10.1002/pst.2352 | Pages: 288-307
Stephen Privitera, Hooman Sedghamiz, Alexander Hartenstein, Tatsiana Vaitsiakhovich, Frank Kleinjung
Matching reduces confounding bias in comparing the outcomes of nonrandomized patient populations by removing systematic differences between them. Under very basic assumptions, propensity score (PS) matching can be shown to eliminate bias entirely in estimating the average treatment effect on the treated. In practice, misspecification of the PS model leads to deviations from theory and matching quality is ultimately judged by the observed post-matching balance in baseline covariates. Since covariate balance is the ultimate arbiter of successful matching, we argue for an approach to matching in which the success criterion is explicitly specified and describe an evolutionary algorithm to directly optimize an arbitrary metric of covariate balance. We demonstrate the performance of the proposed method using a simulated dataset of 275,000 patients and 10 matching covariates. We further apply the method to match 250 patients from a recently completed clinical trial to a pool of more than 160,000 patients identified from electronic health records on 101 covariates. In all cases, we find that the proposed method outperforms PS matching as measured by the specified balance criterion. We additionally find that the evolutionary approach can perform comparably to another popular direct optimization technique based on linear integer programming, while having the additional advantage of supporting arbitrary balance metrics. We demonstrate how the chosen balance metric impacts the statistical properties of the resulting matched populations, emphasizing the potential impact of using nonlinear balance functions in constructing an external control arm. We release our implementation of the considered algorithms in Python.
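A minimal sketch in the spirit of the proposal (not the released Python package): a mutation-only evolutionary search over control subsets that directly minimizes the largest absolute standardized mean difference (SMD) across covariates. Cohort sizes and GA settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trt, n_pool, k = 50, 2000, 5
X_trt = rng.normal(0.3, 1.0, (n_trt, k))       # treated cohort (shifted covariates)
X_pool = rng.normal(0.0, 1.0, (n_pool, k))     # candidate external controls

def max_smd(idx):
    # Balance criterion: largest absolute standardized mean difference
    Xc = X_pool[idx]
    sd = np.sqrt((X_trt.var(0, ddof=1) + Xc.var(0, ddof=1)) / 2)
    return np.abs((X_trt.mean(0) - Xc.mean(0)) / sd).max()

# Initial population of candidate 1:1 matched control sets
pop = [rng.choice(n_pool, n_trt, replace=False) for _ in range(40)]
for gen in range(200):
    pop.sort(key=max_smd)
    pop = pop[:20]                             # selection: keep the fittest sets
    children = []
    for parent in pop:
        child = parent.copy()
        child[rng.integers(0, n_trt)] = rng.integers(0, n_pool)  # mutate one match
        if len(set(child)) == n_trt:           # keep matched controls distinct
            children.append(child)
    pop += children
best = min(pop, key=max_smd)
print(f"best max |SMD| after evolutionary search: {max_smd(best):.3f}")
```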
{"title":"An evolutionary algorithm for the direct optimization of covariate balance between nonrandomized populations.","authors":"Stephen Privitera, Hooman Sedghamiz, Alexander Hartenstein, Tatsiana Vaitsiakhovich, Frank Kleinjung","doi":"10.1002/pst.2352","DOIUrl":"10.1002/pst.2352","url":null,"abstract":"<p><p>Matching reduces confounding bias in comparing the outcomes of nonrandomized patient populations by removing systematic differences between them. Under very basic assumptions, propensity score (PS) matching can be shown to eliminate bias entirely in estimating the average treatment effect on the treated. In practice, misspecification of the PS model leads to deviations from theory and matching quality is ultimately judged by the observed post-matching balance in baseline covariates. Since covariate balance is the ultimate arbiter of successful matching, we argue for an approach to matching in which the success criterion is explicitly specified and describe an evolutionary algorithm to directly optimize an arbitrary metric of covariate balance. We demonstrate the performance of the proposed method using a simulated dataset of 275,000 patients and 10 matching covariates. We further apply the method to match 250 patients from a recently completed clinical trial to a pool of more than 160,000 patients identified from electronic health records on 101 covariates. In all cases, we find that the proposed method outperforms PS matching as measured by the specified balance criterion. We additionally find that the evolutionary approach can perform comparably to another popular direct optimization technique based on linear integer programming, while having the additional advantage of supporting arbitrary balance metrics. We demonstrate how the chosen balance metric impacts the statistical properties of the resulting matched populations, emphasizing the potential impact of using nonlinear balance functions in constructing an external control arm. We release our implementation of the considered algorithms in Python.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"288-307"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138806724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of a flexible piecewise linear mixed-effects model in the analysis of randomized cross-over trials.
Pub Date: 2024-05-01 | Epub Date: 2023-12-25 | DOI: 10.1002/pst.2357 | Pages: 370-384
Moses Mwangi, Geert Verbeke, Edmund Njeru Njagi, Alvaro Jose Florez, Samuel Mwalili, Anna Ivanova, Zipporah N Bukania, Geert Molenberghs
Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment. They have received a lot of attention, particularly in connection with regulatory requirements for new drugs. The main advantage of cross-over designs over conventional parallel designs is increased precision, thanks to within-subject comparisons. In the statistical literature, more recent developments in the analysis of cross-over trials are discussed, in particular regarding repeated measures. A piecewise linear model within the mixed-effects framework has been proposed for the analysis of cross-over trials. In this article, we report on a simulation study comparing the performance of a piecewise linear mixed-effects (PLME) model against two commonly cited models used in the analysis of cross-over trials: Grizzle's mixed-effects (GME) and Jones & Kenward's mixed-effects (JKME) models. Our simulation study mirrored real-life situations by deriving the true underlying parameters from empirical data. The findings from the real-life data confirmed the original hypothesis that high-dose iodine salt significantly lowers diastolic blood pressure (DBP). We further evaluated the performance of the PLME model against the GME and JKME models within a univariate modeling framework, through a simulation study mimicking a 2 × 2 cross-over design. The fixed-effects, random-effects, and residual error parameters used in the simulation were estimated from the DBP data using a PLME model. The initial results, with full specification of random intercept and slope(s), showed that the univariate PLME model outperformed the GME and JKME models in estimating the variance-covariance matrix (G) governing the random effects, allowing satisfactory model convergence during estimation. When a hierarchical viewpoint is adopted, in the sense that outcomes are specified conditionally upon random effects, the variance-covariance matrix of the random effects must be positive-definite. The PLME model is preferred especially when modeling an increased number of random effects, compared to the GME and JKME models, which work equally well with random intercepts only. In some cases, additional random effects can explain much of the variability in the data, thus improving precision in the estimation of the estimand (effect size) parameters.
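A generic stand-in for fitting a piecewise linear mixed-effects model (not the authors' exact PLME specification), assuming the statsmodels package: the piecewise term enters as a "slope change after the knot" covariate, with a random intercept and random slope per subject. The simulated 2 × 2 cross-over DBP data and the knot at the period change-over are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, times = 40, np.arange(0, 8)                 # 2x2 cross-over, 4 visits per period
rows = []
for i in range(n):
    b = rng.normal(0, 2)                       # random intercept per subject
    seq = i % 2                                # sequence 0: AB, 1: BA
    for t in times:
        period = int(t >= 4)                   # knot at the period change-over
        trt = (seq + period) % 2               # treatment received in this period
        y = (90 + b - 1.0 * t + 2.0 * max(t - 4.0, 0.0)
             - 3.0 * trt + rng.normal(0, 1))
        rows.append((i, t, max(t - 4.0, 0.0), trt, y))
df = pd.DataFrame(rows, columns=["subject", "t", "t_after_knot", "trt", "dbp"])

# Piecewise linearity: overall slope in t plus a change in slope after the knot;
# random intercept and random slope in t by subject.
m = smf.mixedlm("dbp ~ t + t_after_knot + trt", df,
                groups=df["subject"], re_formula="~t").fit()
print(m.summary())
```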
{"title":"Evaluation of a flexible piecewise linear mixed-effects model in the analysis of randomized cross-over trials.","authors":"Moses Mwangi, Geert Verbeke, Edmund Njeru Njagi, Alvaro Jose Florez, Samuel Mwalili, Anna Ivanova, Zipporah N Bukania, Geert Molenberghs","doi":"10.1002/pst.2357","DOIUrl":"10.1002/pst.2357","url":null,"abstract":"<p><p>Cross-over designs are commonly used in randomized clinical trials to estimate efficacy of a new treatment. They have received a lot of attention, particularly in connection with regulatory requirements for new drugs. The main advantage of using cross-over designs over conventional parallel designs is increased precision, thanks to within-subject comparisons. In the statistical literature, more recent developments are discussed in the analysis of cross-over trials, in particular regarding repeated measures. A piecewise linear model within the framework of mixed effects has been proposed in the analysis of cross-over trials. In this article, we report on a simulation study comparing performance of a piecewise linear mixed-effects (PLME) model against two commonly cited models-Grizzle's mixed-effects (GME) and Jones & Kenward's mixed-effects (JKME) models-used in the analysis of cross-over trials. Our simulation study tried to mirror real-life situation by deriving true underlying parameters from empirical data. The findings from real-life data confirmed the original hypothesis that high-dose iodine salt have significantly lowering effect on diastolic blood pressure (DBP). We further sought to evaluate the performance of PLME model against GME and JKME models, within univariate modeling framework through a simulation study mimicking a 2 × 2 cross-over design. The fixed-effects, random-effects and residual error parameters used in the simulation process were estimated from DBP data, using a PLME model. The initial results with full specification of random intercept and slope(s), showed that the univariate PLME model performed better than the GME and JKME models in estimation of variance-covariance matrix (G) governing the random effects, allowing satisfactory model convergence during estimation. When a hierarchical view-point is adopted, in the sense that outcomes are specified conditionally upon random effects, the variance-covariance matrix of the random effects must be positive-definite. The PLME model is preferred especially in modeling an increased number of random effects, compared to the GME and JKME models that work equally well with random intercepts only. In some cases, additional random effects could explain much variability in the data, thus improving precision in estimation of the estimands (effect size) parameters.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"370-384"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139037864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On "Re-randomization tests as sensitivity analyses to confirm immunological noninferiority of an investigational vaccine: Case study" by Luca Grassano et al. (2023, Pharmaceutical Statistics).
Pub Date: 2024-05-01 | Epub Date: 2024-01-14 | DOI: 10.1002/pst.2363 | Pages: 425-428
Oleksandr Sverdlov, Vance W Berger, Kerstine Carter
{"title":"On \"Re-randomization tests as sensitivity analyses to confirm immunological noninferiority of an investigational vaccine: Case study\" by Luca Grassano et al. (2023, Pharmaceutical Statistics).","authors":"Oleksandr Sverdlov, Vance W Berger, Kerstine Carter","doi":"10.1002/pst.2363","DOIUrl":"10.1002/pst.2363","url":null,"abstract":"","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"425-428"},"PeriodicalIF":1.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139467001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of nonparametric estimators of the expected number of recurrent events.
Pub Date: 2024-05-01 | Epub Date: 2023-12-28 | DOI: 10.1002/pst.2356 | Pages: 339-369
Alexandra Erdmann, Jan Beyersmann, Erich Bluhmki
We compare the performance of nonparametric estimators for the mean number of recurrent events and provide a systematic overview for different recurrent event settings. The mean number of recurrent events is an easily interpreted marginal feature often used for treatment comparisons in clinical trials. Incomplete observations, dependencies between successive events, terminating events acting as competing risks, or gaps between at-risk periods complicate the estimation. We use survival multistate models to represent different complex recurrent event situations, profiting from recent advances in nonparametric estimation for non-Markov multistate models, and explain several estimators by using multistate intensity processes, including the common Nelson-Aalen-type estimators with and without competing mortality. In addition to building on estimation of state occupation probabilities in non-Markov models, we consider a simple extension of the Nelson-Aalen estimator that allows for dependence on the number of prior recurrent events. We pay particular attention to the assumptions required for the censoring mechanism, one issue being that some settings require the censoring process to be entirely unrelated while others allow for state-dependent or event-driven censoring. We conducted extensive simulation studies to compare the estimators in various complex situations with recurrent events. Our practical example deals with recurrent chronic obstructive pulmonary disease exacerbations in a clinical study, which is also used to illustrate two-sample inference using resampling.
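In the simplest setting (no competing mortality, censoring unrelated to the event process), the Nelson-Aalen-type estimator of the expected number of recurrent events is mu(t) = sum over event times s <= t of d(s)/Y(s), where d(s) counts recurrent events at s and Y(s) counts subjects still under observation. A minimal sketch on hypothetical Poisson-process data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
cens = rng.uniform(2, 5, n)                    # per-subject censoring times
event_times = []
for i in range(n):                             # recurrences: Poisson process, rate 1
    t = rng.exponential(1.0)
    while t < cens[i]:
        event_times.append(t)
        t += rng.exponential(1.0)
event_times = np.sort(np.array(event_times))

def mu_hat(t_grid):
    # Nelson-Aalen-type estimate: cumulative sum of d(s)/Y(s) up to each t
    out = []
    for t in t_grid:
        evs = event_times[event_times <= t]
        out.append(sum(1.0 / (cens >= s).sum() for s in evs))
    return np.array(out)

grid = np.linspace(0.5, 4.0, 8)
for t, m in zip(grid, mu_hat(grid)):
    # For a rate-1 process, the estimate should track mu(t) = t
    print(f"t={t:.1f}  estimated mean events per subject: {m:.2f}")
```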
{"title":"Comparison of nonparametric estimators of the expected number of recurrent events.","authors":"Alexandra Erdmann, Jan Beyersmann, Erich Bluhmki","doi":"10.1002/pst.2356","DOIUrl":"10.1002/pst.2356","url":null,"abstract":"<p><p>We compare the performance of nonparametric estimators for the mean number of recurrent events and provide a systematic overview for different recurrent event settings. The mean number of recurrent events is an easily interpreted marginal feature often used for treatment comparisons in clinical trials. Incomplete observations, dependencies between successive events, terminating events acting as competing risk, or gaps between at risk periods complicate the estimation. We use survival multistate models to represent different complex recurrent event situations, profiting from recent advances in nonparametric estimation for non-Markov multistate models, and explain several estimators by using multistate intensity processes, including the common Nelson-Aalen-type estimators with and without competing mortality. In addition to building on estimation of state occupation probabilities in non-Markov models, we consider a simple extension of the Nelson-Aalen estimator by allowing for dependence on the number of prior recurrent events. We pay particular attention to the assumptions required for the censoring mechanism, one issue being that some settings require the censoring process to be entirely unrelated while others allow for state-dependent or event-driven censoring. We conducted extensive simulation studies to compare the estimators in various complex situations with recurrent events. Our practical example deals with recurrent chronic obstructive pulmonary disease exacerbations in a clinical study, which will also be used to illustrate two-sample-inference using resampling.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"339-369"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139049060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical analysis of actigraphy data with generalised additive models.
Pub Date: 2024-05-01 | Epub Date: 2023-11-16 | DOI: 10.1002/pst.2350 | Pages: 308-324
Edoardo Lisi, Juan J Abellan
There is growing interest in the use of physical activity data in clinical studies, particularly in diseases that limit patients' mobility. High-frequency data collected with digital sensors are typically summarised into actigraphy features aggregated at epoch level (e.g., by minute). The statistical analysis of such volumes of data is not straightforward. The general trend is to derive metrics, capturing specific aspects of physical activity, that condense (say) a week's worth of data into a single numerical value. Here we propose to analyse the entire time series using Generalised Additive Models (GAMs). GAMs are semi-parametric models that allow inclusion of both parametric and non-parametric terms in the linear predictor. The latter are smooth terms (e.g., splines) and, in the context of minute-by-minute actigraphy data analysis, they can be used to assess daily patterns of physical activity. This in turn can be used to better understand changes over time in longitudinal studies as well as to compare treatment groups. We illustrate the application of GAMs in two clinical studies where actigraphy data were collected: a non-drug, single-arm study in patients with amyotrophic lateral sclerosis, and a physical-activity sub-study included in a phase 2b clinical trial in patients with chronic obstructive pulmonary disease.
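As a self-contained stand-in for a penalized-spline GAM smooth of the daily pattern, the sketch below fits a cyclic harmonic (Fourier) basis over the 24-hour clock by least squares to simulated minute-level activity counts; the basis size and data are invented.

```python
import numpy as np

rng = np.random.default_rng(11)
minutes = np.arange(7 * 24 * 60)                       # one week of minute epochs
hour = (minutes / 60) % 24
true = 50 + 40 * np.sin(2 * np.pi * (hour - 14) / 24)  # afternoon activity peak
y = true + rng.normal(0, 20, minutes.size)             # noisy activity counts

# Cyclic smooth: intercept plus the first K harmonics of the 24-h period,
# which is periodic by construction (a simple substitute for a cyclic spline)
K = 3
X = np.column_stack([np.ones_like(hour)] +
                    [f(2 * np.pi * k * hour / 24)
                     for k in range(1, K + 1) for f in (np.sin, np.cos)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted_profile = X[:24 * 60] @ beta                    # smooth daily profile
peak_hour = hour[:24 * 60][np.argmax(fitted_profile)]
print(f"estimated peak activity at ~{peak_hour:.1f} h")
```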
{"title":"Statistical analysis of actigraphy data with generalised additive models.","authors":"Edoardo Lisi, Juan J Abellan","doi":"10.1002/pst.2350","DOIUrl":"10.1002/pst.2350","url":null,"abstract":"<p><p>There is a growing interest in the use of physical activity data in clinical studies, particularly in diseases that limit mobility in patients. High-frequency data collected with digital sensors are typically summarised into actigraphy features aggregated at epoch level (e.g., by minute). The statistical analysis of such volume of data is not straightforward. The general trend is to derive metrics, capturing specific aspects of physical activity, that condense (say) a week worth of data into a single numerical value. Here we propose to analyse the entire time-series data using Generalised Additive Models (GAMs). GAMs are semi-parametric models that allow inclusion of both parametric and non-parametric terms in the linear predictor. The latter are smooth terms (e.g., splines) and, in the context of actigraphy minute-by-minute data analysis, they can be used to assess daily patterns of physical activity. This in turn can be used to better understand changes over time in longitudinal studies as well as to compare treatment groups. We illustrate the application of GAMs in two clinical studies where actigraphy data was collected: a non-drug, single-arm study in patients with amyotrophic lateral sclerosis, and a physical-activity sub-study included in a phase 2b clinical trial in patients with chronic obstructive pulmonary disease.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"308-324"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136398647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frailty model with change points for survival analysis.
Pub Date: 2024-05-01 | Epub Date: 2024-01-08 | DOI: 10.1002/pst.2360 | Pages: 408-424
Masahiro Kojima, Shunichiro Orihara
We propose a novel frailty model with change points that applies random effects to a Cox proportional hazards model to adjust for heterogeneity between clusters. In the eight Empowered Action Group (EAG) states of India, which receive special focus, survival curves for children up to the age of five differ across states. Therefore, when analyzing survival times for the eight EAG states, we need to adjust for effects among states (clusters). Because the frailty model includes random effects, the parameters are estimated using the expectation-maximization (EM) algorithm. Additionally, our model needs to estimate change points; we thus propose a new algorithm that extends the conventional estimation algorithm to the frailty model with change points. We show a practical example demonstrating how to estimate the change points and the parameters of the random-effect distribution. Our proposed model can be easily analyzed using an existing R package. We conducted simulation studies with three scenarios to confirm the performance of our proposed model. We re-analyzed the survival time data of the eight EAG states in India to show the difference in analysis results with and without the random effect. In conclusion, we confirmed that the frailty model with change points has higher accuracy than the model without a random effect. Our proposed model is useful when heterogeneity needs to be taken into account. Additionally, the absence of heterogeneity did not affect the estimation of the regression parameters.
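The change-point component can be sketched on its own (the authors additionally include a frailty term fitted by EM, which is omitted here): a piecewise exponential model whose change point is chosen by profile likelihood over a grid. Rates, sample size, and the grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau_true = 500, 2.0
# Piecewise exponential: hazard 1.0 before the change point, 0.4 after it
u = rng.exponential(1.0, n)
t = np.where(u < tau_true, u, tau_true + (u - tau_true) / 0.4)
c = rng.uniform(1, 8, n)                       # independent censoring
time, event = np.minimum(t, c), (t <= c).astype(float)

def profile_loglik(tau):
    # Exposure and event counts in the two hazard segments around tau
    e1 = np.minimum(time, tau).sum()
    e2 = np.maximum(time - tau, 0).sum()
    d1 = (event * (time <= tau)).sum()
    d2 = (event * (time > tau)).sum()
    if d1 == 0 or d2 == 0 or e1 == 0 or e2 == 0:
        return -np.inf
    lam1, lam2 = d1 / e1, d2 / e2              # segment-wise hazard MLEs
    return d1 * np.log(lam1) - lam1 * e1 + d2 * np.log(lam2) - lam2 * e2

grid = np.linspace(0.5, 5.0, 91)
tau_hat = grid[np.argmax([profile_loglik(tau) for tau in grid])]
print(f"estimated change point: {tau_hat:.2f} (true {tau_true})")
```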
{"title":"Frailty model with change points for survival analysis.","authors":"Masahiro Kojima, Shunichiro Orihara","doi":"10.1002/pst.2360","DOIUrl":"10.1002/pst.2360","url":null,"abstract":"<p><p>We propose a novel frailty model with change points applying random effects to a Cox proportional hazard model to adjust the heterogeneity between clusters. In the specially focused eight Empowered Action Group (EAG) states in India, there are problems with different survival curves for children up to the age of five in different states. Therefore, when analyzing the survival times for the eight EAG states, we need to adjust for the effects among states (clusters). Because the frailty model includes random effects, the parameters are estimated using the expectation-maximization (EM) algorithm. Additionally, our model needs to estimate change points; we thus propose a new algorithm extending the conventional estimation algorithm to the frailty model with change points to solve the problem. We show a practical example to demonstrate how to estimate the change point and the parameters of the distribution of random effect. Our proposed model can be easily analyzed using the existing R package. We conducted simulation studies with three scenarios to confirm the performance of our proposed model. We re-analyzed the survival time data of the eight EAG states in India to show the difference in analysis results with and without random effect. In conclusion, we confirmed that the frailty model with change points has a higher accuracy than the model without a random effect. Our proposed model is useful when heterogeneity needs to be taken into account. Additionally, the absence of heterogeneity did not affect the estimation of the regression parameters.</p>","PeriodicalId":19934,"journal":{"name":"Pharmaceutical Statistics","volume":" ","pages":"408-424"},"PeriodicalIF":1.5,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139403998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}