Pub Date: 2024-07-01 | Epub Date: 2024-05-23 | DOI: 10.1177/09622802241254197
Brennan C Kahan, Bryan S Blette, Michael O Harhay, Scott D Halpern, Vipul Jairath, Andrew Copas, Fan Li
Estimands can help clarify the interpretation of treatment effects and ensure that estimators are aligned with the study's objectives. Cluster-randomised trials require additional attributes to be defined within the estimand compared to individually randomised trials, including whether treatment effects are marginal or cluster-specific, and whether they are participant- or cluster-average. In this paper, we provide formal definitions of estimands encompassing both these attributes using potential outcomes notation and describe differences between them. We then provide an overview of estimators for each estimand, describe their assumptions, and show consistency (i.e. asymptotically unbiased estimation) for a series of analyses based on cluster-level summaries. Then, through a re-analysis of a published cluster-randomised trial, we demonstrate that the choice of both estimand and estimator can affect interpretation. For instance, the estimated odds ratio ranged from 1.38 (p = 0.17) to 1.83 (p = 0.03) depending on the target estimand, and for some estimands, the choice of estimator affected the conclusions by leading to smaller treatment effect estimates. We conclude that careful specification of the estimand, along with an appropriate choice of estimator, is essential to ensuring that cluster-randomised trials address the right question.
{"title":"Demystifying estimands in cluster-randomised trials.","authors":"Brennan C Kahan, Bryan S Blette, Michael O Harhay, Scott D Halpern, Vipul Jairath, Andrew Copas, Fan Li","doi":"10.1177/09622802241254197","DOIUrl":"10.1177/09622802241254197","url":null,"abstract":"<p><p>Estimands can help clarify the interpretation of treatment effects and ensure that estimators are aligned with the study's objectives. Cluster-randomised trials require additional attributes to be defined within the estimand compared to individually randomised trials, including whether treatment effects are <i>marginal</i> or <i>cluster-specific</i>, and whether they are <i>participant-</i> or <i>cluster-average</i>. In this paper, we provide formal definitions of estimands encompassing both these attributes using potential outcomes notation and describe differences between them. We then provide an overview of estimators for each estimand, describe their assumptions, and show consistency (i.e. asymptotically unbiased estimation) for a series of analyses based on cluster-level summaries. Then, through a re-analysis of a published cluster-randomised trial, we demonstrate that the choice of both estimand and estimator can affect interpretation. For instance, the estimated odds ratio ranged from 1.38 (<i>p</i> = 0.17) to 1.83 (<i>p</i> = 0.03) depending on the target estimand, and for some estimands, the choice of estimator affected the conclusions by leading to smaller treatment effect estimates. We conclude that careful specification of the estimand, along with an appropriate choice of estimator, is essential to ensuring that cluster-randomised trials address the right question.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1211-1232"},"PeriodicalIF":1.6,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11348634/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-01 | Epub Date: 2024-05-23 | DOI: 10.1177/09622802241254220
Yasuhiro Hagiwara, Yutaka Matsuyama
Modified Poisson regression, which estimates the regression parameters in the log-binomial regression model using the Poisson quasi-likelihood estimating equation and robust variance, is a useful tool for estimating adjusted risk and prevalence ratios in binary outcome analysis. Although several goodness-of-fit tests have been developed for other binary regressions, few goodness-of-fit tests are available for modified Poisson regression. In this study, we propose several goodness-of-fit tests for modified Poisson regression, including the modified Hosmer-Lemeshow test with empirical variance, the Tsiatis test, normalized Pearson chi-square tests with binomial variance and with Poisson variance, and a normalized residual sum of squares test. The original Hosmer-Lemeshow test and the normalized Pearson chi-square test with binomial variance are inappropriate for modified Poisson regression, which can produce fitted values exceeding 1 owing to the unconstrained parameter space. A simulation study revealed that the normalized residual sum of squares test performed well in terms of type I error probability and power against a misspecified link function. We apply the proposed goodness-of-fit tests to the analysis of cross-sectional data on patients with cancer. We recommend the normalized residual sum of squares test as a goodness-of-fit test in modified Poisson regression.
{"title":"Goodness-of-fit tests for modified Poisson regression possibly producing fitted values exceeding one in binary outcome analysis.","authors":"Yasuhiro Hagiwara, Yutaka Matsuyama","doi":"10.1177/09622802241254220","DOIUrl":"10.1177/09622802241254220","url":null,"abstract":"<p><p>Modified Poisson regression, which estimates the regression parameters in the log-binomial regression model using the Poisson quasi-likelihood estimating equation and robust variance, is a useful tool for estimating the adjusted risk and prevalence ratio in binary outcome analysis. Although several goodness-of-fit tests have been developed for other binary regressions, few goodness-of-fit tests are available for modified Poisson regression. In this study, we proposed several goodness-of-fit tests for modified Poisson regression, including the modified Hosmer-Lemeshow test with empirical variance, Tsiatis test, normalized Pearson chi-square tests with binomial variance and Poisson variance, and normalized residual sum of squares test. The original Hosmer-Lemeshow test and normalized Pearson chi-square test with binomial variance are inappropriate for the modified Poisson regression, which can produce a fitted value exceeding 1 owing to the unconstrained parameter space. A simulation study revealed that the normalized residual sum of squares test performed well regarding the type I error probability and the power for a wrong link function. We applied the proposed goodness-of-fit tests to the analysis of cross-sectional data of patients with cancer. We recommend the normalized residual sum of squares test as a goodness-of-fit test in the modified Poisson regression.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1185-1196"},"PeriodicalIF":1.6,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-01 | Epub Date: 2015-05-17 | DOI: 10.1177/0962280215586011
{"title":"Retraction notice.","authors":"","doi":"10.1177/0962280215586011","DOIUrl":"10.1177/0962280215586011","url":null,"abstract":"","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"NP1"},"PeriodicalIF":1.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33315488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01 | Epub Date: 2024-03-29 | DOI: 10.1177/09622802241236953
Senmiao Ni, Zihang Zhong, Yang Zhao, Feng Chen, Jingwei Wu, Hao Yu, Jianling Bai
Cluster randomization trials with survival endpoints are predominantly used in drug development and clinical care research when drug treatments or interventions are delivered at a group level. Unlike the conventional cluster randomization design, the stratified cluster randomization design is generally considered more effective in reducing the impact of imbalanced baseline prognostic factors and varying cluster sizes between groups when these stratification factors are adopted in the design. Failure to account for stratification and cluster size variability may lead to underpowered analyses and inaccurate sample size estimation. While sample size estimation methods exist for unstratified cluster randomization trials, no explicit sample size formula has been developed for survival endpoints under a stratified cluster randomization design. In this article, we present a closed-form sample size formula based on the stratified cluster log-rank statistic for stratified cluster randomization trials with survival endpoints. It provides an integrated solution for sample size estimation that accounts for cluster size variation, baseline hazard heterogeneity, and the intracluster correlation coefficient estimated from preliminary data. Simulation studies show that the proposed formula provides the appropriate sample size for achieving the desired statistical power under various parameter configurations. A real example of a stratified cluster randomization trial in a population with stable coronary heart disease is presented to illustrate our method.
{"title":"Sample size estimation for stratified cluster randomization trial with survival endpoint.","authors":"Senmiao Ni, Zihang Zhong, Yang Zhao, Feng Chen, Jingwei Wu, Hao Yu, Jianling Bai","doi":"10.1177/09622802241236953","DOIUrl":"10.1177/09622802241236953","url":null,"abstract":"<p><p>Cluster randomization trials with survival endpoint are predominantly used in drug development and clinical care research when drug treatments or interventions are delivered at a group level. Unlike conventional cluster randomization design, stratified cluster randomization design is generally considered more effective in reducing the impacts of imbalanced baseline prognostic factors and varying cluster sizes between groups when these stratification factors are adopted in the design. Failure to account for stratification and cluster size variability may lead to underpowered analysis and inaccurate sample size estimation. Apart from the sample size estimation in unstratified cluster randomization trials, there are no development of an explicit sample size formula for survival endpoint when a stratified cluster randomization design is employed. In this article, we present a closed-form sample size formula based on the stratified cluster log-rank statistics for stratified cluster randomization trials with survival endpoint. It provides an integrated solution for sample size estimation that account for cluster size variation, baseline hazard heterogeneity, and the estimated intracluster correlation coefficient based on the preliminary data. Simulation studies show that the proposed formula provides the appropriate sample size for achieving the desired statistical power under various parameter configurations. A real example of a stratified cluster randomization trial in the population with stable coronary heart disease is presented to illustrate our method.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"838-857"},"PeriodicalIF":2.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140319184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01 | Epub Date: 2024-03-18 | DOI: 10.1177/09622802241236952
Shasha Han, Joel Goh, Fanwen Meng, Melvin Khee-Shing Leow, Donald B Rubin
Existing methods that use propensity scores for heterogeneous treatment effect estimation on non-experimental data do not readily extend to the case of more than two treatment options. In this work, we develop a new propensity score-based method for heterogeneous treatment effect estimation when there are three or more treatment options, and prove that it generates unbiased estimates. We demonstrate our method on a real registry of patients in Singapore with diabetic dyslipidemia. On this dataset, our method generates heterogeneous treatment recommendations for patients among three options: statins, fibrates, and non-pharmacological treatment to control patients' lipid ratios (total cholesterol divided by high-density lipoprotein level). In our numerical study, our proposed method generated more stable estimates compared to a benchmark method based on a multi-dimensional propensity score.
{"title":"Contrast-specific propensity scores for causal inference with multiple interventions.","authors":"Shasha Han, Joel Goh, Fanwen Meng, Melvin Khee-Shing Leow, Donald B Rubin","doi":"10.1177/09622802241236952","DOIUrl":"10.1177/09622802241236952","url":null,"abstract":"<p><p>Existing methods that use propensity scores for heterogeneous treatment effect estimation on non-experimental data do not readily extend to the case of more than two treatment options. In this work, we develop a new propensity score-based method for heterogeneous treatment effect estimation when there are three or more treatment options, and prove that it generates unbiased estimates. We demonstrate our method on a real patient registry of patients in Singapore with diabetic dyslipidemia. On this dataset, our method generates heterogeneous treatment recommendations for patients among three options: Statins, fibrates, and non-pharmacological treatment to control patients' lipid ratios (total cholesterol divided by high-density lipoprotein level). In our numerical study, our proposed method generated more stable estimates compared to a benchmark method based on a multi-dimensional propensity score.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"825-837"},"PeriodicalIF":2.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140159035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01 | Epub Date: 2024-03-19 | DOI: 10.1177/09622802241238998
Duc-Khanh To, Gianfranco Adimari, Monica Chiogna
The empirical likelihood is a powerful nonparametric tool that emulates its parametric counterpart, the parametric likelihood, preserving many of its large-sample properties. This article tackles the problem of assessing the discriminatory power of three-class diagnostic tests from an empirical likelihood perspective. In particular, we concentrate on interval estimation in a three-class receiver operating characteristic analysis, where a variety of inferential tasks could be of interest. We present novel theoretical results and tailored techniques for solving some of these tasks efficiently. Extensive simulation experiments are provided in a supporting role, with our novel proposals compared to existing competitors where possible. The new proposals turn out to be extremely flexible, competing well with existing methods and accommodating a range of distributions for the target populations, such as mixtures. We illustrate the application of the novel proposals with a real data example. The article ends with a discussion and some directions for future research.
{"title":"Interval estimation in three-class receiver operating characteristic analysis: A fairly general approach based on the empirical likelihood.","authors":"Duc-Khanh To, Gianfranco Adimari, Monica Chiogna","doi":"10.1177/09622802241238998","DOIUrl":"10.1177/09622802241238998","url":null,"abstract":"<p><p>The empirical likelihood is a powerful nonparametric tool, that emulates its parametric counterpart-the parametric likelihood-preserving many of its large-sample properties. This article tackles the problem of assessing the discriminatory power of three-class diagnostic tests from an empirical likelihood perspective. In particular, we concentrate on interval estimation in a three-class receiver operating characteristic analysis, where a variety of inferential tasks could be of interest. We present novel theoretical results and tailored techniques studied to efficiently solve some of such tasks. Extensive simulation experiments are provided in a supporting role, with our novel proposals compared to existing competitors, when possible. It emerges that our new proposals are extremely flexible, being able to compete with contestants and appearing suited to accommodating several distributions, such, for example, mixtures, for target populations. We illustrate the application of the novel proposals with a real data example. The article ends with a discussion and a presentation of some directions for future research.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"875-893"},"PeriodicalIF":2.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140159036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01 | DOI: 10.1177/09622802241236954
Xuechen Wang, Hyejung Lee, Benjamin Haaland, Kathleen Kerrigan, Sonam Puri, Wallace Akerley, Jincheng Shen
Observational data (e.g. electronic health records) have become increasingly important in evidence-based research on dynamic treatment regimes, which tailor treatments over time to patients based on their characteristics and evolving clinical history. It is of great interest for clinicians and statisticians to identify an optimal dynamic treatment regime that can produce the best expected clinical outcome for each individual and thus maximize the treatment benefit over the population. Observational data impose various challenges for using statistical tools to estimate optimal dynamic treatment regimes. Notably, the task becomes more sophisticated when the clinical outcome of primary interest is time-to-event. Here, we propose a matching-based machine learning method to identify the optimal dynamic treatment regime with time-to-event outcomes subject to right-censoring using electronic health record data. In contrast to the established inverse probability weighting-based dynamic treatment regime methods, our proposed approach provides better protection against model misspecification and extreme weights in the context of treatment sequences, effectively addressing a prevalent challenge in the longitudinal analysis of electronic health record data. In simulations, the proposed method demonstrates robust performance across a range of scenarios. In addition, we illustrate the method with an application to estimate optimal dynamic treatment regimes for patients with advanced non-small cell lung cancer using a real-world, nationwide electronic health record database from Flatiron Health.
{"title":"A matching-based machine learning approach to estimating optimal dynamic treatment regimes with time-to-event outcomes.","authors":"Xuechen Wang, Hyejung Lee, Benjamin Haaland, Kathleen Kerrigan, Sonam Puri, Wallace Akerley, Jincheng Shen","doi":"10.1177/09622802241236954","DOIUrl":"10.1177/09622802241236954","url":null,"abstract":"<p><p>Observational data (e.g. electronic health records) has become increasingly important in evidence-based research on dynamic treatment regimes, which tailor treatments over time to patients based on their characteristics and evolving clinical history. It is of great interest for clinicians and statisticians to identify an optimal dynamic treatment regime that can produce the best expected clinical outcome for each individual and thus maximize the treatment benefit over the population. Observational data impose various challenges for using statistical tools to estimate optimal dynamic treatment regimes. Notably, the task becomes more sophisticated when the clinical outcome of primary interest is time-to-event. Here, we propose a matching-based machine learning method to identify the optimal dynamic treatment regime with time-to-event outcomes subject to right-censoring using electronic health record data. In contrast to the established inverse probability weighting-based dynamic treatment regime methods, our proposed approach provides better protection against model misspecification and extreme weights in the context of treatment sequences, effectively addressing a prevalent challenge in the longitudinal analysis of electronic health record data. In simulations, the proposed method demonstrates robust performance across a range of scenarios. In addition, we illustrate the method with an application to estimate optimal dynamic treatment regimes for patients with advanced non-small cell lung cancer using a real-world, nationwide electronic health record database from Flatiron Health.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"794-806"},"PeriodicalIF":2.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140159034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01 | DOI: 10.1177/09622802241247736
Xueqi Wang, Xinyuan Chen, Keith S Goldfeld, Monica Taljaard, Fan Li
The cluster randomized crossover design has been proposed to improve efficiency over the traditional parallel-arm cluster randomized design. While statistical methods have been developed for designing cluster randomized crossover trials, they have exclusively focused on testing the overall average treatment effect, with little attention to differential treatment effects across subpopulations. Recently, interest has grown in understanding whether treatment effects may vary across pre-specified patient subpopulations, such as those defined by demographic or clinical characteristics. In this article, we consider the two-treatment two-period cluster randomized crossover design under either a cross-sectional or closed-cohort sampling scheme, where it is of interest to detect the heterogeneity of treatment effect via an interaction test. Assuming a patterned correlation structure for both the covariate and the outcome, we derive new sample size formulas for testing the heterogeneity of treatment effect with continuous outcomes based on linear mixed models. Our formulas also address unequal cluster sizes and therefore allow us to analytically assess the impact of unequal cluster sizes on the power of the interaction test in cluster randomized crossover designs. We conduct simulations to confirm the accuracy of the proposed methods, and illustrate their application in two real cluster randomized crossover trials.
{"title":"Sample size and power calculation for testing treatment effect heterogeneity in cluster randomized crossover designs","authors":"Xueqi Wang, Xinyuan Chen, Keith S Goldfeld, Monica Taljaard, Fan Li","doi":"10.1177/09622802241247736","DOIUrl":"https://doi.org/10.1177/09622802241247736","url":null,"abstract":"The cluster randomized crossover design has been proposed to improve efficiency over the traditional parallel-arm cluster randomized design. While statistical methods have been developed for designing cluster randomized crossover trials, they have exclusively focused on testing the overall average treatment effect, with little attention to differential treatment effects across subpopulations. Recently, interest has grown in understanding whether treatment effects may vary across pre-specified patient subpopulations, such as those defined by demographic or clinical characteristics. In this article, we consider the two-treatment two-period cluster randomized crossover design under either a cross-sectional or closed-cohort sampling scheme, where it is of interest to detect the heterogeneity of treatment effect via an interaction test. Assuming a patterned correlation structure for both the covariate and the outcome, we derive new sample size formulas for testing the heterogeneity of treatment effect with continuous outcomes based on linear mixed models. Our formulas also address unequal cluster sizes and therefore allow us to analytically assess the impact of unequal cluster sizes on the power of the interaction test in cluster randomized crossover designs. We conduct simulations to confirm the accuracy of the proposed methods, and illustrate their application in two real cluster randomized crossover trials.","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":"31 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140834015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01 | Epub Date: 2024-03-19 | DOI: 10.1177/09622802241239003
Dimitris Rizopoulos, Jeremy M G Taylor, Grigorios Papageorgiou, Todd M Morgan
Prostate cancer patients who undergo prostatectomy are closely monitored for recurrence and metastasis using routine prostate-specific antigen measurements. When prostate-specific antigen levels rise, salvage therapies are recommended in order to decrease the risk of metastasis. However, given the side effects of these therapies, and to avoid over-treatment, it is important to understand for which patients, and at what point, these salvage therapies should be initiated. In this work, we use the University of Michigan Prostatectomy Registry Data to tackle this question. Due to the observational nature of these data, we face the challenge that prostate-specific antigen is simultaneously a time-varying confounder and an intermediate variable for salvage therapy. We define different causal salvage therapy effects, conditional on different specifications of the longitudinal prostate-specific antigen history. We then illustrate how these effects can be estimated using the framework of joint models for longitudinal and time-to-event data. All proposed methodology is implemented in the freely available R package JMbayes2.
{"title":"Using joint models for longitudinal and time-to-event data to investigate the causal effect of salvage therapy after prostatectomy.","authors":"Dimitris Rizopoulos, Jeremy Mg Taylor, Grigorios Papageorgiou, Todd M Morgan","doi":"10.1177/09622802241239003","DOIUrl":"10.1177/09622802241239003","url":null,"abstract":"<p><p>Prostate cancer patients who undergo prostatectomy are closely monitored for recurrence and metastasis using routine prostate-specific antigen measurements. When prostate-specific antigen levels rise, salvage therapies are recommended in order to decrease the risk of metastasis. However, due to the side effects of these therapies and to avoid over-treatment, it is important to understand which patients and when to initiate these salvage therapies. In this work, we use the University of Michigan Prostatectomy Registry Data to tackle this question. Due to the observational nature of this data, we face the challenge that prostate-specific antigen is simultaneously a time-varying confounder and an intermediate variable for salvage therapy. We define different causal salvage therapy effects defined conditionally on different specifications of the longitudinal prostate-specific antigen history. We then illustrate how these effects can be estimated using the framework of joint models for longitudinal and time-to-event data. All proposed methodology is implemented in the freely-available R package <b>JMbayes2</b>.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"894-908"},"PeriodicalIF":1.6,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11041089/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140159037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-01 | Epub Date: 2024-03-19 | DOI: 10.1177/09622802241231496
Yipeng Wang, Natalie DelRocco, Lifeng Lin
Assessing heterogeneity between studies is a critical step in determining whether studies can be combined and whether the synthesized results are reliable. The I² statistic has been a popular measure for quantifying heterogeneity, but its usage has been challenged from various perspectives in recent years. In particular, it should not be considered an absolute measure of heterogeneity, and it could be subject to large uncertainties. As such, when using I² to interpret the extent of heterogeneity, it is essential to account for its interval estimate. Various point and interval estimators exist for I². This article summarizes these estimators. In addition, we performed a simulation study under different scenarios to investigate preferable point and interval estimates of I². We found that the Sidik-Jonkman method gave precise point estimates for I² when the between-study variance was large, while in other cases, the DerSimonian-Laird method was suggested to estimate I². When the effect measure was the mean difference or the standardized mean difference, the Q-profile method, the Biggerstaff-Jackson method, or the Jackson method was suggested to calculate the interval estimate for I² due to reasonable interval length and more reliable coverage probabilities than various alternatives. For the same reason, the Kulinskaya-Dollinger method was recommended to calculate the interval estimate for I² when the effect measure was the log odds ratio.
{"title":"<ArticleTitle xmlns:ns0=\"http://www.w3.org/1998/Math/MathML\">Comparisons of various estimates of the <ns0:math><ns0:mrow><ns0:msup><ns0:mi>I</ns0:mi><ns0:mn>2</ns0:mn></ns0:msup></ns0:mrow></ns0:math> statistic for quantifying between-study heterogeneity in meta-analysis.","authors":"Yipeng Wang, Natalie DelRocco, Lifeng Lin","doi":"10.1177/09622802241231496","DOIUrl":"10.1177/09622802241231496","url":null,"abstract":"<p><p>Assessing heterogeneity between studies is a critical step in determining whether studies can be combined and whether the synthesized results are reliable. The <math><mrow><msup><mi>I</mi><mn>2</mn></msup></mrow></math> statistic has been a popular measure for quantifying heterogeneity, but its usage has been challenged from various perspectives in recent years. In particular, it should not be considered an absolute measure of heterogeneity, and it could be subject to large uncertainties. As such, when using <math><mrow><msup><mi>I</mi><mn>2</mn></msup></mrow></math> to interpret the extent of heterogeneity, it is essential to account for its interval estimate. Various point and interval estimators exist for <math><mrow><msup><mi>I</mi><mn>2</mn></msup></mrow></math>. This article summarizes these estimators. In addition, we performed a simulation study under different scenarios to investigate preferable point and interval estimates of <math><mrow><msup><mi>I</mi><mn>2</mn></msup></mrow></math>. We found that the Sidik-Jonkman method gave precise point estimates for <math><mrow><msup><mi>I</mi><mn>2</mn></msup></mrow></math> when the between-study variance was large, while in other cases, the DerSimonian-Laird method was suggested to estimate <math><mrow><msup><mi>I</mi><mn>2</mn></msup></mrow></math>. When the effect measure was the mean difference or the standardized mean difference, the <math><mi>Q</mi></math>-profile method, the Biggerstaff-Jackson method, or the Jackson method was suggested to calculate the interval estimate for <math><mrow><msup><mi>I</mi><mn>2</mn></msup></mrow></math> due to reasonable interval length and more reliable coverage probabilities than various alternatives. For the same reason, the Kulinskaya-Dollinger method was recommended to calculate the interval estimate for <math><mrow><msup><mi>I</mi><mn>2</mn></msup></mrow></math> when the effect measure was the log odds ratio.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"745-764"},"PeriodicalIF":2.3,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140159017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}