Pub Date: 2024-11-01 | Epub Date: 2024-10-30 | DOI: 10.1177/09622802241281960
Amir Aamodt Kazemi, Inge Christoffer Olsen
Current instrumental variable methodology focuses mainly on estimating causal effects for a dichotomous or an ordinal treatment variable. Situations with more than two unordered treatments are less explored. The challenge is that the assumptions needed to derive point estimators become increasingly strong as the number of relevant treatment alternatives grows. In this article, we aim to derive causal point estimators for head-to-head comparisons of the effects of multiple relevant treatments or interventions. We achieve this with a set of plausible and well-defined rationality assumptions while considering only ordinal instruments. In a simulation study, we demonstrate that our methodology provides asymptotically unbiased estimators in the presence of unobserved confounding. We then apply the method to compare the effectiveness of five anti-inflammatory drugs in the treatment of rheumatoid arthritis, using a clinical data set from an observational study in Norway, where price is the primary determinant of the preferred drug and can therefore be considered an instrument. The developed methodology provides an important addition to the toolbox for causal inference when comparing more than two interventions influenced by an instrumental variable.
Title: Instrumental variable analysis with categorical treatment. Statistical Methods in Medical Research, pp. 2043-2061. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11577691/pdf/
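As background for the instrumental-variable setting this abstract describes, the classical binary-instrument (Wald) estimator can be sketched as below; the simulated data, variable names, and effect sizes are illustrative assumptions, and the paper's own estimator for multiple unordered treatments is a generalization not reproduced here.

```python
import numpy as np

def wald_iv_estimate(y, t, z):
    """Classical Wald (binary-instrument) IV estimator: the ratio of the
    instrument's effect on the outcome to its effect on treatment uptake."""
    y, t, z = map(np.asarray, (y, t, z))
    num = y[z == 1].mean() - y[z == 0].mean()
    den = t[z == 1].mean() - t[z == 0].mean()
    return num / den

# Illustrative simulation with an unobserved confounder u (all values assumed):
rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                  # unobserved confounder
z = rng.integers(0, 2, size=n)          # instrument (e.g. a price regime)
# treatment uptake depends on both the instrument and the confounder
t = ((0.8 * z + 0.5 * u + rng.normal(size=n)) > 0.4).astype(int)
y = 2.0 * t + 1.5 * u + rng.normal(size=n)   # true treatment effect = 2

naive = y[t == 1].mean() - y[t == 0].mean()  # biased upward by u
iv = wald_iv_estimate(y, t, z)               # approximately unbiased
```

The naive mean difference absorbs the confounding through u, while the instrument-based ratio recovers the true effect, which is the basic phenomenon the paper's multi-treatment estimators extend.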
Pub Date: 2024-11-01 | Epub Date: 2024-10-07 | DOI: 10.1177/09622802241281027
Jin Jin, Liuquan Sun, Huang-Tz Ou, Pei-Fang Su
Recurrent event data, which record repeated occurrences of an event for the same subject, are common in observational studies. Furthermore, accounting for possible spatial correlations in health and environmental data is likely to provide more information for risk prediction. This article proposes a comprehensive proportional intensity model with spatial random effects for recurrent event data, fitted using a Bayesian approach. Spatial information is examined for both areal data (where the location is known only up to a geographic unit such as a county) and georeferenced data (where the location is observed exactly). Both a traditional constant baseline intensity function and a flexible piecewise-constant baseline intensity function are considered. To estimate the parameters, Markov chain Monte Carlo methods based on the Metropolis-Hastings and adaptive Metropolis algorithms are applied. To assess model fit, the deviance information criterion and the log pseudo marginal likelihood are used. Overall, simulation studies demonstrate that, when spatial correlations exist, the proposed model performs substantially better than models that ignore spatial effects. Finally, our approach is illustrated using a dataset on the recurrence of cardiovascular diseases that incorporates spatial information.
Title: Analysis of recurrent event data with spatial random effects using a Bayesian approach. Statistical Methods in Medical Research, pp. 1993-2006.
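The estimation machinery mentioned in the abstract can be illustrated with a minimal random-walk Metropolis sampler for a constant recurrent-event intensity; this sketch omits the spatial random effects and the adaptive Metropolis step of the paper, and the data, prior, and tuning choices are assumptions for illustration only.

```python
import numpy as np

def log_post(log_lam, counts, exposure, prior_sd=10.0):
    """Log posterior for a constant recurrent-event intensity:
    counts_i ~ Poisson(lam * exposure_i), weak normal prior on log(lam)."""
    lam = np.exp(log_lam)
    loglik = np.sum(counts * np.log(lam * exposure) - lam * exposure)
    return loglik - 0.5 * (log_lam / prior_sd) ** 2

def metropolis(counts, exposure, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis on log(lam); returns post-burn-in draws of lam."""
    rng = np.random.default_rng(seed)
    cur = 0.0
    cur_lp = log_post(cur, counts, exposure)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = cur + step * rng.normal()
        prop_lp = log_post(prop, counts, exposure)
        if np.log(rng.uniform()) < prop_lp - cur_lp:  # accept/reject
            cur, cur_lp = prop, prop_lp
        draws[i] = cur
    return np.exp(draws[n_iter // 2:])  # discard first half as burn-in

rng = np.random.default_rng(1)
exposure = rng.uniform(1, 3, size=200)   # follow-up time per subject
counts = rng.poisson(1.5 * exposure)     # true intensity = 1.5
post = metropolis(counts, exposure)
```

The paper replaces this scalar sampler with a multivariate one over regression coefficients, baseline pieces, and spatial random effects, and adapts the proposal scale on the fly.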
Pub Date: 2024-11-01 | Epub Date: 2024-11-05 | DOI: 10.1177/09622802241283170
Anthony Sisti, Andrew Zullo, Roee Gutman
Death among subjects is common in observational studies evaluating the causal effects of interventions among geriatric or severely ill patients. High mortality rates complicate the comparison of the prevalence of adverse events between interventions, a problem often referred to as outcome "truncation" by death. A possible solution is to estimate the survivor average causal effect, an estimand that evaluates the effects of interventions among those who would have survived under both treatment assignments. However, because the survivor average causal effect excludes subjects who would have died under one or both arms, it does not consider the relationship between adverse events and death. We propose a Bayesian method that imputes the unobserved mortality and adverse event outcomes for each participant under the intervention they did not receive. Using the imputed outcomes, we define a composite ordinal outcome for each patient, combining the occurrence of death and the adverse event on an increasing scale of severity. This allows the effects of the interventions on death and the adverse event to be compared simultaneously across the entire sample. We implement the procedure to analyze the incidence of heart failure among geriatric patients treated for Type II diabetes with sulfonylureas or dipeptidyl peptidase-4 inhibitors.
Title: A Bayesian method for adverse effects estimation in observational studies with truncation by death. Statistical Methods in Medical Research, pp. 2079-2097.
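The composite ordinal outcome described above can be sketched as a simple mapping from (death, adverse event) to a severity level; the particular three-level ordering below is an illustrative assumption, not the paper's exact scale.

```python
from collections import Counter

def composite_outcome(died, adverse_event):
    """Map (death, adverse event) to one ordinal severity level.
    Ordering is an illustrative assumption:
    0 = alive without the event < 1 = alive with the event < 2 = died."""
    if died:
        return 2
    return 1 if adverse_event else 0

def arm_distribution(records):
    """Distribution of the composite outcome in one arm.
    records: iterable of (died, adverse_event) pairs."""
    return Counter(composite_outcome(d, a) for d, a in records)
```

In the paper, the unobserved pair under the intervention a patient did not receive is multiply imputed before this kind of composite is formed, so the two arms can be compared on the full ordinal distribution.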
Pub Date: 2024-11-01 | Epub Date: 2024-10-04 | DOI: 10.1177/09622802241283165
Sreejata Dutta, Samuel Boyd, Susan E Carlson, Danielle N Christifano, Gene T Lee, Sharla A Smith, Byron J Gajewski
Docosahexaenoic acid (DHA) supplementation has proven beneficial in reducing preterm births. However, nonadherence to prescribed supplementation regimens remains a hurdle that significantly impacts clinical trial outcomes. Conventional methods of adherence estimation, such as pill counts and questionnaires, usually fall short when estimating adherence within a specific dosage group. Thus, we propose a Bayesian finite mixture model to estimate adherence among women with low baseline red blood cell phospholipid DHA levels (<6%) receiving higher DHA doses. In our model, adherence is defined as the proportion of participants classified into one of two distinct components of a normal mixture distribution. Subsequently, based on the estimands from the adherence model, we introduce a novel Bayesian adaptive trial design. Unlike conventional adaptive trials that employ regularly spaced interim analyses, the proposed design adapts to the adherence percentages in the treatment arm through irregularly spaced interims, whose timing is based on effect size estimation informed by the finite mixture model. In summary, this study presents innovative methods for leveraging Bayesian finite mixture models in adherence analysis and the design of adaptive clinical trials.
Title: Enhancing DHA supplementation adherence: A Bayesian approach with finite mixture models and irregular interim schedules in adaptive trial designs. Statistical Methods in Medical Research, pp. 2062-2078. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576245/pdf/
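The adherence estimand (the mixing weight of one component of a two-component normal mixture) can be illustrated with a frequentist EM fit; the paper itself fits the mixture in a Bayesian way, and the component means, separation, and adherent fraction below are simulated assumptions.

```python
import numpy as np
from scipy.stats import norm

def two_component_em(x, n_iter=200):
    """EM for a two-component normal mixture; returns the mixing weight of
    the higher-mean component, read here as the adherent fraction
    (a frequentist stand-in for the paper's Bayesian fit)."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25, 75]).astype(float)  # crude initial means
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = w * norm.pdf(x[:, None], mu, sd)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w[np.argmax(mu)]

rng = np.random.default_rng(2)
# assumed: 70% adherent (clear DHA response) vs 30% non-adherent (little change)
x = np.concatenate([rng.normal(8, 1, 700), rng.normal(2, 1, 300)])
adherence = two_component_em(x)
```

In the trial design, this estimated adherent fraction (and the implied effect size) drives when the irregular interim analyses occur.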
Pub Date: 2024-11-01 | Epub Date: 2024-10-16 | DOI: 10.1177/09622802241287711
Abigail J Burdon, Richard D Baird, Thomas Jaki
Adaptive enrichment allows pre-defined patient subgroups of interest to be investigated throughout the course of a clinical trial. These designs have gained attention in recent years because of their potential to shorten a trial's duration and identify effective therapies tailored to specific patient groups. We describe enrichment trials that consider long-term time-to-event outcomes but also incorporate additional short-term information from routinely collected longitudinal biomarkers. These methods are suitable when the trajectory of the biomarker may differ between subgroups and the long-term endpoint is believed to be influenced by treatment, subgroup and biomarker. The methods are most promising when the majority of patients have biomarker measurements at two or more time points. We implement joint modelling of longitudinal and time-to-event data to define subgroup selection and stopping criteria, and we show that the familywise error rate is protected in the strong sense. To assess the results, we perform a simulation study and find that, compared with a design that ignores the longitudinal biomarker observations, incorporating biomarker information increases power and raises the probability that the (sub)population which truly benefits from the experimental treatment is enriched at the interim analysis. The investigations are motivated by a trial for the treatment of metastatic breast cancer, and the parameter values for the simulation study are informed by real-world data in which repeated circulating tumour DNA measurements and HER2 statuses are available for each patient; these serve as the longitudinal data and subgroup identifiers, respectively.
Title: Adaptive enrichment trial designs using joint modelling of longitudinal and time-to-event data. Statistical Methods in Medical Research, pp. 2098-2114. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11577695/pdf/
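Strong familywise error control of the kind claimed above is typically obtained via closed testing. The sketch below shows the device for two hypotheses (full population and one subgroup) with a Bonferroni intersection test; it is a generic stand-in for illustration, not the paper's exact procedure, and the one-sided z-statistics are assumed inputs.

```python
from scipy.stats import norm

def closed_test(z_full, z_sub, alpha=0.05):
    """Closed testing for two one-sided hypotheses (full population and a
    subgroup). An elementary hypothesis is rejected only if every
    intersection hypothesis containing it is also rejected, which controls
    the familywise error rate in the strong sense. The intersection is
    tested here with a Bonferroni adjustment."""
    p_full = norm.sf(z_full)                    # one-sided p-values
    p_sub = norm.sf(z_sub)
    p_inter = min(1.0, 2 * min(p_full, p_sub))  # Bonferroni intersection test
    inter_rej = p_inter <= alpha
    return {
        "full": inter_rej and p_full <= alpha,
        "subgroup": inter_rej and p_sub <= alpha,
    }
```

In the enrichment design, the z-statistics would come from the joint model of the longitudinal biomarker and the time-to-event endpoint, and the subgroup selected at the interim determines which elementary hypothesis is carried forward.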
Pub Date: 2024-11-01 | Epub Date: 2024-10-23 | DOI: 10.1177/09622802241282091
Aya A Mitani, Osvaldo Espin-Garcia, Daniel Fernández, Victoria Landsman
Researchers often use outcome-dependent sampling to study the exposure-outcome association. The case-control study is a widely used example of outcome-dependent sampling when the outcome is binary. When the outcome is ordinal, standard ordinal regression models generally produce biased coefficient estimates when the sampling fractions depend on the values of the outcome variable. To address this problem, we studied the performance of survey-weighted ordinal regression models with weights inversely proportional to the sampling fractions. Through an extensive simulation study, we compared the performance of four ordinal regression models (SM: stereotype model; AC: adjacent-category logit model; CR: continuation-ratio logit model; and CM: cumulative logit model), with and without sampling weights, under outcome-dependent sampling. When weights were used, all four models produced estimates with negligible bias for all regression coefficients. Without weights, only the stereotype and adjacent-category logit models produced estimates with negligible to low bias for all coefficients except the intercepts, across all scenarios. In one scenario, the unweighted continuation-ratio logit model also produced estimates with low bias. The weighted stereotype and adjacent-category logit models also produced estimates with lower relative root mean square errors than the unweighted models in most scenarios. In some scenarios with unevenly distributed categories, the weighted continuation-ratio and cumulative logit models produced estimates with lower relative root mean square errors than their unweighted counterparts. We illustrate the methods using a study of knee osteoarthritis.
Title: Applying survey weights to ordinal regression models for improved inference in outcome-dependent samples with ordinal outcomes. Statistical Methods in Medical Research, pp. 2007-2026. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11577697/pdf/
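A survey-weighted cumulative logit (CM) fit with weights inversely proportional to the sampling fractions can be sketched directly as a weighted maximum-likelihood problem; the data-generating process, sampling fractions, and parameterization below are illustrative assumptions rather than the paper's simulation settings.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_weighted_cumlogit(y, x, w, n_cat):
    """Weighted cumulative-logit (proportional odds) fit by maximum
    likelihood; w are inverse-probability-of-sampling weights.
    Returns the slope estimate for the single covariate x."""
    def cutpoints(theta):
        # first cutpoint free, later ones forced increasing via exp increments
        return np.concatenate([[theta[0]],
                               theta[0] + np.cumsum(np.exp(theta[1:n_cat - 1]))])
    def nll(theta):
        a, b = cutpoints(theta), theta[-1]
        cum = expit(a[None, :] - b * x[:, None])            # P(Y <= k | x)
        cum = np.hstack([np.zeros((len(x), 1)), cum, np.ones((len(x), 1))])
        p = cum[np.arange(len(x)), y + 1] - cum[np.arange(len(x)), y]
        return -(w * np.log(np.clip(p, 1e-12, None))).sum()
    theta0 = np.zeros(n_cat)   # (n_cat - 1) cutpoint parameters + 1 slope
    return minimize(nll, theta0, method="BFGS").x[-1]

# Outcome-dependent sample: simulate, oversample the high categories,
# then reweight by the inverse sampling fractions.
rng = np.random.default_rng(3)
n = 20_000
x = rng.normal(size=n)
true_b = 1.0
cuts = np.array([-1.0, 0.0, 1.0])
u = rng.logistic(size=n)
y = (true_b * x + u > cuts[:, None]).sum(axis=0)    # categories 0..3
frac = np.array([0.1, 0.1, 1.0, 1.0])               # sampling fractions by category
keep = rng.uniform(size=n) < frac[y]
ys, xs = y[keep], x[keep]
w = 1.0 / frac[ys]
b_hat = fit_weighted_cumlogit(ys, xs, w, n_cat=4)
```

Without the weights, the same fit on the outcome-dependent sample would be biased because the likelihood no longer matches the sampling mechanism.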
Pub Date: 2024-11-01 | Epub Date: 2024-10-09 | DOI: 10.1177/09622802241275401
Sami Tabib, Denis Larocque
We address the problem of estimating conditional average treatment effects with a continuous treatment and a continuous response, using random forests. We explore two general approaches: building trees with a split rule that seeks to increase the heterogeneity of the treatment effect estimates, and building trees to predict Y as a proxy target variable. We conduct a simulation study investigating several aspects, including the presence or absence of confounding and colliding effects and the merits of locally centering the treatment and/or the response. Our study incorporates both existing and new implementations of random forests. The results indicate that locally centering both the response and the treatment is generally the best strategy, and that both general approaches are viable. Additionally, we provide an illustration using data from the 1987 National Medical Expenditure Survey.
Title: Comparison of random forest methods for conditional average treatment effect estimation with a continuous treatment. Statistical Methods in Medical Research, pp. 1952-1966. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11577706/pdf/
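Local centering of the response and treatment can be sketched in an R-learner style: residualize Y and T on the covariates with nuisance forests, then fit a forest to the residual ratio with squared-residual weights. This is one plausible implementation of the idea, not necessarily one of the implementations compared in the paper, and all simulated quantities are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 4000
X = rng.normal(size=(n, 3))
T = X[:, 0] + rng.normal(size=n)        # confounded continuous treatment
tau = 1.0 + (X[:, 1] > 0)               # heterogeneous treatment effect (1 or 2)
Y = tau * T + 2 * X[:, 0] + rng.normal(size=n)

# Local centering: residualize T and Y on X, with nuisance models fitted
# on one half of the data and residuals formed on the other half.
half = n // 2
m_y = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:half], Y[:half])
m_t = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:half], T[:half])
Xe, Te, Ye = X[half:], T[half:], Y[half:]
Yt = Ye - m_y.predict(Xe)               # centered response
Tt = Te - m_t.predict(Xe)               # centered treatment

# R-learner style: regress Yt / Tt on X with weights Tt^2
# (small |Tt| floored to avoid numerical blow-up).
Tt_safe = np.sign(Tt) * np.maximum(np.abs(Tt), 0.05)
pseudo = Yt / Tt_safe
cate = RandomForestRegressor(n_estimators=100, random_state=0)
cate.fit(Xe, pseudo, sample_weight=Tt_safe ** 2)

tau_avg = np.average(pseudo, weights=Tt_safe ** 2)   # overall effect, ~1.5 here
hi = cate.predict(Xe[Xe[:, 1] > 0.5]).mean()         # region where tau = 2
lo = cate.predict(Xe[Xe[:, 1] < -0.5]).mean()        # region where tau = 1
```

Centering removes the confounding through X before the effect-modelling step, which is the strategy the abstract reports as generally best.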
Pub Date: 2024-11-01 | Epub Date: 2024-10-14 | DOI: 10.1177/09622802241288348
Thomas Jaki, Helen Barnett, Andrew Titman, Pavel Mozgunov
In the search for effective treatments for COVID-19, the initial emphasis was on re-purposed treatments. To maximize the chances of finding successful treatments, novel treatments developed specifically for this disease are needed. In this article, we describe and evaluate the statistical design of the AGILE platform, an adaptive randomized seamless Phase I/II trial platform that seeks to quickly establish a safe range of doses and investigate treatments for potential efficacy. The bespoke Bayesian design (i) utilizes randomization during dose-finding, (ii) shares control-arm information across the platform, and (iii) uses a time-to-event endpoint with a formal testing structure and error control for the evaluation of potential efficacy. Both single-agent and combination treatments are considered. We find that the design can reliably identify potential treatments that are safe and efficacious with small to moderate sample sizes.
Title: A seamless Phase I/II platform design with a time-to-event efficacy endpoint for potential COVID-19 therapies. Statistical Methods in Medical Research, pp. 2115-2130. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11577684/pdf/
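Bayesian evaluation of a time-to-event efficacy endpoint can be illustrated with a conjugate exponential-survival sketch: Gamma priors on the hazard rates give Gamma posteriors, and the posterior probability that the treatment hazard is below the control hazard is estimated by Monte Carlo. The priors, event counts, and decision quantity are illustrative assumptions, not the AGILE design's exact testing structure.

```python
import numpy as np

def post_prob_superior(events_t, time_t, events_c, time_c,
                       a0=0.5, b0=0.5, n_draw=200_000, seed=5):
    """Posterior probability that the treatment hazard is lower than the
    control hazard, assuming exponential survival and conjugate
    Gamma(a0, b0) priors, so that lambda | data ~ Gamma(a0 + events,
    rate = b0 + total follow-up time)."""
    rng = np.random.default_rng(seed)
    # numpy's gamma sampler takes a scale parameter, i.e. 1 / rate
    lam_t = rng.gamma(a0 + events_t, 1 / (b0 + time_t), n_draw)
    lam_c = rng.gamma(a0 + events_c, 1 / (b0 + time_c), n_draw)
    return (lam_t < lam_c).mean()

# assumed data: 20 events in 400 person-days on treatment vs 40 in 400 on control
p = post_prob_superior(20, 400, 40, 400)
# equal hazards should give a probability near one half
p_null = post_prob_superior(30, 400, 30, 400)
```

A platform design would additionally calibrate the decision threshold on this posterior probability so that frequentist error rates are controlled, and would share the control-arm events across treatment comparisons.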
Pub Date: 2024-11-01 | Epub Date: 2024-09-25 | DOI: 10.1177/09622802241277764
Nigel Stallard
There is growing interest in clinical trials that investigate how patients may respond differently to an experimental treatment depending on the value of some biomarker measured on a continuous scale, and in particular in identifying a threshold value of the biomarker above which a positive treatment effect can be considered to have been demonstrated. This is statistically challenging when the same data are used both to select the threshold and to test the treatment effect in the subpopulation it defines. This paper describes a hierarchical testing framework that gives familywise type I error rate control in this setting and proposes two specific tests that can be used within the framework. One, a simple test based on the estimated value from a linear regression model with a treatment-by-biomarker interaction, is powerful but can lead to type I error rate inflation if the assumptions of the linear model are not met. The other is more robust to these assumptions but can be slightly less powerful when the assumptions hold.
{"title":"Testing for a treatment effect in a selected subgroup.","authors":"Nigel Stallard","doi":"10.1177/09622802241277764","DOIUrl":"10.1177/09622802241277764","url":null,"abstract":"<p><p>There is growing interest in clinical trials that investigate how patients may respond differently to an experimental treatment on the basis of some biomarker measured on a continuous scale, and in particular in identifying a threshold value for the biomarker above which a positive treatment effect can be considered to have been demonstrated. This can be statistically challenging when the same data are used both to select the threshold and to test the treatment effect in the subpopulation that it defines. This paper describes a hierarchical testing framework to give familywise type I error rate control in this setting and proposes two specific tests that can be used within this framework. One, a simple test based on the estimated value from a linear regression model with a treatment-by-biomarker interaction, is powerful but can lead to type I error rate inflation if the assumptions of the linear model are not met. The other is more robust to these assumptions, but can be slightly less powerful when the assumptions hold.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1967-1978"},"PeriodicalIF":1.6,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11577705/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142354184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-23DOI: 10.1177/09622802241280792
Yanqin Feng, Sijie Wu, Jieli Ding
Clustered current status data frequently occur in many fields of survival studies. Some potential factors related to the hazards of interest cannot be directly observed but are characterized through multiple correlated observable surrogates. In this article, we propose a joint modeling method for regression analysis of clustered current status data with latent variables and potentially informative cluster sizes. The proposed models consist of a factor analysis model to characterize latent variables through their multiple surrogates and an additive hazards frailty model to investigate covariate effects on the failure time and incorporate intra-cluster correlations. We develop an estimation procedure that combines the expectation-maximization algorithm and the weighted estimating equations. The consistency and asymptotic normality of the proposed estimators are established. The finite-sample performance of the proposed method is assessed via a series of simulation studies. This procedure is applied to analyze clustered current status data from the National Toxicology Program on a tumorigenicity study conducted by the United States Department of Health and Human Services.
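A key ingredient of the weighted estimating equations mentioned above is down-weighting observations by their cluster size, so that clusters whose size is related to the outcome do not dominate the estimate. The sketch below illustrates only that inverse-cluster-size weighting idea on a toy mean estimation problem, not the paper's additive hazards frailty model; the data-generating mechanism and all names are illustrative assumptions.

```python
import random

def simulate_clusters(m=300, seed=7):
    """Toy data with informative cluster size: clusters with a higher
    cluster-level frailty are also larger."""
    rng = random.Random(seed)
    clusters = []
    for _ in range(m):
        b = rng.gauss(0, 1)                   # cluster-level frailty, mean 0
        n = 2 + (3 if b > 0 else 0)           # size depends on the frailty
        clusters.append([b + rng.gauss(0, 0.5) for _ in range(n)])
    return clusters

def unweighted_mean(clusters):
    # Naive pooled mean: large (high-frailty) clusters dominate, biasing it up.
    vals = [y for c in clusters for y in c]
    return sum(vals) / len(vals)

def cluster_weighted_mean(clusters):
    # Weight each observation by 1/n_i so every cluster contributes equally,
    # removing the bias from informative cluster sizes.
    return sum(sum(c) / len(c) for c in clusters) / len(clusters)
```

Here the true marginal mean is 0; the naive pooled mean is pulled upward because larger clusters have higher frailties, while the inverse-size-weighted estimator stays near 0. The same weighting device, embedded in estimating equations rather than a sample mean, is what the paper combines with the EM algorithm.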
{"title":"Joint regression analysis of clustered current status data with latent variables.","authors":"Yanqin Feng, Sijie Wu, Jieli Ding","doi":"10.1177/09622802241280792","DOIUrl":"https://doi.org/10.1177/09622802241280792","url":null,"abstract":"<p><p>Clustered current status data frequently occur in many fields of survival studies. Some potential factors related to the hazards of interest cannot be directly observed but are characterized through multiple correlated observable surrogates. In this article, we propose a joint modeling method for regression analysis of clustered current status data with latent variables and potentially informative cluster sizes. The proposed models consist of a factor analysis model to characterize latent variables through their multiple surrogates and an additive hazards frailty model to investigate covariate effects on the failure time and incorporate intra-cluster correlations. We develop an estimation procedure that combines the expectation-maximization algorithm and the weighted estimating equations. The consistency and asymptotic normality of the proposed estimators are established. The finite-sample performance of the proposed method is assessed via a series of simulation studies. This procedure is applied to analyze clustered current status data from the National Toxicology Program on a tumorigenicity study conducted by the United States Department of Health and Human Services.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"9622802241280792"},"PeriodicalIF":1.6,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142508325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}