Cause-specific hazard Cox models with partly interval censoring - Penalized likelihood estimation using Gaussian quadrature
Pub Date: 2024-09-01 | Epub Date: 2024-07-25 | DOI: 10.1177/09622802241262526
Joseph Descallar, Jun Ma, Houying Zhu, Stephane Heritier, Rory Wolfe
The cause-specific hazard Cox model is widely used in analyzing competing risks survival data, and the partial likelihood method is a standard approach when survival times contain only right censoring. In practice, however, interval-censored survival times often arise, and the partial likelihood method is then not directly applicable. Two common remedies in practice are (i) to replace each censoring interval with a single value, such as its midpoint; or (ii) to redefine the event of interest, such as the time to diagnosis instead of the time to recurrence of a disease. However, the midpoint approach can cause biased parameter estimates. In this article, we develop a penalized likelihood approach to fit semi-parametric cause-specific hazard Cox models that is general enough to allow left, right, and interval censoring times. Penalty functions are used to regularize the baseline hazard estimates and to make these estimates less affected by the number and location of the knots used. We provide asymptotic properties for the estimated parameters. A simulation study compares our method with the midpoint partial likelihood approach, and we illustrate the proposed method with an application to the Aspirin in Reducing Events in the Elderly (ASPREE) study.
{"title":"Cause-specific hazard Cox models with partly interval censoring - Penalized likelihood estimation using Gaussian quadrature.","authors":"Joseph Descallar, Jun Ma, Houying Zhu, Stephane Heritier, Rory Wolfe","doi":"10.1177/09622802241262526","DOIUrl":"10.1177/09622802241262526","url":null,"abstract":"<p><p>The cause-specific hazard Cox model is widely used in analyzing competing risks survival data, and the partial likelihood method is a standard approach when survival times contain only right censoring. In practice, however, interval-censored survival times often arise, and this means the partial likelihood method is not directly applicable. Two common remedies in practice are (i) to replace each censoring interval with a single value, such as the middle point; or (ii) to redefine the event of interest, such as the time to diagnosis instead of the time to recurrence of a disease. However, the mid-point approach can cause biased parameter estimates. In this article, we develop a penalized likelihood approach to fit semi-parametric cause-specific hazard Cox models, and this method is general enough to allow left, right, and interval censoring times. Penalty functions are used to regularize the baseline hazard estimates and also to make these estimates less affected by the number and location of knots used for the estimates. We will provide asymptotic properties for the estimated parameters. A simulation study is designed to compare our method with the mid-point partial likelihood approach. We apply our method to the Aspirin in Reducing Events in the Elderly (ASPREE) study, illustrating an application of our proposed method.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1531-1545"},"PeriodicalIF":1.6,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11523552/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141760989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maintaining the validity of inference from linear mixed models in stepped-wedge cluster randomized trials under misspecified random-effects structures
Pub Date: 2024-09-01 | Epub Date: 2024-05-29 | DOI: 10.1177/09622802241248382
Yongdong Ouyang, Monica Taljaard, Andrew B Forbes, Fan Li
Linear mixed models are commonly used in analyzing stepped-wedge cluster randomized trials. A key consideration in analyzing a stepped-wedge cluster randomized trial is accounting for the potentially complex correlation structure, which can be achieved by specifying random effects. The simplest random-effects structure is the random intercept, but more complex structures such as random cluster-by-period, discrete-time decay and, more recently, the random intervention structure have been proposed. Specifying appropriate random effects in practice can be challenging: more complex correlation structures may be reasonable assumptions but are vulnerable to computational challenges. To circumvent these challenges, robust variance estimators may be applied to linear mixed models to provide consistent estimators of the standard errors of fixed-effect parameters in the presence of random-effects misspecification. However, there has been no empirical investigation of robust variance estimators for stepped-wedge cluster randomized trials. In this article, we review six robust variance estimators (both standard and small-sample bias-corrected) that are available for linear mixed models in R, and then describe a comprehensive simulation study examining the performance of these robust variance estimators for stepped-wedge cluster randomized trials with a continuous outcome under different data generators. For each data generator, we investigate whether the use of a robust variance estimator with either the random intercept model or the random cluster-by-period model is sufficient to provide valid statistical inference for fixed-effect parameters when these working models are subject to random-effects misspecification. Our results indicate that the random intercept and random cluster-by-period models with robust variance estimators performed adequately. The CR3 (approximate jackknife) robust variance estimator, coupled with a degrees-of-freedom correction equal to the number of clusters minus two, consistently gave the best coverage results, but could be slightly conservative when the number of clusters was below 16. We summarize the implications of our results for the linear mixed model analysis of stepped-wedge cluster randomized trials and offer practical recommendations on the choice of the analytic model.
{"title":"Maintaining the validity of inference from linear mixed models in stepped-wedge cluster randomized trials under misspecified random-effects structures.","authors":"Yongdong Ouyang, Monica Taljaard, Andrew B Forbes, Fan Li","doi":"10.1177/09622802241248382","DOIUrl":"10.1177/09622802241248382","url":null,"abstract":"<p><p>Linear mixed models are commonly used in analyzing stepped-wedge cluster randomized trials. A key consideration for analyzing a stepped-wedge cluster randomized trial is accounting for the potentially complex correlation structure, which can be achieved by specifying random-effects. The simplest random effects structure is random intercept but more complex structures such as random cluster-by-period, discrete-time decay, and more recently, the random intervention structure, have been proposed. Specifying appropriate random effects in practice can be challenging: assuming more complex correlation structures may be reasonable but they are vulnerable to computational challenges. To circumvent these challenges, robust variance estimators may be applied to linear mixed models to provide consistent estimators of standard errors of fixed effect parameters in the presence of random-effects misspecification. However, there has been no empirical investigation of robust variance estimators for stepped-wedge cluster randomized trials. In this article, we review six robust variance estimators (both standard and small-sample bias-corrected robust variance estimators) that are available for linear mixed models in R, and then describe a comprehensive simulation study to examine the performance of these robust variance estimators for stepped-wedge cluster randomized trials with a continuous outcome under different data generators. For each data generator, we investigate whether the use of a robust variance estimator with either the random intercept model or the random cluster-by-period model is sufficient to provide valid statistical inference for fixed effect parameters, when these working models are subject to random-effect misspecification. Our results indicate that the random intercept and random cluster-by-period models with robust variance estimators performed adequately. The CR3 robust variance estimator (approximate jackknife) estimator, coupled with the number of clusters minus two degrees of freedom correction, consistently gave the best coverage results, but could be slightly conservative when the number of clusters was below 16. We summarize the implications of our results for the linear mixed model analysis of stepped-wedge cluster randomized trials and offer some practical recommendations on the choice of the analytic model.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1497-1516"},"PeriodicalIF":1.6,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11499024/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141162723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating individualized treatment rules by optimizing the adjusted probability of a longer survival
Pub Date: 2024-09-01 | Epub Date: 2024-07-25 | DOI: 10.1177/09622802241262525
Qijia He, Shixiao Zhang, Michael L LeBlanc, Ying-Qi Zhao
Individualized treatment rules inform tailored treatment decisions based on a patient's information, with the goal of optimizing clinical benefit for the population. When the clinical outcome of interest is survival time, most current approaches aim to maximize the expected survival time. We propose a new criterion for constructing individualized treatment rules that optimize clinical benefit with survival outcomes, termed the adjusted probability of a longer survival. This objective captures the likelihood of living longer on treatment than on the alternative, which offers a straightforward interpretation to communicate with clinicians and patients. We view it as an alternative to the survival analysis standard of the hazard ratio and the increasingly used restricted mean survival time. We develop a new method to construct the optimal individualized treatment rule by maximizing a nonparametric estimator of the adjusted probability of a longer survival for a decision rule. Simulation studies demonstrate the reliability of the proposed method across a range of scenarios. We further perform a data analysis using data collected from a randomized Phase III clinical trial (SWOG S0819).
{"title":"Estimating individualized treatment rules by optimizing the adjusted probability of a longer survival.","authors":"Qijia He, Shixiao Zhang, Michael L LeBlanc, Ying-Qi Zhao","doi":"10.1177/09622802241262525","DOIUrl":"10.1177/09622802241262525","url":null,"abstract":"<p><p>Individualized treatment rules inform tailored treatment decisions based on the patient's information, where the goal is to optimize clinical benefit for the population. When the clinical outcome of interest is survival time, most of current approaches typically aim to maximize the expected time of survival. We propose a new criterion for constructing Individualized treatment rules that optimize the clinical benefit with survival outcomes, termed as the adjusted probability of a longer survival. This objective captures the likelihood of living longer with being on treatment, compared to the alternative, which provides an alternative and often straightforward interpretation to communicate with clinicians and patients. We view it as an alternative to the survival analysis standard of the hazard ratio and the increasingly used restricted mean survival time. We develop a new method to construct the optimal Individualized treatment rule by maximizing a nonparametric estimator of the adjusted probability of a longer survival for a decision rule. Simulation studies demonstrate the reliability of the proposed method across a range of different scenarios. We further perform data analysis using data collected from a randomized Phase III clinical trial (SWOG S0819).</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1517-1530"},"PeriodicalIF":1.6,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11671293/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141760990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A useful parametric specification to model epidemiological data: Revival of the Richards' curve
Pub Date: 2024-08-01 | DOI: 10.1177/09622802241262522
Marco Mingione, Pierfrancesco Alaimo Di Loro, Antonello Maruotti
A useful parametric specification for the expected value of an epidemiological process is revived, and its statistical and empirical efficacy are explored. The Richards' curve is flexible enough to adapt to several growth phenomena, including recent epidemics and outbreaks. Here, two different estimation methods are described. The first, based on likelihood maximisation, is particularly useful when the outbreak is still ongoing and the main goal is to obtain sufficiently accurate estimates in negligible computational run-time. The second is fully Bayesian and allows for more ambitious modelling attempts such as the inclusion of spatial and temporal dependence, but it requires more data and computational resources. Regardless of the estimation approach, the Richards' specification properly characterises the main features of any growth process (e.g. growth rate, peak phase, etc.), leading to a reasonable fit and providing good short- to medium-term predictions. To demonstrate this flexibility, we show different applications using publicly available data on recent epidemics, where the data collection processes and transmission patterns are extremely heterogeneous, as well as benchmark datasets widely used in the literature for illustration.
{"title":"A useful parametric specification to model epidemiological data: Revival of the Richards' curve.","authors":"Marco Mingione, Pierfrancesco Alaimo Di Loro, Antonello Maruotti","doi":"10.1177/09622802241262522","DOIUrl":"https://doi.org/10.1177/09622802241262522","url":null,"abstract":"<p><p>A useful parametric specification for the expected value of an epidemiological process is revived, and its statistical and empirical efficacy are explored. The Richards' curve is flexible enough to adapt to several growth phenomena, including recent epidemics and outbreaks. Here, two different estimation methods are described. The first, based on likelihood maximisation, is particularly useful when the outbreak is still ongoing and the main goal is to obtain sufficiently accurate estimates in negligible computational run-time. The second is fully Bayesian and allows for more ambitious modelling attempts such as the inclusion of spatial and temporal dependence, but it requires more data and computational resources. Regardless of the estimation approach, the Richards' specification properly characterises the main features of any growth process (e.g. growth rate, peak phase etc.), leading to a reasonable fit and providing good short- to medium-term predictions. To demonstrate such flexibility, we show different applications using publicly available data on recent epidemics where the data collection processes and transmission patterns are extremely heterogeneous, as well as benchmark datasets widely used in the literature as illustrative.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":"33 8","pages":"1473-1494"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring the individualization potential of treatment individualization rules: Application to rules built with a new parametric interaction model for parallel-group clinical trials
Pub Date: 2024-08-01 | Epub Date: 2024-08-06 | DOI: 10.1177/09622802241259172
Francisco J Diaz
For personalized medicine, we propose a general method of evaluating the potential performance of an individualized treatment rule in future clinical applications with new patients. We focus on rules that choose the most beneficial treatment for the patient out of two active (nonplacebo) treatments, which the clinician will prescribe regularly to the patient after the decision. We develop a measure of the individualization potential (IP) of a rule. The IP compares the expected effectiveness of the rule in a future clinical individualization setting versus the effectiveness of not trying individualization. We illustrate our evaluation method by explaining how to measure the IP of a useful type of individualized rules calculated through a new parametric interaction model of data from parallel-group clinical trials with continuous responses. Our interaction model implies a structural equation model we use to estimate the rule and its IP. We examine the IP both theoretically and with simulations when the estimated individualized rule is put into practice in new patients. Our individualization approach was superior to outcome-weighted machine learning according to simulations. We also show connections with crossover and N-of-1 trials. As a real data application, we estimate a rule for the individualization of treatments for diabetic macular edema and evaluate its IP.
{"title":"Measuring the individualization potential of treatment individualization rules: Application to rules built with a new parametric interaction model for parallel-group clinical trials.","authors":"Francisco J Diaz","doi":"10.1177/09622802241259172","DOIUrl":"10.1177/09622802241259172","url":null,"abstract":"<p><p>For personalized medicine, we propose a general method of evaluating the potential performance of an individualized treatment rule in future clinical applications with new patients. We focus on rules that choose the most beneficial treatment for the patient out of two active (nonplacebo) treatments, which the clinician will prescribe regularly to the patient after the decision. We develop a measure of the individualization potential (IP) of a rule. The IP compares the expected effectiveness of the rule in a future clinical individualization setting versus the effectiveness of not trying individualization. We illustrate our evaluation method by explaining how to measure the IP of a useful type of individualized rules calculated through a new parametric interaction model of data from parallel-group clinical trials with continuous responses. Our interaction model implies a structural equation model we use to estimate the rule and its IP. We examine the IP both theoretically and with simulations when the estimated individualized rule is put into practice in new patients. Our individualization approach was superior to outcome-weighted machine learning according to simulations. We also show connections with crossover and N-of-1 trials. As a real data application, we estimate a rule for the individualization of treatments for diabetic macular edema and evaluate its IP.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1355-1375"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal designs using generalized estimating equations in cluster randomized crossover and stepped wedge trials
Pub Date: 2024-08-01 | Epub Date: 2024-05-30 | DOI: 10.1177/09622802241247717
Jingxia Liu, Fan Li
Cluster randomized crossover and stepped wedge cluster randomized trials are two types of longitudinal cluster randomized trials that leverage both within- and between-cluster comparisons to estimate the treatment effect and are increasingly used in healthcare delivery and implementation science research. While variance expressions of the estimated treatment effect have previously been developed from the method of generalized estimating equations for analyzing cluster randomized crossover trials and stepped wedge cluster randomized trials, little guidance has been provided on optimal designs to ensure maximum efficiency. Here, an optimal design refers to the combination of cluster-period size and number of clusters that yields the smallest variance of the treatment effect estimator, that is, maximum efficiency, under a fixed total budget. In this work, we develop optimal designs for multiple-period cluster randomized crossover trials and stepped wedge cluster randomized trials with continuous outcomes, including both closed-cohort and repeated cross-sectional sampling schemes. Local optimal design algorithms are proposed when the correlation parameters in the working correlation structure are known. MaxiMin optimal design algorithms are proposed when the exact values are unavailable but investigators can specify a range of correlation values. Closed-form formulae for the local optimal design and the MaxiMin optimal design are derived for multiple-period cluster randomized crossover trials when the cluster-period size and number of clusters are allowed to be decimal-valued. The decimal estimates from these closed-form formulae can then be used to investigate the performance of the integer estimates from the local optimal design and MaxiMin optimal design algorithms. One unique contribution of this work, compared with previous optimal design research, is that we adopt constrained optimization techniques to obtain integer estimates under the MaxiMin optimal design. To assist practical implementation, we also develop four SAS macros to find local optimal designs and MaxiMin optimal designs.
{"title":"Optimal designs using generalized estimating equations in cluster randomized crossover and stepped wedge trials.","authors":"Jingxia Liu, Fan Li","doi":"10.1177/09622802241247717","DOIUrl":"10.1177/09622802241247717","url":null,"abstract":"<p><p>Cluster randomized crossover and stepped wedge cluster randomized trials are two types of longitudinal cluster randomized trials that leverage both the within- and between-cluster comparisons to estimate the treatment effect and are increasingly used in healthcare delivery and implementation science research. While the variance expressions of estimated treatment effect have been previously developed from the method of generalized estimating equations for analyzing cluster randomized crossover trials and stepped wedge cluster randomized trials, little guidance has been provided for optimal designs to ensure maximum efficiency. Here, an optimal design refers to the combination of optimal cluster-period size and optimal number of clusters that provide the smallest variance of the treatment effect estimator or maximum efficiency under a fixed total budget. In this work, we develop optimal designs for multiple-period cluster randomized crossover trials and stepped wedge cluster randomized trials with continuous outcomes, including both closed-cohort and repeated cross-sectional sampling schemes. Local optimal design algorithms are proposed when the correlation parameters in the working correlation structure are known. MaxiMin optimal design algorithms are proposed when the exact values are unavailable, but investigators may specify a range of correlation values. The closed-form formulae of local optimal design and MaxiMin optimal design are derived for multiple-period cluster randomized crossover trials, where the cluster-period size and number of clusters are decimal. The decimal estimates from closed-form formulae can then be used to investigate the performances of integer estimates from local optimal design and MaxiMin optimal design algorithms. One unique contribution from this work, compared to the previous optimal design research, is that we adopt constrained optimization techniques to obtain integer estimates under the MaxiMin optimal design. To assist practical implementation, we also develop four SAS macros to find local optimal designs and MaxiMin optimal designs.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1299-1330"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141176266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Erratum to "A dose-effect network meta-analysis model with application in antidepressants using restricted cubic splines"
Pub Date: 2024-08-01 | Epub Date: 2024-07-23 | DOI: 10.1177/09622802241254569
{"title":"Erratum to \"A dose-effect network meta-analysis model with application in antidepressants using restricted cubic splines\".","authors":"","doi":"10.1177/09622802241254569","DOIUrl":"10.1177/09622802241254569","url":null,"abstract":"","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"NP1"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11532931/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141752830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating prognostic biomarkers for survival outcomes subject to informative censoring
Pub Date: 2024-08-01 | Epub Date: 2024-06-06 | DOI: 10.1177/09622802241259170
Wei Liu, Danping Liu, Zhiwei Zhang
Prognostic biomarkers for survival outcomes are widely used in clinical research and practice. Such biomarkers are often evaluated using a C-index as well as quantities based on time-dependent receiver operating characteristic curves. Existing methods for their evaluation generally assume that censoring is uninformative in the sense that the censoring time is independent of the failure time with or without conditioning on the biomarker under evaluation. With focus on the C-index and the area under a particular receiver operating characteristic curve, we describe and compare three estimation methods that account for informative censoring based on observed baseline covariates. Two of them are straightforward extensions of existing plug-in and inverse probability weighting methods for uninformative censoring. By appealing to semiparametric theory, we also develop a doubly robust, locally efficient method that is more robust than the plug-in and inverse probability weighting methods and typically more efficient than the inverse probability weighting method. The methods are evaluated and compared in a simulation study, and applied to real data from studies of breast cancer and heart failure.
{"title":"Evaluating prognostic biomarkers for survival outcomes subject to informative censoring.","authors":"Wei Liu, Danping Liu, Zhiwei Zhang","doi":"10.1177/09622802241259170","DOIUrl":"10.1177/09622802241259170","url":null,"abstract":"<p><p>Prognostic biomarkers for survival outcomes are widely used in clinical research and practice. Such biomarkers are often evaluated using a C-index as well as quantities based on time-dependent receiver operating characteristic curves. Existing methods for their evaluation generally assume that censoring is uninformative in the sense that the censoring time is independent of the failure time with or without conditioning on the biomarker under evaluation. With focus on the C-index and the area under a particular receiver operating characteristic curve, we describe and compare three estimation methods that account for informative censoring based on observed baseline covariates. Two of them are straightforward extensions of existing plug-in and inverse probability weighting methods for uninformative censoring. By appealing to semiparametric theory, we also develop a doubly robust, locally efficient method that is more robust than the plug-in and inverse probability weighting methods and typically more efficient than the inverse probability weighting method. The methods are evaluated and compared in a simulation study, and applied to real data from studies of breast cancer and heart failure.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1342-1354"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141262800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A robust regression model for bounded count health data
Pub Date: 2024-08-01 | Epub Date: 2024-06-07 | DOI: 10.1177/09622802241259178
Cristian L Bayes, Jorge Luis Bazán, Luis Valdivieso
Bounded count response data arise naturally in health applications. In general, the well-known beta-binomial regression model forms the basis for analyzing such data, especially when the data are overdispersed. Little attention, however, has been given in the literature to the possibility of extreme observations in overdispersed data. We propose in this work an extension of the beta-binomial regression model, named the beta-2-binomial regression model, which provides a rather flexible approach for fitting regression models to a wide spectrum of bounded count response data sets in the presence of overdispersion, outliers, or an excess of extreme observations. This distribution possesses more skewness and kurtosis than the beta-binomial model but preserves the same mean and variance form as the beta-binomial model. Additional properties of the beta-2-binomial distribution are derived, including its behavior at the limits of its parametric space. A penalized maximum likelihood approach is considered to estimate the parameters of this model, and a residual analysis is included to assess departures from model assumptions as well as to detect outlying observations. Simulation studies considering robustness to outliers are presented, confirming that the beta-2-binomial regression model is a more robust alternative to the binomial and beta-binomial regression models. We also found that the beta-2-binomial regression model outperformed the binomial and beta-binomial regression models in our applications of predicting liver cancer development in mice and the number of inappropriate days patients spent in hospital.
{"title":"A robust regression model for bounded count health data.","authors":"Cristian L Bayes, Jorge Luis Bazán, Luis Valdivieso","doi":"10.1177/09622802241259178","DOIUrl":"10.1177/09622802241259178","url":null,"abstract":"<p><p>Bounded count response data arise naturally in health applications. In general, the well-known beta-binomial regression model form the basis for analyzing this data, specially when we have overdispersed data. Little attention, however, has been given to the literature on the possibility of having extreme observations and overdispersed data. We propose in this work an extension of the beta-binomial regression model, named the beta-2-binomial regression model, which provides a rather flexible approach for fitting a regression model with a wide spectrum of bounded count response data sets under the presence of overdispersion, outliers, or excess of extreme observations. This distribution possesses more skewness and kurtosis than the beta-binomial model but preserves the same mean and variance form of the beta-binomial model. Additional properties of the beta-2-binomial distribution are derived including its behavior on the limits of its parametric space. A penalized maximum likelihood approach is considered to estimate parameters of this model and a residual analysis is included to assess departures from model assumptions as well as to detect outlier observations. Simulation studies, considering the robustness to outliers, are presented confirming that the beta-2-binomial regression model is a better robust alternative, in comparison with the binomial and beta-binomial regression models. We also found that the beta-2-binomial regression model outperformed the binomial and beta-binomial regression models in our applications of predicting liver cancer development in mice and the number of inappropriate days a patient spent in a hospital.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1392-1411"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141284833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised Liu-type shrinkage estimators for mixture of regression models
Pub Date: 2024-08-01 | Epub Date: 2024-08-28 | DOI: 10.1177/09622802241259175
Elsayed Ghanem, Armin Hatefi, Hamid Usefi
Mixtures of probabilistic regression models are among the most common techniques for incorporating covariate information into the learning of population heterogeneity. Despite their flexibility, unreliable estimates can occur due to multicollinearity among covariates. In this paper, we develop Liu-type shrinkage methods through an unsupervised learning approach to estimate the model coefficients in the presence of multicollinearity. We evaluate the performance of the proposed methods via classification and stochastic versions of the expectation-maximization algorithm. Numerical simulations show that the proposed methods outperform their ridge and maximum likelihood counterparts. Finally, we apply our methods to analyze bone mineral data of women aged 50 and older.
{"title":"Unsupervised Liu-type shrinkage estimators for mixture of regression models.","authors":"Elsayed Ghanem, Armin Hatefi, Hamid Usefi","doi":"10.1177/09622802241259175","DOIUrl":"10.1177/09622802241259175","url":null,"abstract":"<p><p>The mixture of probabilistic regression models is one of the most common techniques to incorporate the information of covariates into learning of the population heterogeneity. Despite its flexibility, unreliable estimates can occur due to multicollinearity among covariates. In this paper, we develop Liu-type shrinkage methods through an unsupervised learning approach to estimate the model coefficients in the presence of multicollinearity. We evaluate the performance of our proposed methods via classification and stochastic versions of the expectation-maximization algorithm. We show using numerical simulations that the proposed methods outperform their Ridge and maximum likelihood counterparts. Finally, we apply our methods to analyze the bone mineral data of women aged 50 and older.</p>","PeriodicalId":22038,"journal":{"name":"Statistical Methods in Medical Research","volume":" ","pages":"1376-1391"},"PeriodicalIF":1.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457464/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142081588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}