"An Information Criterion Approach for Assessing Agreement When Comparing Two Methods of Measurement."
Charles Y Tan, Yi Wang, Ogert Fisniku, Katty Wan
In laboratory and clinical sciences, it is a common challenge to compare two methods of measurement on the same set of samples, or on closely related samples, with the goal of assessing agreement. Current regulatory guidance cites the Bland-Altman plot, Deming regression, and the concordance correlation coefficient. Each of these methods relies on the assumptions of a particular model. A new statistical approach based on an information criterion is proposed. This integrated approach evaluates the data against six models simultaneously and provides an objective, data-driven, easily calculated, informative, and clear decision rule. Real data sets from an assay comparison in patient-centric sampling are used to illustrate the new approach.
{"title":"An Information Criterion Approach for Assessing Agreement When Comparing Two Methods of Measurement.","authors":"Charles Y Tan, Yi Wang, Ogert Fisniku, Katty Wan","doi":"10.1002/sim.70315","DOIUrl":"https://doi.org/10.1002/sim.70315","url":null,"abstract":"<p><p>In laboratory and clinical sciences, it is a common enough challenge to compare two methods of measurement on the same set of samples or closely related samples with the goal of assessing agreement. Current regulatory guidance cites the Bland-Altman plot, Deming regression, and concordance correlation coefficient. Each choice relies on its assumption of a particular model. A new statistical approach based on an information criterion is proposed. This integrated approach evaluates the data against six models simultaneously. This information criterion approach provides an objective, data-driven, easily calculated, informative, and clear decision rule. Real data sets from the assay comparison of patient-centric sampling are used to illustrate the new approach.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70315"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145564963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A Sensitivity Analysis Framework Using the Proxy Pattern-Mixture Model for Generalization of Experimental Results."
Rebecca R Andridge, Ruoqi Song, Brady T West
Generalizing findings from randomized controlled trials (RCTs) to a target population is challenging when unmeasured factors influence both trial participation and outcomes. We propose a novel sensitivity analysis framework, the Proxy Pattern-Mixture Model in the context of RCTs (RCT-PPMM), to assess the impact of such unmeasured factors on treatment effect estimates. By leveraging proxy variables derived from baseline covariates, our framework quantifies the potential bias in treatment effect estimates due to nonignorable selection mechanisms. The RCT-PPMM relies on two bounded sensitivity parameters that capture the deviation from sample selection at random and that can be varied systematically to determine how robust trial results are to a departure from ignorable sample selection. The approach only requires summary-level baseline covariate data for the target population (not individual-level data), thus increasing its applicability. Through simulations, we demonstrate that RCT-PPMM can provide information about the potential direction of bias and provide credible intervals that capture the true treatment effect under various nonignorable selection scenarios. We apply the method to a yoga intervention RCT for breast cancer survivors, illustrating how conclusions may shift under plausible selection biases. Our approach offers a practical and interpretable tool for evaluating generalizability, particularly when individual-level data on nonparticipants are unavailable but summary-level covariate data are accessible.
{"title":"A Sensitivity Analysis Framework Using the Proxy Pattern-Mixture Model for Generalization of Experimental Results.","authors":"Rebecca R Andridge, Ruoqi Song, Brady T West","doi":"10.1002/sim.70313","DOIUrl":"10.1002/sim.70313","url":null,"abstract":"<p><p>Generalizing findings from randomized controlled trials (RCTs) to a target population is challenging when unmeasured factors influence both trial participation and outcomes. We propose a novel sensitivity analysis framework to assess the impact of such unmeasured factors on treatment effect estimates called the Proxy Pattern-Mixture Model in the context of RCTs (RCT-PPMM). By leveraging proxy variables derived from baseline covariates, our framework quantifies the potential bias in treatment effect estimates due to nonignorable selection mechanisms. The RCT-PPMM relies on two bounded sensitivity parameters that capture the deviation from sample selection at random and that can be varied systematically to determine how robust trial results are to a departure from ignorable sample selection. The approach only requires summary-level baseline covariate data for the target population (not individual-level data), thus increasing its applicability. Through simulations, we demonstrate that RCT-PPMM can provide information about the potential direction of bias and provide credible intervals that capture the true treatment effect under various nonignorable selection scenarios. We illustrate the use of the method using a yoga intervention RCT for breast cancer survivors, illustrating how conclusions may shift under plausible selection biases. Our approach offers a practical and interpretable tool for evaluating generalizability, particularly when individual-level data on nonparticipants are unavailable, but summary-level covariate data are accessible.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70313"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12593313/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145460036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Identification of Regions of Interest in Neuroimaging Data With Irregular Boundary Based on Semiparametric Transformation Models and Interval-Censored Outcomes."
Chun Yin Lee, Haolun Shi, Da Ma, Mirza Faisal Beg, Jiguo Cao
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that leads to memory loss, cognitive decline, and behavioral changes, without a known cure. Neuroimages are often collected alongside baseline covariates to forecast patient prognosis. Identifying regions of interest within the neuroimages associated with disease progression is thus of significant clinical importance. One major complication in such analyses is that the domain of the brain area in neuroimages is irregular. Another complication is that the time to AD is interval-censored, as the event can only be observed between two revisit time points. To address these complications, we propose to model the imaging predictors via bivariate splines over triangulation and incorporate the imaging predictors in a flexible class of semiparametric transformation models. The regions of interest can then be identified by maximizing a penalized likelihood. A computationally efficient expectation-maximization algorithm is devised for parameter estimation. An extensive simulation study is conducted to evaluate the finite-sample performance of the proposed method. An illustration with the AD Neuroimaging Initiative dataset is provided.
{"title":"Identification of Regions of Interest in Neuroimaging Data With Irregular Boundary Based on Semiparametric Transformation Models and Interval-Censored Outcomes.","authors":"Chun Yin Lee, Haolun Shi, Da Ma, Mirza Faisal Beg, Jiguo Cao","doi":"10.1002/sim.70309","DOIUrl":"10.1002/sim.70309","url":null,"abstract":"<p><p>Alzheimer's disease (AD) is a progressive neurodegenerative disorder that leads to memory loss, cognitive decline, and behavioral changes, without a known cure. Neuroimages are often collected alongside the covariates at baseline to forecast the prognosis of the patients. Identifying regions of interest within the neuroimages associated with disease progression is thus of significant clinical importance. One major complication in such analysis is that the domain of the brain area in neuroimages is irregular. Another complication is that the time to AD is interval-censored, as the event can only be observed between two revisit time points. To address these complications, we propose to model the imaging predictors via bivariate splines over triangulation and incorporate the imaging predictors in a flexible class of semiparametric transformation models. The regions of interest can then be identified by maximizing a penalized likelihood. A computationally efficient expectation-maximization algorithm is devised for parameter estimation. An extensive simulation study is conducted to evaluate the finite-sample performance of the proposed method. An illustration with the AD Neuroimaging Initiative dataset is provided.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70309"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12593322/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145460050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A Proposal for Homoskedastic Modeling With Conditional Auto-Regressive Distributions."
Miguel A Martinez-Beneito, Aritz Adin, Tomás Goicoa, María Dolores Ugarte
Conditional auto-regressive (CAR) distributions are widely used to deal with spatial dependence in the geographic analysis of areal data. These distributions establish multivariate dependence networks by defining conditional relationships between neighboring units, resulting in positive dependence among nearby observations. Despite their practical convenience and well-founded principles, the conditional nature of CAR distributions can lead to undesirable marginal properties, such as inherent heteroskedasticity assumptions that may significantly impact the posterior distributions. In this paper, we highlight the variance issues associated with CAR distributions, particularly focusing on edge effects and issues related to the region's geometry. We show that edge effects may be more pronounced and widespread in disease mapping studies than previously anticipated. To address these heteroskedasticity concerns, we introduce a new conditional autoregressive distribution designed to mitigate these problems. We demonstrate how this distribution effectively diminishes the practical issues identified in earlier models.
{"title":"A Proposal for Homoskedastic Modeling With Conditional Auto-Regressive Distributions.","authors":"Miguel A Martinez-Beneito, Aritz Adin, Tomás Goicoa, María Dolores Ugarte","doi":"10.1002/sim.70295","DOIUrl":"10.1002/sim.70295","url":null,"abstract":"<p><p>Conditional auto-regressive (CAR) distributions are widely used to deal with spatial dependence in the geographic analysis of areal data. These distributions establish multivariate dependence networks by defining conditional relationships between neighboring units, resulting in positive dependence among nearby observations. Despite their practical convenience and well-founded principles, the conditional nature of CAR distributions can lead to undesirable marginal properties, such as inherent heteroskedasticity assumptions that may significantly impact the posterior distributions. In this paper, we highlight the variance issues associated with CAR distributions, particularly focusing on edge effects and issues related to the region's geometry. We show that edge effects may be more pronounced and widespread in disease mapping studies than previously anticipated. To address these heteroskedasticity concerns, we introduce a new conditional autoregressive distribution designed to mitigate these problems. We demonstrate how this distribution effectively diminishes the practical issues identified in earlier models.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70295"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12603674/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145490349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Assessment of Wiener Process Degradation Models With Application to Amyotrophic Lateral Sclerosis Decline."
Matthew R Scott, Oleksandr Sverdlov, Kendra Davis-Plourde, Yorghos Tripodis
Degradation models are commonly used in engineering to analyze the deterioration of systems over time. These models offer an alternative to standard longitudinal methods as they explicitly account for within-subject temporal variability through a latent stochastic process, allowing random fluctuations within a patient to be captured. This work investigates Wiener process-based degradation models with linear drift (i.e., slope) while considering a diffusion term to represent within-subject temporal variability, a random-effects term to capture between-subject variability of the slope, and a time-invariant term to account for measurement error. First-difference estimators that stabilize covariance matrix inversion and remove the influence of time-invariant confounders are presented and validated in clinically relevant settings. Monte Carlo simulations assessing relative error and coverage probability demonstrate that these models yield consistent and stable estimates. Profile likelihood methods, which reduce the dimensionality of the parameter space, also performed reliably, but should be used with caution when follow-up times are highly clustered. As a proof of concept, we applied these models to amyotrophic lateral sclerosis (ALS) data from the Pooled Resource Open-Access ALS Clinical Trials Database (PRO-ACT). We observed steeper slopes of the revised ALS Functional Rating Scale (ALSFRS-R) in individuals who died compared to those who survived, indicating that degradation model estimates are consistent with expected patterns of ALS decline. Our results demonstrate that these stochastic models provide accurate and efficient estimates of longitudinal deterioration. Future work aims to incorporate Wiener process degradation models into a joint modeling framework.
{"title":"Assessment of Wiener Process Degradation Models With Application to Amyotrophic Lateral Sclerosis Decline.","authors":"Matthew R Scott, Oleksandr Sverdlov, Kendra Davis-Plourde, Yorghos Tripodis","doi":"10.1002/sim.70323","DOIUrl":"https://doi.org/10.1002/sim.70323","url":null,"abstract":"<p><p>Degradation models are commonly used in engineering to analyze the deterioration of systems over time. These models offer an alternative to standard longitudinal methods as they explicitly account for within-subject temporal variability through a latent stochastic process, allowing random fluctuations within a patient to be captured. This work investigates Wiener process-based degradation models with linear drift (i.e., slope) while considering a diffusion term to represent within-subject temporal variability, a random-effects term to capture between-subject variability of the slope, and a time-invariant term to account for measurement error. First-difference estimators that stabilize covariance matrix inversion and remove the influence of time-invariant confounders are presented and validated in clinically relevant settings. Monte Carlo simulations assessing relative error and coverage probability demonstrate that these models yield consistent and stable estimates. Profile likelihood methods, which reduce the dimensionality of the parameter space, also performed reliably, but should be used with caution when follow-up times are highly clustered. As a proof of concept, we applied these models to amyotrophic lateral sclerosis (ALS) data from the Pooled Resource Open-Access ALS Clinical Trials Database (PRO-ACT). We observed steeper slopes of the revised ALS Functional Rating Scale (ALSFRS-R) in individuals who died compared to those who survived, indicating that degradation model estimates are consistent with expected patterns of ALS decline. Our results demonstrate that these stochastic models provide accurate and efficient estimates of longitudinal deterioration. Future work aims to incorporate Wiener process degradation models into a joint modeling framework.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70323"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145557496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Regional Consistency Evaluation and Sample Size Calculation Under Two MRCTs."
Kunhai Qing, Xinru Ren, Shuping Jiang, Ping Yang, Menggang Yu, Jin Xu
Multiregional clinical trials (MRCTs) have become common practice for drug development and global registration. The FDA guidance 'Demonstrating Substantial Evidence of Effectiveness for Human Drug and Biological Products Guidance for Industry' (FDA, 2019) requires that substantial evidence of effectiveness of a drug or biologic product be demonstrated for market approval. In situations where two pivotal MRCTs are needed to establish effectiveness of a specific indication for a drug or biological product, a systematic approach to consistency evaluation for regional effects is crucial. In this paper, we first present some existing regional consistency evaluations in a unified way that facilitates regional sample size calculation under the simple fixed effects model. Second, we extend the two commonly used consistency assessment criteria of MHLW (2007) to the context of two MRCTs and provide their evaluation and regional sample size calculation. Numerical studies demonstrate that the proposed regional sample size attains the desired probability of showing regional consistency. A hypothetical example illustrates the application, and we provide an R package for implementation.
{"title":"Regional Consistency Evaluation and Sample Size Calculation Under Two MRCTs.","authors":"Kunhai Qing, Xinru Ren, Shuping Jiang, Ping Yang, Menggang Yu, Jin Xu","doi":"10.1002/sim.70306","DOIUrl":"https://doi.org/10.1002/sim.70306","url":null,"abstract":"<p><p>Multiregional clinical trial (MRCT) has been common practice for drug development and global registration. The FDA guidance 'Demonstrating Substantial Evidence of Effectiveness for Human Drug and Biological Products Guidance for Industry' (FDA, 2019) requires that substantial evidence of effectiveness of a drug/biologic product to be demonstrated for market approval. In the situations where two pivotal MRCTs are needed to establish effectiveness of a specific indication for a drug or biological product, a systematic approach of consistency evaluation for regional effect is crucial. In this paper, we first present some existing regional consistency evaluations in a unified way that facilitates regional sample size calculation under the simple fixed effects model. Second, we extend the two commonly used consistency assessment criteria of MHLW (2007) in the context of two MRCTs and provide their evaluation and regional sample size calculation. Numerical studies demonstrate the proposed regional sample size attains the desired probability of showing regional consistency. A hypothetical example is presented to illustrate the application. We provide an R package for implementation.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70306"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145460091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Causal Inference for First Non-Fatal Events With the Competing Risk of Death: A Principal Stratification Approach."
Jiren Sun, Thomas Cook
In clinical trials involving both mortality and morbidity, an active treatment can influence the observed risk of the first nonfatal event either directly, through its effect on the underlying nonfatal event process, or indirectly, through its effect on the death process, or both. Discerning the direct effect of treatment on the underlying first nonfatal event process holds clinical interest. However, with the competing risk of death, the Cox proportional hazards model that treats death as non-informative censoring and evaluates treatment effects on time to the first nonfatal event provides an estimate of the cause-specific hazard ratio, which may not correspond to the direct effect. To obtain the direct effect on the underlying first nonfatal event process, within the principal stratification framework, we define the principal stratum hazard and introduce the proportional principal stratum hazards model. This model estimates the principal stratum hazard ratio, which reflects the direct effect on the underlying first nonfatal event process in the presence of death and simplifies to the hazard ratio in the absence of death. The principal stratum membership is identified probabilistically using the shared frailty model, which assumes independence between the first nonfatal event process and the potential death processes, conditional on per-subject random frailty. Simulation studies are conducted to verify the reliability of our estimators. We illustrate the method using the Carvedilol Prospective Randomized Cumulative Survival trial, which involves heart-failure events.
{"title":"Causal Inference for First Non-Fatal Events With the Competing Risk of Death: A Principal Stratification Approach.","authors":"Jiren Sun, Thomas Cook","doi":"10.1002/sim.70311","DOIUrl":"10.1002/sim.70311","url":null,"abstract":"<p><p>In clinical trials involving both mortality and morbidity, an active treatment can influence the observed risk of the first nonfatal event either directly, through its effect on the underlying nonfatal event process, or indirectly, through its effect on the death process, or both. Discerning the direct effect of treatment on the underlying first nonfatal event process holds clinical interest. However, with the competing risk of death, the Cox proportional hazards model that treats death as non-informative censoring and evaluates treatment effects on time to the first nonfatal event provides an estimate of the cause-specific hazard ratio, which may not correspond to the direct effect. To obtain the direct effect on the underlying first nonfatal event process, within the principal stratification framework, we define the principal stratum hazard and introduce the proportional principal stratum hazards model. This model estimates the principal stratum hazard ratio, which reflects the direct effect on the underlying first nonfatal event process in the presence of death and simplifies to the hazard ratio in the absence of death. The principal stratum membership is identified probabilistically using the shared frailty model, which assumes independence between the first nonfatal event process and the potential death processes, conditional on per-subject random frailty. Simulation studies are conducted to verify the reliability of our estimators. We illustrate the method using the Carvedilol Prospective Randomized Cumulative Survival trial, which involves heart-failure events.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70311"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12625808/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145542626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A Bayesian Two-Step Multiple Imputation Approach Based on Mixed Models for Missing EMA Data."
Yiheng Wei, Juned Siddique, Bonnie Spring, Donald Hedeker
Ecological Momentary Assessments (EMA) capture real-time thoughts and behaviors in natural settings, producing rich longitudinal data for statistical analyses. However, the robustness of these analyses can be compromised by the large amount of missing data in EMA studies. To address this, multiple imputation, a method that replaces missing values with several plausible alternatives, has become increasingly popular. In this article, we introduce a two-step Bayesian multiple imputation framework that leverages the structure of mixed models. We adopt and compare: (1) the Random Intercept Linear Mixed model; (2) the Mixed-effects Location Scale (MELS) model, which allows the within-subject variance to depend on covariates and random effects; and (3) the Shared Parameter MELS model, which additionally links the missingness to the response variable through a random intercept logistic model. Any of these three models can be used to form the posterior distribution within the framework. In the simulation study, we extend this two-step Bayesian multiple imputation strategy to handle simultaneously missing variables in EMA data and compare the effectiveness of multiple imputation across the three mixed models. Our analyses highlight the advantages of multiple imputation over single imputation and underscore the importance of selecting an appropriate model for the imputation process. Specifically, modeling within-subject variance and linking the missingness mechanism to the response greatly improve performance in certain scenarios. Furthermore, we applied our techniques to the "Make Better Choices 1 (MBC1)" study, highlighting in particular the distinction between the imputation results of the Random Intercept Linear Mixed model and the two MELS models in terms of modeling within-subject variance.
{"title":"A Bayesian Two-Step Multiple Imputation Approach Based on Mixed Models for Missing EMA Data.","authors":"Yiheng Wei, Juned Siddique, Bonnie Spring, Donald Hedeker","doi":"10.1002/sim.70325","DOIUrl":"10.1002/sim.70325","url":null,"abstract":"<p><p>Ecological Momentary Assessments (EMA) capture real-time thoughts and behaviors in natural settings, producing rich longitudinal data for statistical analyses. However, the robustness of these analyses can be compromised by the large amount of missing data in EMA studies. To address this, multiple imputation, a method that replaces missing values with several plausible alternatives, has become increasingly popular. In this article, we introduce a two-step Bayesian multiple imputation framework which leverages the configuration of mixed models. We adopt and compare: (1) the Random Intercept Linear Mixed model; (2) the Mixed-effect Location Scale (MELS) model which accounts for subject variance influenced by covariates and random effects; and (3) the Shared Parameter MELS model which additionally links the missing data to the response variable through a random intercept logistic model. Each of these three can be used to complete the posterior distribution within the framework. In the simulation study, we extend this two-step Bayesian multiple imputation strategy to handle simultaneous missing variables in EMA data and compare the effectiveness of the multiple imputations across the three mixed models. Our analyses highlight the advantages of multiple imputations over single imputations and underscore the importance of selecting an appropriate model for the imputation process. Specifically, modeling within-subject variance and linking the missingness mechanism to the response will greatly improve the performance in certain scenarios. Furthermore, we applied our techniques to the \"Make Better Choices 1 (MBC1)\" study, highlighting the distinction, in particular, of imputation results between the Random Intercept Linear Mixed model and the two MELS models in terms of modeling within-subject variance.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70325"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12628364/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145550997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Modeling Alzheimer's Disease Biomarkers' Trajectory in the Absence of a Gold Standard Using a Bayesian Approach."
Wei Jin, Yanxun Xu, Zheyu Wang
To advance our understanding of Alzheimer's Disease (AD), especially during the preclinical stage when patients' brain functions are mostly intact, recent research has shifted towards studying AD biomarkers across the disease continuum. A widely adopted framework in AD research, proposed by Jack and colleagues, maps the progression of these biomarkers from the preclinical stage to symptomatic stages, linking their changes to the underlying pathophysiological processes of the disease. However, most existing studies rely on clinical diagnoses as a proxy for underlying AD status, potentially overlooking early stages of disease progression where biomarker changes occur before clinical symptoms appear. In this work, we develop a novel Bayesian approach to directly model the underlying AD status as a latent disease process and biomarker trajectories as nonlinear functions of disease progression. This allows for more data-driven exploration of AD progression, reducing potential biases due to inaccurate clinical diagnoses. We address the considerable heterogeneity among individuals' biomarker measurements by introducing a subject-specific latent disease trajectory as well as incorporating random intercepts to further capture additional inter-subject differences in biomarker measurements. We evaluate our model's performance through simulation studies. Applications to the Alzheimer's Disease Neuroimaging Initiative (ADNI) study yield interpretable clinical insights, illustrating the potential of our approach in facilitating the understanding of AD biomarker evolution.
{"title":"Modeling Alzheimer's Disease Biomarkers' Trajectory in the Absence of a Gold Standard Using a Bayesian Approach.","authors":"Wei Jin, Yanxun Xu, Zheyu Wang","doi":"10.1002/sim.70283","DOIUrl":"10.1002/sim.70283","url":null,"abstract":"<p><p>To advance our understanding of Alzheimer's Disease (AD), especially during the preclinical stage when patients' brain functions are mostly intact, recent research has shifted towards studying AD biomarkers across the disease continuum. A widely adopted framework in AD research, proposed by Jack and colleagues, maps the progression of these biomarkers from the preclinical stage to symptomatic stages, linking their changes to the underlying pathophysiological processes of the disease. However, most existing studies rely on clinical diagnoses as a proxy for underlying AD status, potentially overlooking early stages of disease progression where biomarker changes occur before clinical symptoms appear. In this work, we develop a novel Bayesian approach to directly model the underlying AD status as a latent disease process and biomarker trajectories as nonlinear functions of disease progression. This allows for more data-driven exploration of AD progression, reducing potential biases due to inaccurate clinical diagnoses. We address the considerable heterogeneity among individuals' biomarker measurements by introducing a subject-specific latent disease trajectory as well as incorporating random intercepts to further capture additional inter-subject differences in biomarker measurements. We evaluate our model's performance through simulation studies. Applications to the Alzheimer's Disease Neuroimaging Initiative (ADNI) study yield interpretable clinical insights, illustrating the potential of our approach in facilitating the understanding of AD biomarker evolution.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70283"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12778863/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145490398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Independent Increments and Group Sequential Tests."
Anastasios A Tsiatis, Marie Davidian
Widely used methods and software for group sequential tests of a null hypothesis of no treatment difference that allow for early stopping of a clinical trial depend primarily on the fact that sequentially-computed test statistics have the independent increments property. However, there are many practical situations where the sequentially-computed test statistics do not possess this property. Key examples are in trials where the primary outcome is a time to an event but where the assumption of proportional hazards is likely violated, motivating consideration of treatment effects such as the difference in restricted mean survival time or the use of approaches that are alternatives to the familiar logrank test, in which case the associated test statistics may not possess independent increments. We show that, regardless of the covariance structure of sequentially-computed test statistics, one can always derive linear combinations of these test statistics sequentially that do have the independent increments property. We also describe how to best choose these linear combinations to target specific alternative hypotheses, such as proportional or non-proportional hazards or log odds alternatives. We thus derive new, sequentially-computed test statistics that not only have the independent increments property, supporting straightforward use of existing methods and software, but that also have greater power against target alternative hypotheses than do procedures based on the original test statistics, regardless of whether or not the original statistics have the independent increments property. We illustrate with two examples.
{"title":"Independent Increments and Group Sequential Tests.","authors":"Anastasios A Tsiatis, Marie Davidian","doi":"10.1002/sim.70307","DOIUrl":"10.1002/sim.70307","url":null,"abstract":"<p><p>Widely used methods and software for group sequential tests of a null hypothesis of no treatment difference that allow for early stopping of a clinical trial depend primarily on the fact that sequentially-computed test statistics have the independent increments property. However, there are many practical situations where the sequentially-computed test statistics do not possess this property. Key examples are in trials where the primary outcome is a time to an event but where the assumption of proportional hazards is likely violated, motivating consideration of treatment effects such as the difference in restricted mean survival time or the use of approaches that are alternatives to the familiar logrank test, in which case the associated test statistics may not possess independent increments. We show that, regardless of the covariance structure of sequentially-computed test statistics, one can always derive linear combinations of these test statistics sequentially that do have the independent increments property. We also describe how to best choose these linear combinations to target specific alternative hypotheses, such as proportional or non-proportional hazards or log odds alternatives. We thus derive new, sequentially-computed test statistics that not only have the independent increments property, supporting straightforward use of existing methods and software, but that also have greater power against target alternative hypotheses than do procedures based on the original test statistics, regardless of whether or not the original statistics have the independent increments property. We illustrate with two examples.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"44 25-27","pages":"e70307"},"PeriodicalIF":1.8,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12593325/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145460105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}