Pub Date: 2025-01-15 | Epub Date: 2024-12-09 | DOI: 10.1002/sim.10297
Angela Carollo, Paul Eilers, Hein Putter, Jutta Gampe
Hazard models are the most commonly used tool to analyze time-to-event data. If more than one time scale is relevant for the event under study, models are required that can incorporate the dependence of a hazard along two (or more) time scales. Such models should be flexible enough to capture the joint influence of several time scales, and nonparametric smoothing techniques are obvious candidates. P-splines offer a flexible way to specify such hazard surfaces, and estimation is achieved by maximizing a penalized Poisson likelihood. Standard observation schemes, such as right-censoring and left-truncation, can be accommodated in a straightforward manner. Proportional hazards regression with a baseline hazard varying over two time scales is presented. Efficient computation is possible by generalized linear array model (GLAM) algorithms or by exploiting a sparse mixed model formulation. A companion R-package is provided.
{"title":"Smooth Hazards With Multiple Time Scales.","authors":"Angela Carollo, Paul Eilers, Hein Putter, Jutta Gampe","doi":"10.1002/sim.10297","DOIUrl":"10.1002/sim.10297","url":null,"abstract":"<p><p>Hazard models are the most commonly used tool to analyze time-to-event data. If more than one time scale is relevant for the event under study, models are required that can incorporate the dependence of a hazard along two (or more) time scales. Such models should be flexible to capture the joint influence of several time scales, and nonparametric smoothing techniques are obvious candidates. <math> <semantics><mrow><mi>P</mi></mrow> <annotation>$$ P $$</annotation></semantics> </math> -splines offer a flexible way to specify such hazard surfaces, and estimation is achieved by maximizing a penalized Poisson likelihood. Standard observation schemes, such as right-censoring and left-truncation, can be accommodated in a straightforward manner. Proportional hazards regression with a baseline hazard varying over two time scales is presented. Efficient computation is possible by generalized linear array model (GLAM) algorithms or by exploiting a sparse mixed model formulation. A companion R-package is provided.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"e10297"},"PeriodicalIF":1.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142795142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-15 | Epub Date: 2024-12-11 | DOI: 10.1002/sim.10300
Marie Analiz April Limpoco, Christel Faes, Niel Hens
In medical research, individual-level patient data provide invaluable information, but the patients' right to confidentiality remains of utmost priority. This poses a huge challenge when estimating statistical models such as a linear mixed model, an extension of linear regression that can account for potential heterogeneity whenever data come from different data providers. Federated learning tackles this hurdle by estimating parameters without retrieving individual-level data; instead, iterative communication of parameter estimate updates between the data providers and analysts is required. In this article, we propose an alternative framework to federated learning for fitting linear mixed models. Specifically, our approach requires each data provider to share only the mean, covariance, and sample size of multiple covariates, and to do so only once. Using the principle of statistical sufficiency within the likelihood framework as theoretical support, this strategy achieves estimates identical to those derived from the actual individual-level data. We demonstrate the approach through real data on 15 068 patient records from 70 clinics at the Children's Hospital of Pennsylvania. Assuming that each clinic shares its summary statistics only once, we model the COVID-19 polymerase chain reaction test cycle threshold as a function of patient information. Simplicity, communication efficiency, generalisability, and a wider scope of implementation in any statistical software distinguish our approach from existing strategies in the literature.
{"title":"Linear Mixed Modeling of Federated Data When Only the Mean, Covariance, and Sample Size Are Available.","authors":"Marie Analiz April Limpoco, Christel Faes, Niel Hens","doi":"10.1002/sim.10300","DOIUrl":"10.1002/sim.10300","url":null,"abstract":"<p><p>In medical research, individual-level patient data provide invaluable information, but the patients' right to confidentiality remains of utmost priority. This poses a huge challenge when estimating statistical models such as a linear mixed model, which is an extension of linear regression models that can account for potential heterogeneity whenever data come from different data providers. Federated learning tackles this hurdle by estimating parameters without retrieving individual-level data. Instead, iterative communication of parameter estimate updates between the data providers and analysts is required. In this article, we propose an alternative framework to federated learning for fitting linear mixed models. Specifically, our approach only requires the mean, covariance, and sample size of multiple covariates from different data providers once. Using the principle of statistical sufficiency within the likelihood framework as theoretical support, this proposed strategy achieves estimates identical to those derived from actual individual-level data. We demonstrate this approach through real data on 15 068 patient records from 70 clinics at the Children's Hospital of Pennsylvania. Assuming that each clinic only shares summary statistics once, we model the COVID-19 polymerase chain reaction test cycle threshold as a function of patient information. Simplicity, communication efficiency, generalisability, and wider scope of implementation in any statistical software distinguish our approach from existing strategies in the literature.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"e10300"},"PeriodicalIF":1.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-30 | Epub Date: 2024-11-14 | DOI: 10.1002/sim.10225
Changhui Yuan, Shishun Zhao, Shuwei Li, Xinyuan Song
Partially linear models provide a valuable tool for modeling failure time data with nonlinear covariate effects. Their applicability and importance in survival analysis have been widely acknowledged. To date, numerous inference methods for such models have been developed under traditional right censoring. However, existing studies seldom target interval-censored data, which provide coarser information and frequently arise in scientific studies involving periodic follow-up. In this work, we propose a flexible class of partially linear transformation models to examine parametric and nonparametric covariate effects for interval-censored outcomes. We consider a sieve maximum likelihood estimation approach that approximates the cumulative baseline hazard function and the nonparametric covariate effect with monotone splines and B-splines, respectively. We develop an easy-to-implement expectation-maximization algorithm coupled with three-stage data augmentation to facilitate the maximization. We establish the consistency of the proposed estimators and the asymptotic distribution of the parametric components based on empirical process techniques. Numerical results from extensive simulation studies indicate that the proposed method performs satisfactorily in finite samples. An application to a study of hypobaric decompression sickness suggests that the variable TR360 exhibits a significant dynamic and nonlinear effect on the risk of developing hypobaric decompression sickness.
{"title":"Sieve Maximum Likelihood Estimation of Partially Linear Transformation Models With Interval-Censored Data.","authors":"Changhui Yuan, Shishun Zhao, Shuwei Li, Xinyuan Song","doi":"10.1002/sim.10225","DOIUrl":"10.1002/sim.10225","url":null,"abstract":"<p><p>Partially linear models provide a valuable tool for modeling failure time data with nonlinear covariate effects. Their applicability and importance in survival analysis have been widely acknowledged. To date, numerous inference methods for such models have been developed under traditional right censoring. However, the existing studies seldom target interval-censored data, which provide more coarse information and frequently occur in many scientific studies involving periodical follow-up. In this work, we propose a flexible class of partially linear transformation models to examine parametric and nonparametric covariate effects for interval-censored outcomes. We consider the sieve maximum likelihood estimation approach that approximates the cumulative baseline hazard function and nonparametric covariate effect with the monotone splines and <math> <semantics><mrow><mi>B</mi></mrow> <annotation>$$ B $$</annotation></semantics> </math> -splines, respectively. We develop an easy-to-implement expectation-maximization algorithm coupled with three-stage data augmentation to facilitate maximization. We establish the consistency of the proposed estimators and the asymptotic distribution of parametric components based on the empirical process techniques. Numerical results from extensive simulation studies indicate that our proposed method performs satisfactorily in finite samples. An application to a study on hypobaric decompression sickness suggests that the variable TR360 exhibits a significant dynamic and nonlinear effect on the risk of developing hypobaric decompression sickness.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5765-5776"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142628019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-30 | Epub Date: 2024-11-26 | DOI: 10.1002/sim.10285
Stuart G Baker
Multicancer detection (MCD) tests use blood specimens to detect preclinical cancers. A major concern is overdiagnosis, the detection of preclinical cancer on screening that would not have developed into symptomatic cancer in the absence of screening. Because overdiagnosis can lead to unnecessary and harmful treatments, its quantification is important. A key metric is the screen overdiagnosis fraction (SOF), the probability of overdiagnosis at screen detection. Estimating SOF is notoriously difficult because overdiagnosis is not observed. This estimation is more challenging with MCD tests because short-term results are needed as the technology is rapidly changing. To estimate average SOF for a program of yearly MCD tests, I introduce a novel method that requires at least two yearly MCD tests given to persons having a wide range of ages and applies only to cancers for which there is no conventional screening. The method assumes an exponential distribution for the sojourn time in an operational screen-detectable preclinical cancer (OPC) state, defined as once screen-detectable (positive screen and work-up), always screen-detectable. Because this assumption appears in only one term in the SOF formula, the results are robust to violations of the assumption. An SOF plot graphs average SOF versus mean sojourn time. With lung cancer screening data and synthetic data, SOF plots distinguished small from moderate levels of SOF. With its unique set of assumptions, the SOF plot would complement other modeling approaches for estimating SOF once sufficient short-term observational data on MCD tests become available.
{"title":"Quantifying Overdiagnosis for Multicancer Detection Tests: A Novel Method.","authors":"Stuart G Baker","doi":"10.1002/sim.10285","DOIUrl":"10.1002/sim.10285","url":null,"abstract":"<p><p>Multicancer detection (MCD) tests use blood specimens to detect preclinical cancers. A major concern is overdiagnosis, the detection of preclinical cancer on screening that would not have developed into symptomatic cancer in the absence of screening. Because overdiagnosis can lead to unnecessary and harmful treatments, its quantification is important. A key metric is the screen overdiagnosis fraction (SOF), the probability of overdiagnosis at screen detection. Estimating SOF is notoriously difficult because overdiagnosis is not observed. This estimation is more challenging with MCD tests because short-term results are needed as the technology is rapidly changing. To estimate average SOF for a program of yearly MCD tests, I introduce a novel method that requires at least two yearly MCD tests given to persons having a wide range of ages and applies only to cancers for which there is no conventional screening. The method assumes an exponential distribution for the sojourn time in an operational screen-detectable preclinical cancer (OPC) state, defined as once screen-detectable (positive screen and work-up), always screen-detectable. Because this assumption appears in only one term in the SOF formula, the results are robust to violations of the assumption. An SOF plot graphs average SOF versus mean sojourn time. With lung cancer screening data and synthetic data, SOF plots distinguished small from moderate levels of SOF. With its unique set of assumptions, the SOF plot would complement other modeling approaches for estimating SOF once sufficient short-term observational data on MCD tests become available.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5935-5943"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639630/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142732807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-30 | Epub Date: 2024-12-01 | DOI: 10.1002/sim.10277
Giuliano Netto Flores Cruz, Keegan Korthauer
Clinical decisions are often guided by clinical prediction models or diagnostic tests. Decision curve analysis (DCA) combines classical assessment of predictive performance with the consequences of using these strategies for clinical decision-making. In DCA, the best decision strategy is the one that maximizes the net benefit: the net number of true positives (or negatives) provided by a given strategy. Here, we employ Bayesian approaches to DCA, addressing four fundamental concerns when evaluating clinical decision strategies: (i) which strategies are clinically useful, (ii) what is the best available decision strategy, (iii) which of two competing strategies is better, and (iv) what is the expected net benefit loss associated with the current level of uncertainty. While often consistent with frequentist point estimates, fully Bayesian DCA allows for an intuitive probabilistic interpretation framework and the incorporation of prior evidence. We evaluate the methods using simulation and provide a comprehensive case study. A software implementation is available in the bayesDCA R package. Ultimately, the Bayesian DCA workflow may help clinicians and health policymakers make better-informed decisions.
{"title":"Bayesian Decision Curve Analysis With Bayesdca.","authors":"Giuliano Netto Flores Cruz, Keegan Korthauer","doi":"10.1002/sim.10277","DOIUrl":"10.1002/sim.10277","url":null,"abstract":"<p><p>Clinical decisions are often guided by clinical prediction models or diagnostic tests. Decision curve analysis (DCA) combines classical assessment of predictive performance with the consequences of using these strategies for clinical decision-making. In DCA, the best decision strategy is the one that maximizes the net benefit: the net number of true positives (or negatives) provided by a given strategy. Here, we employ Bayesian approaches to DCA, addressing four fundamental concerns when evaluating clinical decision strategies: (i) which strategies are clinically useful, (ii) what is the best available decision strategy, (iii) which of two competing strategies is better, and (iv) what is the expected net benefit loss associated with the current level of uncertainty. While often consistent with frequentist point estimates, fully Bayesian DCA allows for an intuitive probabilistic interpretation framework and the incorporation of prior evidence. We evaluate the methods using simulation and provide a comprehensive case study. Software implementation is available in the bayesDCA R package. Ultimately, the Bayesian DCA workflow may help clinicians and health policymakers adopt better-informed decisions.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"6042-6058"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639651/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142772448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-30 | Epub Date: 2024-12-01 | DOI: 10.1002/sim.10293
Seungjae Lee, Boram Jeong, Donghwan Lee, Woojoo Lee
In epidemiological studies, evaluating the health impacts of multiple exposures is an important goal. To analyze the effects of multiple exposures on discrete or time-to-event health outcomes, researchers often employ generalized linear models, Cox proportional hazards models, and machine learning methods. However, observational studies are prone to unmeasured confounding, which can introduce substantial bias into the estimated effects of the multiple exposures. To address this issue, we propose a novel outcome-model-based sensitivity analysis method for non-Gaussian and time-to-event outcomes with multiple exposures. All of the proposed sensitivity analysis problems are formulated as linear programming problems with quadratic and linear constraints, which can be solved efficiently. Analytic solutions are provided for some optimization problems, and a numerical study is performed to examine how the proposed sensitivity analysis behaves in finite samples. We illustrate the proposed method using two real data examples.
{"title":"Sensitivity Analysis for Effects of Multiple Exposures in the Presence of Unmeasured Confounding: Non-Gaussian and Time-to-Event Outcomes.","authors":"Seungjae Lee, Boram Jeong, Donghwan Lee, Woojoo Lee","doi":"10.1002/sim.10293","DOIUrl":"10.1002/sim.10293","url":null,"abstract":"<p><p>In epidemiological studies, evaluating the health impacts stemming from multiple exposures is one of the important goals. To analyze the effects of multiple exposures on discrete or time-to-event health outcomes, researchers often employ generalized linear models, Cox proportional hazards models, and machine learning methods. However, observational studies are prone to unmeasured confounding factors, which can introduce the potential for substantial bias in the multiple exposure effects. To address this issue, we propose a novel outcome model-based sensitivity analysis method for non-Gaussian and time-to-event outcomes with multiple exposures. All the proposed sensitivity analysis problems are formulated as linear programming problems with quadratic and linear constraints, which can be solved efficiently. Analytic solutions are provided for some optimization problems, and a numerical study is performed to examine how the proposed sensitivity analysis behaves in finite samples. We illustrate the proposed method using two real data examples.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5996-6025"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142772469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-30 | Epub Date: 2024-12-05 | DOI: 10.1002/sim.10298
Alexander W Levis, Rajarshi Mukherjee, Rui Wang, Heidi Fischer, Sebastien Haneuse
Missing data arise in most applied settings and are ubiquitous in electronic health records (EHR). When data are missing not at random (MNAR) with respect to measured covariates, sensitivity analyses are often considered. These solutions, however, are often unsatisfying in that they are not guaranteed to yield actionable conclusions. Motivated by an EHR-based study of long-term outcomes following bariatric surgery, we consider the use of double sampling as a means to mitigate MNAR outcome data when the statistical goals are estimation and inference regarding causal effects. We describe assumptions that are sufficient for the identification of the joint distribution of confounders, treatment, and outcome under this design. Additionally, we derive efficient and robust estimators of the average causal treatment effect under a nonparametric model and under a model assuming outcomes were, in fact, initially missing at random (MAR). We compare these in simulations to an approach that adaptively estimates based on evidence of violation of the MAR assumption. Finally, we also show that the proposed double sampling design can be extended to handle arbitrary coarsening mechanisms, and derive nonparametric efficient estimators of any smooth full data functional.
{"title":"Double Sampling for Informatively Missing Data in Electronic Health Record-Based Comparative Effectiveness Research.","authors":"Alexander W Levis, Rajarshi Mukherjee, Rui Wang, Heidi Fischer, Sebastien Haneuse","doi":"10.1002/sim.10298","DOIUrl":"10.1002/sim.10298","url":null,"abstract":"<p><p>Missing data arise in most applied settings and are ubiquitous in electronic health records (EHR). When data are missing not at random (MNAR) with respect to measured covariates, sensitivity analyses are often considered. These solutions, however, are often unsatisfying in that they are not guaranteed to yield actionable conclusions. Motivated by an EHR-based study of long-term outcomes following bariatric surgery, we consider the use of double sampling as a means to mitigate MNAR outcome data when the statistical goals are estimation and inference regarding causal effects. We describe assumptions that are sufficient for the identification of the joint distribution of confounders, treatment, and outcome under this design. Additionally, we derive efficient and robust estimators of the average causal treatment effect under a nonparametric model and under a model assuming outcomes were, in fact, initially missing at random (MAR). We compare these in simulations to an approach that adaptively estimates based on evidence of violation of the MAR assumption. Finally, we also show that the proposed double sampling design can be extended to handle arbitrary coarsening mechanisms, and derive nonparametric efficient estimators of any smooth full data functional.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"6086-6098"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639654/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142786604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-30 | Epub Date: 2024-11-28 | DOI: 10.1002/sim.10278
Marizeh Mussavi Rizi, Joel A Dubin, Micheal P Wallace
Identifying interventions that are optimally tailored to each individual is of significant interest in various fields, in particular precision medicine. Dynamic treatment regimes (DTRs) employ sequences of decision rules that use individual patient information to recommend treatments. However, the assumption that an individual's treatment does not affect the outcomes of others, known as the no-interference assumption, is often challenged in practical settings. For example, in infectious disease studies, the vaccine status of individuals in close proximity can influence the likelihood of infection. Imposing this assumption when it does not, in fact, hold may lead to biased results and compromise the validity of the resulting DTR optimization. We extend the dynamic weighted ordinary least squares (dWOLS) estimation method, a doubly robust and easily implemented approach for estimating optimal DTRs, to incorporate interference within dyads (i.e., pairs of individuals). We formalize an appropriate outcome model and describe the estimation of an optimal decision rule in the dyadic-network context. Through comprehensive simulations and an analysis of the Population Assessment of Tobacco and Health (PATH) data, we demonstrate the improved performance of the proposed joint optimization strategy, compared with current state-of-the-art conditional optimization methods, in estimating the optimal treatment assignments when within-dyad interference exists.
{"title":"Dynamic Treatment Regimes on Dyadic Networks.","authors":"Marizeh Mussavi Rizi, Joel A Dubin, Micheal P Wallace","doi":"10.1002/sim.10278","DOIUrl":"10.1002/sim.10278","url":null,"abstract":"<p><p>Identifying interventions that are optimally tailored to each individual is of significant interest in various fields, in particular precision medicine. Dynamic treatment regimes (DTRs) employ sequences of decision rules that utilize individual patient information to recommend treatments. However, the assumption that an individual's treatment does not impact the outcomes of others, known as the no interference assumption, is often challenged in practical settings. For example, in infectious disease studies, the vaccine status of individuals in close proximity can influence the likelihood of infection. Imposing this assumption when it, in fact, does not hold, may lead to biased results and impact the validity of the resulting DTR optimization. We extend the estimation method of dynamic weighted ordinary least squares (dWOLS), a doubly robust and easily implemented approach for estimating optimal DTRs, to incorporate the presence of interference within dyads (i.e., pairs of individuals). We formalize an appropriate outcome model and describe the estimation of an optimal decision rule in the dyadic-network context. Through comprehensive simulations and analysis of the Population Assessment of Tobacco and Health (PATH) data, we demonstrate the improved performance of the proposed joint optimization strategy compared to the current state-of-the-art conditional optimization methods in estimating the optimal treatment assignments when within-dyad interference exists.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5944-5967"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639660/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142751738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To complement the conventional area under the ROC curve (AUC), which cannot fully describe the diagnostic accuracy of some non-standard biomarkers, we introduce a transformed ROC curve and its associated transformed AUC (TAUC) in this article, and show that TAUC can relate the original improper biomarker to a proper biomarker after a non-monotone transformation. We then provide nonparametric estimation of the non-monotone transformation and TAUC, and establish their consistency and asymptotic normality. We conduct extensive simulation studies to assess the performance of the proposed TAUC method and compare it with traditional methods. Case studies on real biomedical data are provided to illustrate the proposed TAUC method, which is able to identify additional important biomarkers that tend to escape the traditional screening method.
{"title":"Transformed ROC Curve for Biomarker Evaluation.","authors":"Jianping Yang, Pei-Fen Kuan, Xiangyu Li, Jialiang Li, Xiao-Hua Zhou","doi":"10.1002/sim.10268","DOIUrl":"10.1002/sim.10268","url":null,"abstract":"<p><p>To complement the conventional area under the ROC curve (AUC) which cannot fully describe the diagnostic accuracy of some non-standard biomarkers, we introduce a transformed ROC curve and its associated transformed AUC (TAUC) in this article, and show that TAUC can relate the original improper biomarker to a proper biomarker after a non-monotone transformation. We then provide nonparametric estimation of the non-monotone transformation and TAUC, and establish their consistency and asymptotic normality. We conduct extensive simulation studies to assess the performance of the proposed TAUC method and compare with the traditional methods. Case studies on real biomedical data are provided to illustrate the proposed TAUC method. We are able to identify more important biomarkers that tend to escape the traditional screening method.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5681-5697"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142628004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As a crucial tool in neuroscience, mediation analysis has been developed and widely adopted to elucidate the role of intermediary variables derived from neuroimaging data. Typically, structural equation models (SEMs) are employed to investigate the influences of exposures on outcomes, with model coefficients being interpreted as causal effects. While existing SEMs have proven to be effective tools for mediation analysis involving various neuroimaging-related mediators, limited research has explored scenarios where these mediators are derived from the shape space. In addition, the linear relationship assumption adopted in existing SEMs may lead to substantial efficiency losses and decreased predictive accuracy in real-world applications. To address these challenges, we introduce a novel framework for shape mediation analysis, designed to explore the causal relationships between genetic exposures and clinical outcomes, whether mediated or unmediated by shape-related factors while accounting for potential confounding variables. Within our framework, we apply the square-root velocity function to extract elastic shape representations, which reside within the linear Hilbert space of square-integrable functions. Subsequently, we introduce a two-layer shape regression model to characterize the relationships among neurocognitive outcomes, elastic shape mediators, genetic exposures, and clinical confounders. Both estimation and inference procedures are established for unknown parameters along with the corresponding causal estimands. The asymptotic properties of estimated quantities are investigated as well. Both simulated studies and real-data analyses demonstrate the superior performance of our proposed method in terms of estimation accuracy and robustness when compared to existing approaches for estimating causal estimands.
{"title":"Shape Mediation Analysis in Alzheimer's Disease Studies.","authors":"Xingcai Zhou, Miyeon Yeon, Jiangyan Wang, Shengxian Ding, Kaizhou Lei, Yanyong Zhao, Rongjie Liu, Chao Huang","doi":"10.1002/sim.10265","DOIUrl":"10.1002/sim.10265","url":null,"abstract":"<p><p>As a crucial tool in neuroscience, mediation analysis has been developed and widely adopted to elucidate the role of intermediary variables derived from neuroimaging data. Typically, structural equation models (SEMs) are employed to investigate the influences of exposures on outcomes, with model coefficients being interpreted as causal effects. While existing SEMs have proven to be effective tools for mediation analysis involving various neuroimaging-related mediators, limited research has explored scenarios where these mediators are derived from the shape space. In addition, the linear relationship assumption adopted in existing SEMs may lead to substantial efficiency losses and decreased predictive accuracy in real-world applications. To address these challenges, we introduce a novel framework for shape mediation analysis, designed to explore the causal relationships between genetic exposures and clinical outcomes, whether mediated or unmediated by shape-related factors while accounting for potential confounding variables. Within our framework, we apply the square-root velocity function to extract elastic shape representations, which reside within the linear Hilbert space of square-integrable functions. Subsequently, we introduce a two-layer shape regression model to characterize the relationships among neurocognitive outcomes, elastic shape mediators, genetic exposures, and clinical confounders. Both estimation and inference procedures are established for unknown parameters along with the corresponding causal estimands. The asymptotic properties of estimated quantities are investigated as well. Both simulated studies and real-data analyses demonstrate the superior performance of our proposed method in terms of estimation accuracy and robustness when compared to existing approaches for estimating causal estimands.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5698-5710"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142628013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}