Asymptotic Properties of Matthews Correlation Coefficient
Pub Date: 2025-01-15 | Epub Date: 2024-12-16 | DOI: 10.1002/sim.10303
Yuki Itaya, Jun Tamura, Kenichi Hayashi, Kouji Yamamoto
Evaluating classifications is crucial in statistics and machine learning, as it influences decision-making across various fields, such as patient prognosis and therapy in critical conditions. The Matthews correlation coefficient (MCC), also known as the phi coefficient, is recognized as a performance metric with high reliability, offering a balanced measurement even in the presence of class imbalances. Despite its importance, there remains a notable lack of comprehensive research on the statistical inference of MCC. This deficiency often leads to studies merely validating and comparing MCC point estimates, a practice that, while common, overlooks the statistical significance and reliability of results. Addressing this research gap, our paper introduces and evaluates several methods to construct asymptotic confidence intervals for the single MCC and the differences between MCCs in paired designs. Through simulations across various scenarios, we evaluate the finite-sample behavior of these methods and compare their performances. Furthermore, through real data analysis, we illustrate the potential utility of our findings in comparing binary classifiers, highlighting the possible contributions of our research in this field.
{"title":"Asymptotic Properties of Matthews Correlation Coefficient.","authors":"Yuki Itaya, Jun Tamura, Kenichi Hayashi, Kouji Yamamoto","doi":"10.1002/sim.10303","DOIUrl":"10.1002/sim.10303","url":null,"abstract":"<p><p>Evaluating classifications is crucial in statistics and machine learning, as it influences decision-making across various fields, such as patient prognosis and therapy in critical conditions. The Matthews correlation coefficient (MCC), also known as the phi coefficient, is recognized as a performance metric with high reliability, offering a balanced measurement even in the presence of class imbalances. Despite its importance, there remains a notable lack of comprehensive research on the statistical inference of MCC. This deficiency often leads to studies merely validating and comparing MCC point estimates-a practice that, while common, overlooks the statistical significance and reliability of results. Addressing this research gap, our paper introduces and evaluates several methods to construct asymptotic confidence intervals for the single MCC and the differences between MCCs in paired designs. Through simulations across various scenarios, we evaluate the finite-sample behavior of these methods and compare their performances. Furthermore, through real data analysis, we illustrate the potential utility of our findings in comparing binary classifiers, highlighting the possible contributions of our research in this field.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"e10303"},"PeriodicalIF":1.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142839901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On GEE for Mean-Variance-Correlation Models: Variance Estimation and Model Selection
Pub Date: 2025-01-15 | Epub Date: 2024-12-12 | DOI: 10.1002/sim.10271
Zhenyu Xu, Jason P Fine, Wenling Song, Jun Yan
Generalized estimating equations (GEE) are of great importance in analyzing clustered data without full specification of multivariate distributions. A recent approach by Luo and Pan jointly models the mean, variance, and correlation coefficients of clustered data through three sets of regressions. We note that it represents a specific case of the more general estimating equations proposed by Yan and Fine, which further allow the variance to depend on the mean through a variance function. In certain scenarios, the proposed variance estimators for the variance and correlation parameters in Luo and Pan may face challenges due to the subtle dependence induced by the nested structure of the estimating equations. We characterize specific model settings where their variance estimation approach may encounter limitations and illustrate how the variance estimators in Yan and Fine can correctly account for such dependencies. In addition, we introduce a novel model selection criterion that enables the simultaneous selection of the mean-scale-correlation model. The sandwich variance estimator and the proposed model selection criterion are assessed in several simulation studies and a real data analysis, which validate their effectiveness in variance estimation and model selection. Our work also extends the R package geepack with the flexibility to apply different working covariance matrices for the variance and correlation structures.
{"title":"On GEE for Mean-Variance-Correlation Models: Variance Estimation and Model Selection.","authors":"Zhenyu Xu, Jason P Fine, Wenling Song, Jun Yan","doi":"10.1002/sim.10271","DOIUrl":"10.1002/sim.10271","url":null,"abstract":"<p><p>Generalized estimating equations (GEE) are of great importance in analyzing clustered data without full specification of multivariate distributions. A recent approach by Luo and Pan jointly models the mean, variance, and correlation coefficients of clustered data through three sets of regressions. We note that it represents a specific case of the more general estimating equations proposed by Yan and Fine which further allow the variance to depend on the mean through a variance function. In certain scenarios, the proposed variance estimators for the variance and correlation parameters in Luo and Pan may face challenges due to the subtle dependence induced by the nested structure of the estimating equations. We characterize specific model settings where their variance estimation approach may encounter limitations and illustrate how the variance estimators in Yan and Fine can correctly account for such dependencies. In addition, we introduce a novel model selection criterion that enables the simultaneous selection of the mean-scale-correlation model. The sandwich variance estimator and the proposed model selection criterion are tested by several simulation studies and real data analysis, which validate its effectiveness in variance estimation and model selection. Our work also extends the R package geepack with the flexibility to apply different working covariance matrices for the variance and correlation structures.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"e10271"},"PeriodicalIF":1.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Simple Information Criterion for Variable Selection in High-Dimensional Regression
Pub Date: 2025-01-15 | Epub Date: 2024-12-12 | DOI: 10.1002/sim.10275
Matthieu Pluntz, Cyril Dalmasso, Pascale Tubert-Bitter, Ismaïl Ahmed
High-dimensional regression problems, for example with genomic or drug exposure data, typically involve automated selection of a sparse set of regressors. Penalized regression methods like the LASSO can deliver a family of candidate sparse models. To select one of them, criteria that balance log-likelihood against model size are used, the most common being the AIC and BIC. These two criteria do not take into account the implicit multiple testing performed when selecting variables in a high-dimensional regression, which makes them too liberal. We propose the extended AIC (EAIC), a new information criterion for sparse model selection in high-dimensional regressions. It allows for asymptotic FWER control when the candidate regressors are independent. It is based on a simple formula involving the model log-likelihood, the model size, the total number of candidate regressors, and the FWER target. In a simulation study over a wide range of linear and logistic regression settings, we assessed the variable selection performance of the EAIC and of other information criteria (including some that also use the number of candidate regressors: mBIC, mAIC, and EBIC) in conjunction with the LASSO. Our method controls the FWER in nearly all settings, in contrast to the AIC and BIC, which produce many false positives. We also illustrate it for the automated signal detection of adverse drug reactions on the French pharmacovigilance spontaneous reporting database.
{"title":"A Simple Information Criterion for Variable Selection in High-Dimensional Regression.","authors":"Matthieu Pluntz, Cyril Dalmasso, Pascale Tubert-Bitter, Ismaïl Ahmed","doi":"10.1002/sim.10275","DOIUrl":"10.1002/sim.10275","url":null,"abstract":"<p><p>High-dimensional regression problems, for example with genomic or drug exposure data, typically involve automated selection of a sparse set of regressors. Penalized regression methods like the LASSO can deliver a family of candidate sparse models. To select one, there are criteria balancing log-likelihood and model size, the most common being AIC and BIC. These two methods do not take into account the implicit multiple testing performed when selecting variables in a high-dimensional regression, which makes them too liberal. We propose the extended AIC (EAIC), a new information criterion for sparse model selection in high-dimensional regressions. It allows for asymptotic FWER control when the candidate regressors are independent. It is based on a simple formula involving model log-likelihood, model size, the total number of candidate regressors, and the FWER target. In a simulation study over a wide range of linear and logistic regression settings, we assessed the variable selection performance of the EAIC and of other information criteria (including some that also use the number of candidate regressors: mBIC, mAIC, and EBIC) in conjunction with the LASSO. Our method controls the FWER in nearly all settings, in contrast to the AIC and BIC, which produce many false positives. We also illustrate it for the automated signal detection of adverse drug reactions on the French pharmacovigilance spontaneous reporting database.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"e10275"},"PeriodicalIF":1.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smooth Hazards With Multiple Time Scales
Pub Date: 2025-01-15 | Epub Date: 2024-12-09 | DOI: 10.1002/sim.10297
Angela Carollo, Paul Eilers, Hein Putter, Jutta Gampe
Hazard models are the most commonly used tool to analyze time-to-event data. If more than one time scale is relevant for the event under study, models are required that can incorporate the dependence of a hazard along two (or more) time scales. Such models should be flexible to capture the joint influence of several time scales, and nonparametric smoothing techniques are obvious candidates. P-splines offer a flexible way to specify such hazard surfaces, and estimation is achieved by maximizing a penalized Poisson likelihood. Standard observation schemes, such as right-censoring and left-truncation, can be accommodated in a straightforward manner. Proportional hazards regression with a baseline hazard varying over two time scales is presented. Efficient computation is possible by generalized linear array model (GLAM) algorithms or by exploiting a sparse mixed model formulation. A companion R-package is provided.
{"title":"Smooth Hazards With Multiple Time Scales.","authors":"Angela Carollo, Paul Eilers, Hein Putter, Jutta Gampe","doi":"10.1002/sim.10297","DOIUrl":"10.1002/sim.10297","url":null,"abstract":"<p><p>Hazard models are the most commonly used tool to analyze time-to-event data. If more than one time scale is relevant for the event under study, models are required that can incorporate the dependence of a hazard along two (or more) time scales. Such models should be flexible to capture the joint influence of several time scales, and nonparametric smoothing techniques are obvious candidates. <math> <semantics><mrow><mi>P</mi></mrow> <annotation>$$ P $$</annotation></semantics> </math> -splines offer a flexible way to specify such hazard surfaces, and estimation is achieved by maximizing a penalized Poisson likelihood. Standard observation schemes, such as right-censoring and left-truncation, can be accommodated in a straightforward manner. Proportional hazards regression with a baseline hazard varying over two time scales is presented. Efficient computation is possible by generalized linear array model (GLAM) algorithms or by exploiting a sparse mixed model formulation. A companion R-package is provided.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"e10297"},"PeriodicalIF":1.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142795142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linear Mixed Modeling of Federated Data When Only the Mean, Covariance, and Sample Size Are Available
Pub Date: 2025-01-15 | Epub Date: 2024-12-11 | DOI: 10.1002/sim.10300
Marie Analiz April Limpoco, Christel Faes, Niel Hens
In medical research, individual-level patient data provide invaluable information, but the patients' right to confidentiality remains of utmost priority. This poses a major challenge when estimating statistical models such as a linear mixed model, an extension of linear regression that can account for potential heterogeneity whenever data come from different data providers. Federated learning tackles this hurdle by estimating parameters without retrieving individual-level data. Instead, iterative communication of parameter estimate updates between the data providers and analysts is required. In this article, we propose an alternative framework to federated learning for fitting linear mixed models. Specifically, our approach only requires the mean, covariance, and sample size of multiple covariates from different data providers once. Using the principle of statistical sufficiency within the likelihood framework as theoretical support, this proposed strategy achieves estimates identical to those derived from actual individual-level data. We demonstrate this approach through real data on 15 068 patient records from 70 clinics at the Children's Hospital of Pennsylvania. Assuming that each clinic only shares summary statistics once, we model the COVID-19 polymerase chain reaction test cycle threshold as a function of patient information. Simplicity, communication efficiency, generalisability, and wider scope of implementation in any statistical software distinguish our approach from existing strategies in the literature.
{"title":"Linear Mixed Modeling of Federated Data When Only the Mean, Covariance, and Sample Size Are Available.","authors":"Marie Analiz April Limpoco, Christel Faes, Niel Hens","doi":"10.1002/sim.10300","DOIUrl":"10.1002/sim.10300","url":null,"abstract":"<p><p>In medical research, individual-level patient data provide invaluable information, but the patients' right to confidentiality remains of utmost priority. This poses a huge challenge when estimating statistical models such as a linear mixed model, which is an extension of linear regression models that can account for potential heterogeneity whenever data come from different data providers. Federated learning tackles this hurdle by estimating parameters without retrieving individual-level data. Instead, iterative communication of parameter estimate updates between the data providers and analysts is required. In this article, we propose an alternative framework to federated learning for fitting linear mixed models. Specifically, our approach only requires the mean, covariance, and sample size of multiple covariates from different data providers once. Using the principle of statistical sufficiency within the likelihood framework as theoretical support, this proposed strategy achieves estimates identical to those derived from actual individual-level data. We demonstrate this approach through real data on 15 068 patient records from 70 clinics at the Children's Hospital of Pennsylvania. Assuming that each clinic only shares summary statistics once, we model the COVID-19 polymerase chain reaction test cycle threshold as a function of patient information. Simplicity, communication efficiency, generalisability, and wider scope of implementation in any statistical software distinguish our approach from existing strategies in the literature.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"e10300"},"PeriodicalIF":1.8,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sieve Maximum Likelihood Estimation of Partially Linear Transformation Models With Interval-Censored Data
Pub Date: 2024-12-30 | Epub Date: 2024-11-14 | DOI: 10.1002/sim.10225
Changhui Yuan, Shishun Zhao, Shuwei Li, Xinyuan Song
Partially linear models provide a valuable tool for modeling failure time data with nonlinear covariate effects. Their applicability and importance in survival analysis have been widely acknowledged. To date, numerous inference methods for such models have been developed under traditional right censoring. However, the existing studies seldom target interval-censored data, which provide more coarse information and frequently occur in many scientific studies involving periodical follow-up. In this work, we propose a flexible class of partially linear transformation models to examine parametric and nonparametric covariate effects for interval-censored outcomes. We consider the sieve maximum likelihood estimation approach that approximates the cumulative baseline hazard function and nonparametric covariate effect with monotone splines and B-splines, respectively. We develop an easy-to-implement expectation-maximization algorithm coupled with three-stage data augmentation to facilitate maximization. We establish the consistency of the proposed estimators and the asymptotic distribution of parametric components based on the empirical process techniques. Numerical results from extensive simulation studies indicate that our proposed method performs satisfactorily in finite samples. An application to a study on hypobaric decompression sickness suggests that the variable TR360 exhibits a significant dynamic and nonlinear effect on the risk of developing hypobaric decompression sickness.
{"title":"Sieve Maximum Likelihood Estimation of Partially Linear Transformation Models With Interval-Censored Data.","authors":"Changhui Yuan, Shishun Zhao, Shuwei Li, Xinyuan Song","doi":"10.1002/sim.10225","DOIUrl":"10.1002/sim.10225","url":null,"abstract":"<p><p>Partially linear models provide a valuable tool for modeling failure time data with nonlinear covariate effects. Their applicability and importance in survival analysis have been widely acknowledged. To date, numerous inference methods for such models have been developed under traditional right censoring. However, the existing studies seldom target interval-censored data, which provide more coarse information and frequently occur in many scientific studies involving periodical follow-up. In this work, we propose a flexible class of partially linear transformation models to examine parametric and nonparametric covariate effects for interval-censored outcomes. We consider the sieve maximum likelihood estimation approach that approximates the cumulative baseline hazard function and nonparametric covariate effect with the monotone splines and <math> <semantics><mrow><mi>B</mi></mrow> <annotation>$$ B $$</annotation></semantics> </math> -splines, respectively. We develop an easy-to-implement expectation-maximization algorithm coupled with three-stage data augmentation to facilitate maximization. We establish the consistency of the proposed estimators and the asymptotic distribution of parametric components based on the empirical process techniques. Numerical results from extensive simulation studies indicate that our proposed method performs satisfactorily in finite samples. An application to a study on hypobaric decompression sickness suggests that the variable TR360 exhibits a significant dynamic and nonlinear effect on the risk of developing hypobaric decompression sickness.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5765-5776"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142628019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantifying Overdiagnosis for Multicancer Detection Tests: A Novel Method
Pub Date: 2024-12-30 | Epub Date: 2024-11-26 | DOI: 10.1002/sim.10285
Stuart G Baker
Multicancer detection (MCD) tests use blood specimens to detect preclinical cancers. A major concern is overdiagnosis, the detection of preclinical cancer on screening that would not have developed into symptomatic cancer in the absence of screening. Because overdiagnosis can lead to unnecessary and harmful treatments, its quantification is important. A key metric is the screen overdiagnosis fraction (SOF), the probability of overdiagnosis at screen detection. Estimating SOF is notoriously difficult because overdiagnosis is not observed. This estimation is more challenging with MCD tests because short-term results are needed as the technology is rapidly changing. To estimate average SOF for a program of yearly MCD tests, I introduce a novel method that requires at least two yearly MCD tests given to persons having a wide range of ages and applies only to cancers for which there is no conventional screening. The method assumes an exponential distribution for the sojourn time in an operational screen-detectable preclinical cancer (OPC) state, defined as once screen-detectable (positive screen and work-up), always screen-detectable. Because this assumption appears in only one term in the SOF formula, the results are robust to violations of the assumption. An SOF plot graphs average SOF versus mean sojourn time. With lung cancer screening data and synthetic data, SOF plots distinguished small from moderate levels of SOF. With its unique set of assumptions, the SOF plot would complement other modeling approaches for estimating SOF once sufficient short-term observational data on MCD tests become available.
{"title":"Quantifying Overdiagnosis for Multicancer Detection Tests: A Novel Method.","authors":"Stuart G Baker","doi":"10.1002/sim.10285","DOIUrl":"10.1002/sim.10285","url":null,"abstract":"<p><p>Multicancer detection (MCD) tests use blood specimens to detect preclinical cancers. A major concern is overdiagnosis, the detection of preclinical cancer on screening that would not have developed into symptomatic cancer in the absence of screening. Because overdiagnosis can lead to unnecessary and harmful treatments, its quantification is important. A key metric is the screen overdiagnosis fraction (SOF), the probability of overdiagnosis at screen detection. Estimating SOF is notoriously difficult because overdiagnosis is not observed. This estimation is more challenging with MCD tests because short-term results are needed as the technology is rapidly changing. To estimate average SOF for a program of yearly MCD tests, I introduce a novel method that requires at least two yearly MCD tests given to persons having a wide range of ages and applies only to cancers for which there is no conventional screening. The method assumes an exponential distribution for the sojourn time in an operational screen-detectable preclinical cancer (OPC) state, defined as once screen-detectable (positive screen and work-up), always screen-detectable. Because this assumption appears in only one term in the SOF formula, the results are robust to violations of the assumption. An SOF plot graphs average SOF versus mean sojourn time. With lung cancer screening data and synthetic data, SOF plots distinguished small from moderate levels of SOF. With its unique set of assumptions, the SOF plot would complement other modeling approaches for estimating SOF once sufficient short-term observational data on MCD tests become available.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5935-5943"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639630/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142732807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transformed ROC Curve for Biomarker Evaluation
Pub Date: 2024-12-30 | DOI: 10.1002/sim.10268
Jianping Yang, Pei-Fen Kuan, Xiangyu Li, Jialiang Li, Xiao-Hua Zhou
To complement the conventional area under the ROC curve (AUC), which cannot fully describe the diagnostic accuracy of some non-standard biomarkers, we introduce a transformed ROC curve and its associated transformed AUC (TAUC) in this article, and show that TAUC can relate the original improper biomarker to a proper biomarker after a non-monotone transformation. We then provide nonparametric estimation of the non-monotone transformation and TAUC, and establish their consistency and asymptotic normality. We conduct extensive simulation studies to assess the performance of the proposed TAUC method and compare it with traditional methods. Case studies on real biomedical data illustrate the proposed TAUC method, which identifies important biomarkers that tend to escape traditional screening methods.
Bayesian Decision Curve Analysis With bayesDCA
Pub Date: 2024-12-30 | Epub Date: 2024-12-01 | DOI: 10.1002/sim.10277
Giuliano Netto Flores Cruz, Keegan Korthauer
Clinical decisions are often guided by clinical prediction models or diagnostic tests. Decision curve analysis (DCA) combines classical assessment of predictive performance with the consequences of using these strategies for clinical decision-making. In DCA, the best decision strategy is the one that maximizes the net benefit: the net number of true positives (or negatives) provided by a given strategy. Here, we employ Bayesian approaches to DCA, addressing four fundamental concerns when evaluating clinical decision strategies: (i) which strategies are clinically useful, (ii) what is the best available decision strategy, (iii) which of two competing strategies is better, and (iv) what is the expected net benefit loss associated with the current level of uncertainty. While often consistent with frequentist point estimates, fully Bayesian DCA allows for an intuitive probabilistic interpretation framework and the incorporation of prior evidence. We evaluate the methods using simulation and provide a comprehensive case study. Software implementation is available in the bayesDCA R package. Ultimately, the Bayesian DCA workflow may help clinicians and health policymakers adopt better-informed decisions.
{"title":"Bayesian Decision Curve Analysis With Bayesdca.","authors":"Giuliano Netto Flores Cruz, Keegan Korthauer","doi":"10.1002/sim.10277","DOIUrl":"10.1002/sim.10277","url":null,"abstract":"<p><p>Clinical decisions are often guided by clinical prediction models or diagnostic tests. Decision curve analysis (DCA) combines classical assessment of predictive performance with the consequences of using these strategies for clinical decision-making. In DCA, the best decision strategy is the one that maximizes the net benefit: the net number of true positives (or negatives) provided by a given strategy. Here, we employ Bayesian approaches to DCA, addressing four fundamental concerns when evaluating clinical decision strategies: (i) which strategies are clinically useful, (ii) what is the best available decision strategy, (iii) which of two competing strategies is better, and (iv) what is the expected net benefit loss associated with the current level of uncertainty. While often consistent with frequentist point estimates, fully Bayesian DCA allows for an intuitive probabilistic interpretation framework and the incorporation of prior evidence. We evaluate the methods using simulation and provide a comprehensive case study. Software implementation is available in the bayesDCA R package. Ultimately, the Bayesian DCA workflow may help clinicians and health policymakers adopt better-informed decisions.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"6042-6058"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639651/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142772448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensitivity Analysis for Effects of Multiple Exposures in the Presence of Unmeasured Confounding: Non-Gaussian and Time-to-Event Outcomes
Pub Date: 2024-12-30 | Epub Date: 2024-12-01 | DOI: 10.1002/sim.10293
Seungjae Lee, Boram Jeong, Donghwan Lee, Woojoo Lee
In epidemiological studies, evaluating the health impacts of multiple exposures is an important goal. To analyze the effects of multiple exposures on discrete or time-to-event health outcomes, researchers often employ generalized linear models, Cox proportional hazards models, and machine learning methods. However, observational studies are prone to unmeasured confounding factors, which can introduce substantial bias into the estimated effects of multiple exposures. To address this issue, we propose a novel outcome model-based sensitivity analysis method for non-Gaussian and time-to-event outcomes with multiple exposures. All the proposed sensitivity analysis problems are formulated as linear programming problems with quadratic and linear constraints, which can be solved efficiently. Analytic solutions are provided for some optimization problems, and a numerical study is performed to examine how the proposed sensitivity analysis behaves in finite samples. We illustrate the proposed method using two real data examples.
{"title":"Sensitivity Analysis for Effects of Multiple Exposures in the Presence of Unmeasured Confounding: Non-Gaussian and Time-to-Event Outcomes.","authors":"Seungjae Lee, Boram Jeong, Donghwan Lee, Woojoo Lee","doi":"10.1002/sim.10293","DOIUrl":"10.1002/sim.10293","url":null,"abstract":"<p><p>In epidemiological studies, evaluating the health impacts stemming from multiple exposures is one of the important goals. To analyze the effects of multiple exposures on discrete or time-to-event health outcomes, researchers often employ generalized linear models, Cox proportional hazards models, and machine learning methods. However, observational studies are prone to unmeasured confounding factors, which can introduce the potential for substantial bias in the multiple exposure effects. To address this issue, we propose a novel outcome model-based sensitivity analysis method for non-Gaussian and time-to-event outcomes with multiple exposures. All the proposed sensitivity analysis problems are formulated as linear programming problems with quadratic and linear constraints, which can be solved efficiently. Analytic solutions are provided for some optimization problems, and a numerical study is performed to examine how the proposed sensitivity analysis behaves in finite samples. We illustrate the proposed method using two real data examples.</p>","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":" ","pages":"5996-6025"},"PeriodicalIF":1.8,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142772469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}