A semiparametric Gaussian Mixture Model with spatial dependence and its application to whole-slide image clustering analysis.
Baichen Yu, Jin Liu, Hansheng Wang. Biometrics 81(4), 2025-10-08. doi:10.1093/biomtc/ujaf149

We develop here a semiparametric Gaussian Mixture Model (SGMM) for unsupervised learning that takes valuable spatial information into consideration. Specifically, we assume for each instance a random location. Then, conditional on this random location, we assume for the feature vector a standard Gaussian Mixture Model (GMM). The proposed SGMM allows the mixing probability to be nonparametrically related to the spatial location. Compared with a classical GMM, the SGMM is considerably more flexible and allows instances from the same class to be spatially clustered. To estimate the SGMM, novel EM algorithms are developed and rigorous asymptotic theories are established. Extensive numerical simulations are conducted to demonstrate the finite-sample performance of the method. For a real application, we apply the SGMM to the CAMELYON16 dataset of whole-slide images for breast cancer detection, where it demonstrates outstanding clustering performance.
SPLasso for high-dimensional additive hazards regression with covariate measurement error.
Jiarui Zhang, Hongsheng Liu, Xin Chen, Jinfeng Xu. Biometrics 81(4), 2025-10-08. doi:10.1093/biomtc/ujaf130

High-dimensional error-prone survival data are prevalent in biomedical studies, where numerous clinical or genetic variables are collected for risk assessment. The presence of measurement errors in covariates complicates parameter estimation and variable selection, leading to non-convex optimization challenges. We propose an error-in-variables additive hazards regression model for high-dimensional noisy survival data. By employing the nearest positive semi-definite matrix projection, we develop a fast Lasso approach (semi-definite projection Lasso, SPLasso) and its soft-thresholding variant (SPLasso-T), both with theoretical guarantees. Under mild assumptions, we establish model selection consistency, oracle inequalities, and limiting distributions for these methods. Simulation studies and two real-data applications demonstrate the methods' superior efficiency in handling high-dimensional data; in particular, they show remarkable performance in scenarios with missing values, highlighting their robustness and practical utility in complex biomedical settings.
Bayesian scalar-on-image regression with spatial interactions for modeling Alzheimer's disease.
Nilanjana Chakraborty, Qi Long, Suprateek Kundu. Biometrics 81(4), 2025-10-08. doi:10.1093/biomtc/ujaf144. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12613162/pdf/

There has been substantial progress in predictive modeling for cognitive impairment in neurodegenerative disorders such as Alzheimer's disease (AD) based on neuroimaging biomarkers. However, existing approaches typically do not incorporate heterogeneity that may arise from interactions between the spatially varying imaging features and supplementary demographic, clinical, and genetic risk factors in AD. Ignoring such heterogeneity may result in poor prediction and biased estimation. Building on the existing scalar-on-image regression framework, we address this issue by incorporating spatially varying interactions between the brain image and supplementary risk factors to model cognitive impairment in AD. The proposed Bayesian method tackles spatial interactions via a hierarchical representation for the functional regression coefficients depending on supplementary risk factors, embedded in a scalar-on-function framework involving a multi-resolution wavelet decomposition. To address the curse of dimensionality, we induce simultaneous sparsity and clustering via a spike-and-slab mixture prior, where the slab component is characterized by a latent class distribution. We develop an efficient Markov chain Monte Carlo algorithm for posterior computation. Extensive simulations and an application to the longitudinal Alzheimer's Disease Neuroimaging Initiative study illustrate significantly improved prediction of cognitive impairment in AD across multiple visits by our model in comparison with alternative approaches. The proposed approach also identifies key brain regions in AD that exhibit significant association with cognitive abilities, either directly or through interactions with risk factors.
High-dimensional multi-study multi-modality covariate-augmented generalized factor model.
Wei Liu, Qingzhi Zhong. Biometrics 81(3), 2025-07-03. doi:10.1093/biomtc/ujaf107

Latent factor models that integrate data from multiple sources/studies or modalities have garnered considerable attention across various disciplines. However, existing methods predominantly focus either on multi-study integration or multi-modality integration, rendering them insufficient for analyzing the diverse modalities measured across multiple studies. To address this limitation and cater to practical needs, we introduce a high-dimensional generalized factor model that seamlessly integrates multi-modality data from multiple studies while also accommodating additional covariates. We conduct a thorough investigation of the identifiability conditions to enhance the model's interpretability. To tackle the complexity of high-dimensional nonlinear integration caused by 4 large latent random matrices, we utilize a variational lower bound that approximates the observed log-likelihood through a variational posterior distribution. By profiling the variational parameters, we establish the asymptotic properties of the estimators of the model parameters using M-estimation theory. Furthermore, we devise a computationally efficient variational expectation-maximization (EM) algorithm to carry out the estimation, together with a criterion to determine the optimal numbers of study-shared and study-specific factors. Extensive simulation studies and a real-world application show that the proposed method significantly outperforms existing methods in terms of estimation accuracy and computational efficiency.
Model robust designs for dose-response models.
Belmiro P M Duarte, Anthony C Atkinson, Nuno M C Oliveira. Biometrics 81(3), 2025-07-03. doi:10.1093/biomtc/ujaf112

An optimal experimental design is a structured data collection plan aimed at maximizing the amount of information gathered. Determining an optimal experimental design, however, relies on the assumption that a predetermined model structure, relating the response and covariates, is known a priori. In practical scenarios, such as dose-response modeling, the form of the model representing the "true" relationship is frequently unknown, although there exists a finite set or pool of potential alternative models. Designing experiments based on a single model from this set may lead to inefficiency or inadequacy if the "true" model differs from that assumed when calculating the design. One approach to minimizing the impact of model uncertainty on the experimental plan is known as model robust design. In this context, we systematically address the challenge of finding approximate optimal model robust experimental designs. Our focus is on locally optimal designs, thereby allowing some of the models in the pool to be nonlinear. We present three semidefinite programming-based formulations, each aligned with one of the classes of model robustness criteria introduced by Läuter. These formulations exploit the semidefinite representability of the robustness criteria, leading to the representation of the robust problem as a semidefinite program. To ensure comparability of information measures across various models, we employ standardized designs. To illustrate the application of our approach, we consider a dose-response study in which, initially, seven models were postulated as potential candidates to describe the dose-response relationship.
Semi-supervised linear regression: enhancing efficiency and robustness in high dimensions.
Kai Chen, Yuqian Zhang. Biometrics 81(3), 2025-07-03. doi:10.1093/biomtc/ujaf113

In semi-supervised learning, the prevailing understanding suggests that observing additional unlabeled samples improves estimation accuracy for linear parameters only in the case of model misspecification. In this work, we challenge such a claim and show that additional unlabeled samples are beneficial in high-dimensional settings. Initially focusing on a dense scenario, we introduce robust semi-supervised estimators for the regression coefficient without relying on sparse structures in the population slope. Even when the true underlying model is linear, we show that leveraging information from large-scale unlabeled data helps reduce estimation bias, thereby improving both estimation accuracy and inference robustness. Moreover, we propose semi-supervised methods with further enhanced efficiency in scenarios with a sparse linear slope. The performance of the proposed methods is demonstrated through extensive numerical studies.
Frequency band analysis of nonstationary multivariate time series.
Raanju R Sundararajan, Scott A Bruce. Biometrics 81(3), 2025-07-03. doi:10.1093/biomtc/ujaf083. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12290460/pdf/

Information from frequency bands in biomedical time series provides useful summaries of the observed signal. Many existing methods consider summaries of the time series obtained over a few well-known, pre-defined frequency bands of interest. However, there is a dearth of data-driven methods for identifying frequency bands that optimally summarize frequency-domain information in the time series. A new method to identify partition points in the frequency space of a multivariate locally stationary time series is proposed. These partition points signify changes across frequencies in the time-varying behavior of the signal and provide frequency band summary measures that best preserve nonstationary dynamics of the observed series. An $L_2$-norm based discrepancy measure that finds differences in the time-varying spectral density matrix is constructed, and its asymptotic properties are derived. New nonparametric bootstrap tests are also provided to identify significant frequency partition points and to identify components and cross-components of the spectral matrix exhibiting changes over frequencies. Finite-sample performance of the proposed method is illustrated via simulations. The proposed method is used to develop optimal frequency band summary measures for characterizing time-varying behavior in resting-state electroencephalography time series, as well as identifying components and cross-components associated with each frequency partition point.
Causal machine learning for heterogeneous treatment effects in the presence of missing outcome data.
Matthew Pryce, Karla Diaz-Ordaz, Ruth H Keogh, Stijn Vansteelandt. Biometrics 81(3), 2025-07-03. doi:10.1093/biomtc/ujaf098

Missing outcome data can complicate the estimation of heterogeneous treatment effects, causing certain subgroups of the population to be poorly represented. In this work, we discuss this commonly overlooked problem and consider the impact that missing at random outcome data has on causal machine learning estimators for the conditional average treatment effect (CATE). We propose 2 de-biased machine learning estimators for the CATE, the mDR-learner and the mEP-learner, which address the issue of under-representation by integrating inverse probability of censoring weights into the DR-learner and EP-learner, respectively. We show that under reasonable conditions these estimators are oracle efficient, and we illustrate their favorable performance in simulated data settings, comparing them to existing CATE estimators, including estimators that use common missing data techniques. We present an example of their application using the GBSG2 trial, exploring treatment effect heterogeneity when comparing hormonal therapies to non-hormonal therapies among breast cancer patients post-surgery, and offer guidance on the decisions a practitioner must make when implementing these estimators.
Exploring the heterogeneity in recurrent episode lengths based on quantile regression.
Yi Liu, Guillermo E Umpierrez, Limin Peng. Biometrics 81(3), 2025-07-03. doi:10.1093/biomtc/ujaf122. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12448847/pdf/

Recurrent episode data frequently arise in chronic disease studies when an event of interest occurs repeatedly and each occurrence lasts for a random period of time. Understanding the heterogeneity in recurrent episode lengths can help guide dynamic and customized disease management. However, there has been relatively sparse attention to methods tailored to this end. Existing approaches either do not confer direct interpretation on episode lengths or involve restrictive or unrealistic distributional assumptions, such as exchangeability of within-individual episode lengths. In this work, we propose a modeling strategy that overcomes these limitations by adopting quantile regression and sensibly incorporating time-dependent covariates. Treating recurrent episodes as clustered data, we develop an estimation procedure that properly handles the special complications involved, including dependent censoring, dependent truncation, and informative cluster size. Our estimation procedure is computationally simple and yields estimators with desirable asymptotic properties. Our numerical studies demonstrate the advantages of the proposed method over naive adaptations of existing approaches.
Adjusted predictions for generalized estimating equations.
Francis K C Hui, Samuel Muller, Alan H Welsh. Biometrics 81(3), 2025-07-03. doi:10.1093/biomtc/ujaf090

Generalized estimating equations (GEEs) are a popular statistical method for longitudinal data analysis, requiring specification of the first 2 marginal moments of the response along with a working correlation matrix to capture temporal correlations within a cluster. When it comes to prediction at future/new time points using GEEs, a standard approach adopted by practitioners and software is to base it simply on the marginal mean model. In this article, we propose an alternative approach to prediction for independent cluster GEEs. By viewing the GEE as solving an iterative working linear model, we borrow ideas from universal kriging to construct an adjusted predictor that exploits working cross-correlations between the current and new observations within the same cluster. We establish theoretical conditions for the adjusted GEE predictor to outperform the standard GEE predictor. Simulations and an application to longitudinal data on the growth of Sitka spruces demonstrate that, even when we misspecify the working correlation structure, adjusted GEE predictors can achieve better performance relative to standard GEE predictors, the so-called "oracle" GEE predictor using all time points, and potentially even cluster-specific predictions from a generalized linear mixed model.