Pub Date: 2024-12-13 | DOI: 10.1016/j.csda.2024.108108
Giulia Ferrandi, Michiel E. Hochstenbach, M. Rosário Oliveira
A subspace method is introduced to solve large-scale trace ratio problems. The approach is matrix-free, requiring only the action of the two matrices involved in the trace ratio. At each iteration, a smaller trace ratio problem is solved in the search subspace. Additionally, the algorithm is endowed with a restarting strategy that ensures the monotonicity of the trace ratio value throughout the iterations. The behavior of the approximate solution is investigated from a theoretical viewpoint, extending existing results on Ritz values and vectors as the angle between the search subspace and the exact solution approaches zero. Numerical experiments in multigroup classification show that this new subspace method tends to be more efficient than iterative approaches relying on (partial) eigenvalue decompositions at each step.
Title: A subspace method for large-scale trace ratio problems. Computational Statistics & Data Analysis, vol. 205, Article 108108.
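For context, the trace ratio problem seeks an orthonormal matrix V maximizing tr(V'AV)/tr(V'BV). The sketch below is not the paper's subspace method; it is a minimal dense baseline of the classic fixed-point iteration that takes a full eigendecomposition at each step, i.e. the kind of approach the abstract compares against. Matrix sizes, names, and tolerances are illustrative.

```python
import numpy as np

def trace_ratio(A, B, p, iters=100, tol=1e-10):
    # Maximize tr(V'AV)/tr(V'BV) over orthonormal n x p matrices V via the
    # classic iteration: take the p leading eigenvectors of A - rho*B,
    # then update rho to the trace ratio at the new V.
    rho = 0.0
    for _ in range(iters):
        _, U = np.linalg.eigh(A - rho * B)
        V = U[:, -p:]  # leading eigenvectors
        rho_new = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(rho_new - rho) < tol:
            rho = rho_new
            break
        rho = rho_new
    return rho, V

# Toy symmetric positive (semi)definite matrices for illustration.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); A = M @ M.T
N = rng.standard_normal((6, 6)); B = N @ N.T + 6 * np.eye(6)
rho, V = trace_ratio(A, B, p=2)
```

Each step here costs a full n x n eigendecomposition, which is exactly what becomes prohibitive at large scale and motivates the matrix-free subspace approach of the paper.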
Pub Date: 2024-12-02 | DOI: 10.1016/j.csda.2024.108105
Xun Zhao, Lu Tang, Weijia Zhang, Ling Zhou
Increasing interest in survey research focuses on inferring grouped association patterns between risk factors and questionnaire responses, with grouping shared across multiple response variables that jointly capture one's underlying status. Aiming to identify important risk factors that are simultaneously associated with the health and well-being of senior adults, a study based on the China Health and Retirement Survey (CHRS) is conducted. Previous studies have identified several known risk factors, yet heterogeneity in the outcome-risk factor association exists, prompting the use of subgroup analysis. A subgroup analysis procedure is devised to model a multiple mixed-type outcome that describes one's general health and well-being, while tackling additional challenges, including collinearity and weak signals within block-structured covariates. Computationally, an efficient algorithm that alternately updates a set of estimating equations and likelihood functions is proposed. Theoretical results establish the asymptotic consistency and normality of the proposed estimators. The validity of the proposed method is corroborated by simulation experiments. An application of the proposed method to the CHRS data identifies caring for grandchildren as a new risk factor for poor physical and mental health.
Title: Subgroup learning for multiple mixed-type outcomes with block-structured covariates. Computational Statistics & Data Analysis, vol. 204, Article 108105.
Pub Date: 2024-11-28 | DOI: 10.1016/j.csda.2024.108104
Simon M.S. Lo, Shuolin Shi, Ralf A. Wilke
A nested Archimedean copula model for dependent states and spells is introduced and the link to a classical survival model with frailties is established. The model relaxes an important restriction of classical survival models, as the distributions of unobservable heterogeneities are permitted to depend on the observable covariates. Its modular structure has practical advantages, as the different components can be separately specified and estimation can be done sequentially or separately. This makes the model versatile and adaptable in empirical work. An application to labour market transitions with linked administrative data supports the need for a flexible specification of the dependence structure and of the models for the marginal survival functions. The conventional Markov chain model is shown to give sizeably biased results in the application.
Title: A copula duration model with dependent states and spells. Computational Statistics & Data Analysis, vol. 204, Article 108104.
Pub Date: 2024-11-26 | DOI: 10.1016/j.csda.2024.108106
Na Young Yoo, Hyunju Lee, Ji Hwan Cha
Motivated by real data sets to be analyzed in this paper, we develop a new general class of bivariate distributions that can model the effect of the so-called 'load-sharing configuration' in a system with two components based on the reversed hazard rate. Under such a load-sharing configuration, after the failure of one component, the surviving component must shoulder extra load, which eventually results in its failure earlier than would be expected under independence. In the developed class of bivariate distributions, it is assumed that the residual lifetime of the remaining component is shortened according to the reversed hazard rate order. We derive the joint survival function, the joint probability density function and the marginal distributions. We discuss a bivariate ageing property of the developed class of distributions. Some specific families of bivariate distributions that can be usefully applied in practice are obtained. These families of bivariate distributions are applied to some real data sets to illustrate their usefulness.
Title: Development of a new general class of bivariate distributions based on reversed hazard rate order. Computational Statistics & Data Analysis, vol. 204, Article 108106.
Pub Date: 2024-11-22 | DOI: 10.1016/j.csda.2024.108097
Hyungjin Kim, Chuljin Park, Heeyoung Kim
We propose a novel framework for efficient parameter estimation in simulation models, formulated as an optimization problem that minimizes the discrepancy between physical system observations and simulation model outputs. Our framework, called multi-task optimization with Bayesian neural network surrogates (MOBS), is designed for scenarios that require the simultaneous estimation of multiple sets of parameters, each set corresponding to a distinct set of observations, while also enabling fast parameter estimation essential for real-time process monitoring and control. MOBS integrates a heuristic search algorithm, utilizing a single-layer Bayesian neural network surrogate model trained on an initial simulation dataset. This surrogate model is shared across multiple tasks to select and evaluate candidate parameter values, facilitating efficient multi-task optimization. We provide a closed-form parameter screening rule and demonstrate that the expected number of simulation runs converges to a user-specified threshold. Our framework was applied to a numerical example and a semiconductor manufacturing case study, significantly reducing computational costs while achieving accurate parameter estimation.
Title: Multi-task optimization with Bayesian neural network surrogates for parameter estimation of a simulation model. Computational Statistics & Data Analysis, vol. 204, Article 108097.
Pub Date: 2024-11-17 | DOI: 10.1016/j.csda.2024.108089
Jingyan Huang, Hock Peng Chan
We propose here a sparsity likelihood stopping rule to detect change-points when there are multiple data streams. It is optimal in the sense of minimizing, asymptotically, the detection delay when change-points are present in only a small fraction of the data streams. This optimality holds at all levels of change-point sparsity. A key contribution of this paper is that we show optimality when there is extreme sparsity. Extreme sparsity refers to the number of data streams with change-points increasing very slowly as the number of data streams goes to infinity. The theoretical results are backed by a numerical study that shows the sparsity likelihood stopping rule performing well at all levels of sparsity. Applications of the stopping rule to non-normal models are also illustrated.
Title: Optimal sequential detection by sparsity likelihood. Computational Statistics & Data Analysis, vol. 203, Article 108089.
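To illustrate the multi-stream sequential detection setting (though not the paper's sparsity likelihood statistic itself), here is a minimal sketch of a generic multi-stream CUSUM stopping rule with sum aggregation across streams. The drift, threshold, and data layout are illustrative assumptions.

```python
import numpy as np

def multistream_cusum(X, threshold, drift=0.5):
    # X: (streams x time) array of observations with pre-change mean 0.
    # Run a one-sided CUSUM recursion per stream and stop the first time
    # the sum of the per-stream statistics exceeds the threshold.
    n_streams, n_time = X.shape
    S = np.zeros(n_streams)
    for t in range(n_time):
        S = np.maximum(0.0, S + X[:, t] - drift)
        if S.sum() > threshold:  # aggregate across streams
            return t
    return None  # no detection within the horizon

# Deterministic toy example: a mean shift at t = 50 in 2 of 20 streams.
X = np.zeros((20, 100))
X[:2, 50:] = 1.0
t_detect = multistream_cusum(X, threshold=10.0)  # detects at t = 60
```

Sum aggregation is known to lose power when only a tiny fraction of streams change, which is precisely the sparse regime the paper's likelihood-based rule is designed to handle optimally.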
The stochastic FitzHugh-Nagumo (FHN) model is a two-dimensional nonlinear stochastic differential equation with additive degenerate noise, whose first component, the only one observed, describes the membrane voltage evolution of a single neuron. Due to its low dimensionality, its analytical and numerical tractability, and its neuronal interpretation, it has been used as a case study to test the performance of different statistical methods in estimating the underlying model parameters. Existing methods, however, often require complete observations, non-degeneracy of the noise or a complex architecture (e.g., to estimate the transition density of the process, "recovering" the unobserved second component), and they may not (satisfactorily) estimate all model parameters simultaneously. Moreover, these studies lack real data applications for the stochastic FHN model. The proposed method tackles all challenges (non-globally Lipschitz drift, non-explicit solution, lack of available transition density, degeneracy of the noise and partial observations). It is an intuitive and easy-to-implement sequential Monte Carlo approximate Bayesian computation algorithm, which relies on a recent computationally efficient and structure-preserving numerical splitting scheme for synthetic data generation and on summary statistics exploiting the structural properties of the process. All model parameters are successfully estimated from simulated data and, more remarkably, real action potential data of rats.
The presented novel real-data fit may broaden the scope and credibility of this classic and widely used neuronal model.
Title: Inference for the stochastic FitzHugh-Nagumo model from real action potential data via approximate Bayesian computation. Authors: Adeline Samson, Massimiliano Tamborrino, Irene Tubikanec. Pub Date: 2024-11-15 | DOI: 10.1016/j.csda.2024.108095. Computational Statistics & Data Analysis, vol. 204, Article 108095.
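A common hypoelliptic parametrization of the model described above has noise entering only the unobserved second coordinate. The paper relies on a structure-preserving splitting scheme; the sketch below instead uses plain Euler-Maruyama purely to illustrate the degenerate-noise structure and the fact that only the first coordinate is observed. Parameter values and the function interface are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_fhn(eps, gamma, beta, sigma, x0=0.0, y0=0.0,
                 dt=1e-3, n_steps=20000, seed=0):
    # Euler-Maruyama simulation of a hypoelliptic stochastic FHN model:
    #   dX = (1/eps) (X - X^3 - Y) dt            (observed voltage)
    #   dY = (gamma X - Y + beta) dt + sigma dW  (unobserved, noisy)
    # The Brownian increment enters only Y: additive degenerate noise.
    rng = np.random.default_rng(seed)
    x, y = x0, y0
    path = np.empty(n_steps)
    for i in range(n_steps):
        dW = rng.standard_normal() * np.sqrt(dt)
        x, y = (x + dt / eps * (x - x**3 - y),
                y + dt * (gamma * x - y + beta) + sigma * dW)
        path[i] = x  # only the first component is recorded
    return path

path = simulate_fhn(eps=0.1, gamma=1.5, beta=0.8, sigma=0.3)
```

In an ABC scheme of the kind the abstract describes, such forward simulations generate synthetic voltage paths that are compared to the data through summary statistics.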
Pub Date: 2024-11-15 | DOI: 10.1016/j.csda.2024.108096
Steven De Keyser, Irène Gijbels
The aim is to generalize 2-Wasserstein dependence coefficients to measure dependence between a finite number of random vectors. This generalization includes theoretical properties and, in particular, focuses on an interpretation of maximal dependence and an asymptotic normality result for a proposed semi-parametric estimator under a Gaussian copula assumption. In addition, general axioms for dependence measures between multiple random vectors, plausible normalizations, and various examples are considered. Plug-in estimators based on penalized empirical covariance matrices are then studied in order to deal with high-dimensionality issues and to take possible marginal independencies into account by inducing (block) sparsity. The latter ideas are investigated via a simulation study, considering other dependence coefficients as well. The use of the developed methods is illustrated in two real data applications.
Title: High-dimensional copula-based Wasserstein dependence. Computational Statistics & Data Analysis, vol. 204, Article 108096.
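A building block for such coefficients is the closed-form squared 2-Wasserstein distance between Gaussian laws, which is what makes the Gaussian copula assumption computationally convenient. The sketch below implements only that closed form (assuming SciPy is available); the paper's normalized dependence coefficients, which compare a joint law against its independence counterpart, are not reproduced here.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    # Squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    #   |m1 - m2|^2 + tr( S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2} )
    rS2 = sqrtm(S2)
    cross = sqrtm(rS2 @ S1 @ rS2)
    return float(np.sum((m1 - m2) ** 2)
                 + np.trace(S1 + S2 - 2.0 * np.real(cross)))

# 1-D check: between N(0,1) and N(0,4) the squared distance is (2-1)^2 = 1.
w2 = w2_gaussian(np.zeros(1), np.array([[1.0]]), np.zeros(1), np.array([[4.0]]))
```

A dependence coefficient of the kind described in the abstract would plug the joint covariance and the block-diagonal (independence) covariance of the random vectors into such a distance and normalize the result.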
Pub Date: 2024-11-12 | DOI: 10.1016/j.csda.2024.108094
Tui H. Nolan, Sylvia Richardson, Hélène Ruffieux
The analysis of multivariate functional curves has the potential to yield important scientific discoveries in domains such as healthcare, medicine, economics and social sciences. However, it is common for real-world settings to present longitudinal data that are both irregularly and sparsely observed, which introduces important challenges for the current functional data methodology. A Bayesian hierarchical framework for multivariate functional principal component analysis is proposed, which accommodates the intricacies of such irregular observation settings by flexibly pooling information across subjects and correlated curves. The model represents common latent dynamics via shared functional principal component scores, thereby effectively borrowing strength across curves while circumventing the computationally challenging task of estimating covariance matrices. These scores also provide a parsimonious representation of the major modes of joint variation of the curves and constitute interpretable scalar summaries that can be employed in follow-up analyses. Estimation is conducted using variational inference, ensuring that accurate posterior approximation and robust uncertainty quantification are achieved. The algorithm also introduces a novel variational message passing fragment for multivariate functional principal component Gaussian likelihood that enables modularity and reuse across models. Detailed simulations assess the effectiveness of the approach in sharing information from sparse and irregularly sampled multivariate curves. The methodology is also exploited to estimate the molecular disease courses of individual patients with SARS-CoV-2 infection and characterise patient heterogeneity in recovery outcomes; this study reveals key coordinated dynamics across the immune, inflammatory and metabolic systems, which are associated with long-COVID symptoms up to one year post disease onset. The approach is implemented in the R package bayesFPCA.
Title: Efficient Bayesian functional principal component analysis of irregularly-observed multivariate curves. Computational Statistics & Data Analysis, vol. 203, Article 108094. R package: bayesFPCA.
Pub Date: 2024-11-12 | DOI: 10.1016/j.csda.2024.108093
Tong Zou, Hal S. Stern
Directional data require specialized models because of the non-Euclidean nature of their domain. When a directional variable is observed jointly with linear variables, modeling their dependence adds an additional layer of complexity. A Bayesian nonparametric approach is introduced to analyze directional-linear data. First, the projected normal distribution is extended to model the joint distribution of linear variables and a directional variable of arbitrary dimension, projected from a higher-dimensional augmented multivariate normal distribution. The new distribution, called the semi-projected normal (SPN) distribution, can be used as the mixture distribution in a Dirichlet process model to obtain a more flexible class of models for directional-linear data. Then, a conditional inverse-Wishart distribution is proposed as part of the prior distribution to address an identifiability issue inherited from the projected normal and to preserve conjugacy with the SPN. The SPN mixture model shows superior performance in clustering on synthetic data compared to the semi-wrapped Gaussian model. The experiments show the ability of the SPN mixture model to characterize bloodstain patterns. A hierarchical Dirichlet process model with the SPN distribution is built to estimate the likelihood of bloodstain patterns under a posited causal mechanism for use in a likelihood ratio approach to the analysis of forensic bloodstain pattern evidence.
Title: A Dirichlet process model for directional-linear data with application to bloodstain pattern analysis. Computational Statistics & Data Analysis, vol. 204, Article 108093.
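The projection construction behind the SPN can be illustrated by sampling: draw from an augmented multivariate normal, normalize the directional block onto the unit sphere, and keep the remaining coordinates linear. This is a hedged sketch of the construction implied by the abstract; the function name, interface, and parameter values are hypothetical.

```python
import numpy as np

def sample_spn(mu, Sigma, d_dir, n, seed=0):
    # Draw n samples from an augmented multivariate normal N(mu, Sigma),
    # then project the first d_dir coordinates onto the unit sphere
    # (directional part) while leaving the rest linear.
    rng = np.random.default_rng(seed)
    Z = rng.multivariate_normal(mu, Sigma, size=n)
    U = Z[:, :d_dir]
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # directional part
    X = Z[:, d_dir:]                                  # linear part
    return U, X

# Toy draw: 2-D direction plus 2 linear variables.
U, Xlin = sample_spn(np.zeros(4), np.eye(4), d_dir=2, n=100)
```

The normalization discards the length of the directional block, which is the source of the identifiability issue the abstract addresses with a conditional inverse-Wishart prior.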