Title: "A new approach to nonparametric estimation of multivariate spectral density function using basis expansion"
Pub Date: 2024-01-20 | DOI: 10.1007/s00180-023-01451-4
Shirin Nezampour, Alireza Nematollahi, Robert T. Krafty, Mehdi Maadooliat
This paper develops a nonparametric method for estimating the spectral density of multivariate stationary time series using basis expansion. A likelihood-based approach is used to fit the model through the minimization of a penalized Whittle negative log-likelihood, and a Newton-type algorithm is developed for the computation. In this method, we smooth the Cholesky factors of the multivariate spectral density matrix in such a way that the reconstructed estimate based on the smoothed Cholesky components is consistent and positive definite. In a simulation study, we illustrate the proposed method and compare it with competing approaches. Finally, we apply our approach to two real-world problems: the analysis of electroencephalogram (EEG) signals and the El Niño cycle.
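The key device in this abstract, smoothing on the Cholesky scale so that the reconstructed spectral matrix is automatically positive semidefinite, can be illustrated without the penalized Whittle fit. Below is a minimal NumPy sketch, not the paper's estimator: it pre-averages periodogram matrices until they are positive definite, kernel-smooths the Cholesky factors across frequency in place of the basis expansion, and reconstructs the estimate. Function names and bandwidths are illustrative assumptions.

```python
import numpy as np

def spectral_estimate_cholesky(X, pre_avg=None, smooth_w=11):
    """Kernel-smoothed Cholesky spectral estimate (illustrative stand-in)."""
    n, p = X.shape
    if pre_avg is None:
        pre_avg = 2 * p + 3                  # averaging width must exceed p for positive definiteness
    d = np.fft.fft(X - X.mean(axis=0), axis=0)
    I = np.einsum('kj,kl->kjl', d, d.conj()) / (2 * np.pi * n)   # periodogram matrices
    I = I[1:n // 2]                                              # positive Fourier frequencies

    def smooth(a, w):                        # centered moving average along frequency
        kern = np.ones(w) / w
        flat = a.reshape(a.shape[0], -1)
        out = np.stack([np.convolve(flat[:, j], kern, mode='same')
                        for j in range(flat.shape[1])], axis=1)
        return out.reshape(a.shape)

    I_bar = smooth(I, pre_avg)               # pre-averaged, generically positive definite
    I_bar += 1e-10 * np.trace(I_bar, axis1=1, axis2=2).real[:, None, None] * np.eye(p)
    L = np.linalg.cholesky(I_bar)            # lower-triangular factor at each frequency
    L_sm = smooth(L, smooth_w)               # smooth the factors, not the matrices
    return L_sm @ np.conj(np.transpose(L_sm, (0, 2, 1)))

# white-noise sanity check: the true spectrum is the constant matrix I_p / (2*pi)
rng = np.random.default_rng(0)
f_hat = spectral_estimate_cholesky(rng.standard_normal((512, 2)))
```

Because the reconstruction has the form $\hat{L}\hat{L}^*$, positive semidefiniteness holds by construction regardless of how the factors are smoothed, which is the point the abstract makes.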
{"title":"A new approach to nonparametric estimation of multivariate spectral density function using basis expansion","authors":"Shirin Nezampour, Alireza Nematollahi, Robert T. Krafty, Mehdi Maadooliat","doi":"10.1007/s00180-023-01451-4","DOIUrl":"https://doi.org/10.1007/s00180-023-01451-4","url":null,"abstract":"<p>This paper develops a nonparametric method for estimating the spectral density of multivariate stationary time series using basis expansion. A likelihood-based approach is used to fit the model through the minimization of a penalized Whittle negative log-likelihood. Then, a Newton-type algorithm is developed for the computation. In this method, we smooth the Cholesky factors of the multivariate spectral density matrix in a way that the reconstructed estimate based on the smoothed Cholesky components is consistent and positive-definite. In a simulation study, we have illustrated and compared our proposed method with other competitive approaches. Finally, we apply our approach to two real-world problems, Electroencephalogram signals analysis, <span>(El Nitilde{n}o)</span> Cycle.\u0000</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"13 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139508567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "Censored broken adaptive ridge regression in high-dimension"
Pub Date: 2024-01-17 | DOI: 10.1007/s00180-023-01446-1
Jeongjin Lee, Taehwa Choi, Sangbum Choi
Broken adaptive ridge (BAR) is a penalized regression method that performs variable selection via a computationally scalable surrogate to $L_0$ regularization. The BAR regression has many appealing features: it converges to selection with $L_0$ penalties as a result of reweighting $L_2$ penalties, and it satisfies the oracle property with a grouping effect for highly correlated covariates. In this paper, we investigate the BAR procedure for variable selection in a semiparametric accelerated failure time model with complex high-dimensional censored data. Coupled with Buckley-James-type responses, BAR-based variable selection can be performed when event times are censored in complex ways, such as right-censored, left-censored, or double-censored. Our approach utilizes a two-stage cyclic coordinate descent algorithm to minimize the objective function by iteratively estimating the pseudo survival response and the regression coefficients along the coordinate directions. Under some weak regularity conditions, we establish both the oracle property and the grouping effect of the proposed BAR estimator. Numerical studies are conducted to investigate the finite-sample performance of the proposed algorithm, and an application to real data is provided as an example.
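For intuition, the BAR update itself is simple: each step solves a ridge problem whose $L_2$ penalty is reweighted by the squared coefficients from the previous step. The sketch below shows this iteration for the plain uncensored linear model; in the paper's censored AFT setting one would refresh a Buckley-James-type pseudo-response before each reweighted update. Defaults and thresholds are illustrative.

```python
import numpy as np

def broken_adaptive_ridge(X, y, lam=1.0, xi=1.0, n_iter=100, tol=1e-8):
    """BAR on the uncensored linear model: iteratively reweighted L2,
    whose fixed point mimics L0-penalized selection."""
    p = X.shape[1]
    beta = np.linalg.solve(X.T @ X + xi * np.eye(p), X.T @ y)   # initial ridge fit
    for _ in range(n_iter):
        w = lam / np.maximum(beta ** 2, 1e-12)                  # adaptive weights lam / beta_j^2
        beta_new = np.linalg.solve(X.T @ X + np.diag(w), X.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    beta[np.abs(beta) < 1e-5] = 0.0                             # zero out numerically dead coefficients
    return beta
```

Coefficients driven toward zero acquire ever larger penalty weights and collapse, while large coefficients are penalized less and less, which is how the reweighted $L_2$ scheme approximates $L_0$ selection.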
{"title":"Censored broken adaptive ridge regression in high-dimension","authors":"Jeongjin Lee, Taehwa Choi, Sangbum Choi","doi":"10.1007/s00180-023-01446-1","DOIUrl":"https://doi.org/10.1007/s00180-023-01446-1","url":null,"abstract":"<p>Broken adaptive ridge (BAR) is a penalized regression method that performs variable selection via a computationally scalable surrogate to <span>(L_0)</span> regularization. The BAR regression has many appealing features; it converges to selection with <span>(L_0)</span> penalties as a result of reweighting <span>(L_2)</span> penalties, and satisfies the oracle property with grouping effect for highly correlated covariates. In this paper, we investigate the BAR procedure for variable selection in a semiparametric accelerated failure time model with complex high-dimensional censored data. Coupled with Buckley-James-type responses, BAR-based variable selection procedures can be performed when event times are censored in complex ways, such as right-censored, left-censored, or double-censored. Our approach utilizes a two-stage cyclic coordinate descent algorithm to minimize the objective function by iteratively estimating the pseudo survival response and regression coefficients along the direction of coordinates. Under some weak regularity conditions, we establish both the oracle property and the grouping effect of the proposed BAR estimator. Numerical studies are conducted to investigate the finite-sample performance of the proposed algorithm and an application to real data is provided as a data example.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"262 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139482136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "High-dimensional penalized Bernstein support vector classifier"
Pub Date: 2024-01-16 | DOI: 10.1007/s00180-023-01448-z
Rachid Kharoubi, Abdallah Mkhadri, Karim Oualkacha
The support vector machine (SVM) is a powerful classifier for binary classification. However, the nondifferentiability of the SVM hinge loss can lead to computational difficulties in high-dimensional settings. To overcome this problem, we rely on Bernstein polynomials and propose a new smoothed version of the SVM hinge loss called the Bernstein support vector machine (BernSVC), which is suitable for the high-dimensional regime. As the BernSVC objective loss is twice differentiable everywhere, we propose two efficient algorithms for computing the solution of the penalized BernSVC: the first is based on coordinate descent with the majorization-minimization principle, and the second is an iteratively reweighted least squares-type algorithm. Under standard assumptions, we derive a cone condition and a restricted strong convexity to establish an upper bound for the weighted lasso BernSVC estimator. Using a local linear approximation, we extend the latter result to the penalized BernSVC with the nonconvex penalties SCAD and MCP. Our bound holds with high probability and achieves the so-called fast rate under mild conditions on the design matrix. Simulation studies illustrate the prediction accuracy of BernSVC relative to its competitors and compare the performance of the two algorithms in terms of computational timing and error estimation. The use of the proposed method is illustrated through the analysis of three large-scale real data examples.
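To see what a Bernstein smoothing of the hinge can look like, the sketch below approximates $h(t)=\max(0,1-t)$ by a degree-$m$ Bernstein polynomial on an interval; being a polynomial, the surrogate is twice differentiable there, which is what enables MM-based coordinate descent or IRLS-type solvers. The interval, degree, and treatment outside the interval are illustrative assumptions; the paper's BernSVC construction fixes its own form.

```python
import numpy as np
from scipy.special import comb

def bernstein_hinge(t, m=20, lo=-2.0, hi=3.0):
    """Degree-m Bernstein polynomial approximation of the hinge on [lo, hi]
    (a smooth surrogate; outside the interval the hinge is already smooth)."""
    t = np.asarray(t, dtype=float)
    u = np.clip((t - lo) / (hi - lo), 0.0, 1.0)
    k = np.arange(m + 1)
    nodes = np.maximum(0.0, 1.0 - (lo + (hi - lo) * k / m))       # hinge at the Bernstein nodes
    basis = comb(m, k) * u[:, None] ** k * (1.0 - u[:, None]) ** (m - k)
    smooth = basis @ nodes
    return np.where((t >= lo) & (t <= hi), smooth, np.maximum(0.0, 1.0 - t))

margins = np.linspace(-3, 4, 8)                                    # values of y_i * x_i' beta
print(np.abs(bernstein_hinge(margins) - np.maximum(0, 1 - margins)).max())  # small approximation error
```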
{"title":"High-dimensional penalized Bernstein support vector classifier","authors":"Rachid Kharoubi, Abdallah Mkhadri, Karim Oualkacha","doi":"10.1007/s00180-023-01448-z","DOIUrl":"https://doi.org/10.1007/s00180-023-01448-z","url":null,"abstract":"<p>The support vector machine (SVM) is a powerful classifier used for binary classification to improve the prediction accuracy. However, the nondifferentiability of the SVM hinge loss function can lead to computational difficulties in high-dimensional settings. To overcome this problem, we rely on the Bernstein polynomial and propose a new smoothed version of the SVM hinge loss called the Bernstein support vector machine (BernSVC). This extension is suitable for the high dimension regime. As the BernSVC objective loss function is twice differentiable everywhere, we propose two efficient algorithms for computing the solution of the penalized BernSVC. The first algorithm is based on coordinate descent with the maximization-majorization principle and the second algorithm is the iterative reweighted least squares-type algorithm. Under standard assumptions, we derive a cone condition and a restricted strong convexity to establish an upper bound for the weighted lasso BernSVC estimator. By using a local linear approximation, we extend the latter result to the penalized BernSVC with nonconvex penalties SCAD and MCP. Our bound holds with high probability and achieves the so-called fast rate under mild conditions on the design matrix. Simulation studies are considered to illustrate the prediction accuracy of BernSVC relative to its competitors and also to compare the performance of the two algorithms in terms of computational timing and error estimation. The use of the proposed method is illustrated through analysis of three large-scale real data examples.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"262 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139482088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "Random forest based quantile-oriented sensitivity analysis indices estimation"
Pub Date: 2024-01-12 | DOI: 10.1007/s00180-023-01450-5
Kévin Elie-Dit-Cosaque, Véronique Maume-Deschamps
We propose a random forest based estimation procedure for Quantile-Oriented Sensitivity Analysis (QOSA). In order to be efficient, a cross-validation step on the leaf size of the trees is required. Our full estimation procedure is tested on both simulated data and a real dataset. Our estimators use either bootstrap samples or the original sample in the estimation, and they are based either on a quantile plug-in procedure (the R-estimators) or on a direct minimization (the Q-estimators). This leads to eight different estimators, which are compared in simulations. From these simulations, the estimation method based on direct minimization appears to perform better than the one plugging in the quantile. This is a significant result, because the direct-minimization method requires only one sample and could therefore be preferred.
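As a reference point, a QOSA index at level $\alpha$ compares the pinball loss of the marginal quantile with that of the conditional quantile given the input. The sketch below uses a sliding nearest-neighbour window as a crude stand-in for random-forest leaf neighbourhoods, in the plug-in ("R-estimator") flavour; the window size and level are illustrative. In the paper, the two estimator families differ in whether leaf quantiles are plugged in or the pinball loss is minimized directly with forest weights.

```python
import numpy as np

def pinball(y, q, alpha):
    """Pinball (check) loss psi_alpha(y, q)."""
    d = y - q
    return np.maximum(alpha * d, (alpha - 1.0) * d)

def qosa_index(x, y, alpha=0.7, k=100):
    """S_alpha = 1 - E[psi(Y, q_alpha(Y|X))] / E[psi(Y, q_alpha(Y))],
    with conditional quantiles from a k-nearest-neighbour window in x."""
    denom = pinball(y, np.quantile(y, alpha), alpha).mean()
    order = np.argsort(x)
    ys = y[order]
    n = len(y)
    loss = 0.0
    for i in range(n):
        lo = max(0, min(i - k // 2, n - k))          # centered window, clipped at the ends
        q_cond = np.quantile(ys[lo:lo + k], alpha)   # plug-in conditional quantile
        loss += pinball(ys[i], q_cond, alpha)
    return 1.0 - (loss / n) / denom

rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal((2, 4000))
y = 2.0 * x1 + 0.5 * rng.standard_normal(4000)       # y depends on x1 only
print(qosa_index(x1, y), qosa_index(x2, y))          # first index large, second near 0
```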
{"title":"Random forest based quantile-oriented sensitivity analysis indices estimation","authors":"Kévin Elie-Dit-Cosaque, Véronique Maume-Deschamps","doi":"10.1007/s00180-023-01450-5","DOIUrl":"https://doi.org/10.1007/s00180-023-01450-5","url":null,"abstract":"<p>We propose a random forest based estimation procedure for Quantile-Oriented Sensitivity Analysis—QOSA. In order to be efficient, a cross-validation step on the leaf size of trees is required. Our full estimation procedure is tested on both simulated data and a real dataset. Our estimators use either the bootstrap samples or the original sample in the estimation. Also, they are either based on a quantile plug-in procedure (the <i>R</i>-estimators) or on a direct minimization (the <i>Q</i>-estimators). This leads to 8 different estimators which are compared on simulations. From these simulations, it seems that the estimation method based on a direct minimization is better than the one plugging the quantile. This is a significant result because the method with direct minimization requires only one sample and could therefore be preferred.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"54 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139462061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "Structured dictionary learning of rating migration matrices for credit risk modeling"
Pub Date: 2024-01-10 | DOI: 10.1007/s00180-023-01449-y
Rating migration matrices are central to assessing credit risk, so modeling and predicting these matrices is of great importance for risk managers in any financial institution. As an alternative to the usual parametric modeling approaches, we propose a new structured dictionary learning model with autoregressive regularization that meets key expectations and constraints: a small amount of data, the fast evolution of these matrices in time, and economic interpretability of the calibrated model. To show the model's applicability, we present numerical tests with both synthetic and real data and a comparison with the widely used parametric Gaussian copula model: our new approach based on dictionary learning significantly outperforms the Gaussian copula model.
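A bare-bones version of dictionary learning with an autoregressive regularizer can be written as joint gradient descent on $\|Y - DW\|_F^2 + \mu \sum_t \|w_t - \rho\, w_{t-1}\|^2$, where each column of $Y$ is one vectorized migration matrix. The sketch below omits the structural constraints (such as row-stochastic migration rows) and interpretability devices the paper imposes; all hyperparameters and the synthetic data are illustrative.

```python
import numpy as np

def ar_dictionary_learning(Y, n_atoms=3, rho=0.9, mu=1.0, n_iter=2000, lr=1e-3):
    """Minimize ||Y - D W||_F^2 + mu * sum_t ||w_t - rho * w_{t-1}||^2
    by joint gradient descent on the dictionary D and the codes W."""
    rng = np.random.default_rng(0)
    d, T = Y.shape
    D = rng.standard_normal((d, n_atoms)) * 0.1
    W = rng.standard_normal((n_atoms, T)) * 0.1
    for _ in range(n_iter):
        R = D @ W - Y                           # reconstruction residual
        gD = 2.0 * R @ W.T
        gW = 2.0 * D.T @ R
        E = W[:, 1:] - rho * W[:, :-1]          # AR(1) innovations of the codes
        gW[:, 1:] += 2.0 * mu * E
        gW[:, :-1] -= 2.0 * mu * rho * E
        D -= lr * gD
        W -= lr * gW
    return D, W

rng = np.random.default_rng(6)
T, d = 40, 9                                    # e.g. 3x3 migration matrices, vectorized
codes = np.cumsum(rng.standard_normal((3, T)) * 0.1, axis=1)   # slowly varying truth
Y = rng.standard_normal((d, 3)) @ codes + 0.01 * rng.standard_normal((d, T))
D, W = ar_dictionary_learning(Y)
```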
{"title":"Structured dictionary learning of rating migration matrices for credit risk modeling","authors":"","doi":"10.1007/s00180-023-01449-y","DOIUrl":"https://doi.org/10.1007/s00180-023-01449-y","url":null,"abstract":"<h3>Abstract</h3> <p>Rating migration matrix is a crux to assess credit risks. Modeling and predicting these matrices are then an issue of great importance for risk managers in any financial institution. As a challenger to usual parametric modeling approaches, we propose a new structured dictionary learning model with auto-regressive regularization that is able to meet key expectations and constraints: small amount of data, fast evolution in time of these matrices, economic interpretability of the calibrated model. To show the model applicability, we present a numerical test with both synthetic and real data and a comparison study with the widely used parametric Gaussian Copula model: it turns out that our new approach based on dictionary learning significantly outperforms the Gaussian Copula model.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"44 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139421947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "A latent variable approach for modeling recall-based time-to-event data with Weibull distribution"
Pub Date: 2024-01-03 | DOI: 10.1007/s00180-023-01444-3
The ability of individuals to recall events is influenced by the time elapsed between the event and the monitoring time. In this article, we introduce a non-recall probability function that incorporates this information into our modeling framework. We model the time-to-event with the Weibull distribution and adopt a latent variable approach to handle situations where recall is not possible. In the classical framework, we obtain point estimators using the expectation-maximization algorithm and construct the observed Fisher information matrix using the missing information principle. Within the Bayesian paradigm, we derive point estimators under suitable choices of priors and calculate highest posterior density intervals using Markov chain Monte Carlo samples. To assess the performance of the proposed estimators, we conduct an extensive simulation study. Additionally, we use age-at-menarche and breastfeeding datasets to illustrate the effectiveness of the proposed methodology.
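As a rough illustration of how a non-recall probability enters the likelihood, the sketch below assumes an exponential-decay recall probability $\exp(-\nu(s-t))$ in the elapsed time and maximizes the resulting likelihood numerically. Both the decay form and the direct maximization are assumptions for illustration; the paper treats unrecalled times as latent variables within an EM algorithm, and its non-recall function may differ.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(par, t_rec, s_rec, s_non, nu=0.5):
    """Recalled events contribute f(t) * P(recall); events known only to
    precede the monitoring time s contribute the integral of
    f(t) * P(no recall) over (0, s). Weibull shape/scale kept positive."""
    k, lam = np.exp(par)
    f = lambda t: weibull_min.pdf(t, k, scale=lam)
    ll = np.sum(np.log(f(t_rec)) - nu * (s_rec - t_rec))
    for s in s_non:
        p, _ = quad(lambda t: f(t) * (1.0 - np.exp(-nu * (s - t))), 0.0, s)
        ll += np.log(max(p, 1e-300))
    return -ll

# hypothetical data: exact times for recalled events, only monitoring times otherwise
t_rec = np.array([1.2, 0.8, 2.1]); s_rec = np.array([2.0, 1.5, 3.0])
s_non = np.array([4.0, 5.5])
fit = minimize(neg_log_lik, x0=np.zeros(2), args=(t_rec, s_rec, s_non))
print(np.exp(fit.x))                                  # estimated (shape, scale)
```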
{"title":"A latent variable approach for modeling recall-based time-to-event data with Weibull distribution","authors":"","doi":"10.1007/s00180-023-01444-3","DOIUrl":"https://doi.org/10.1007/s00180-023-01444-3","url":null,"abstract":"<h3>Abstract</h3> <p>The ability of individuals to recall events is influenced by the time interval between the monitoring time and the occurrence of the event. In this article, we introduce a non-recall probability function that incorporates this information into our modeling framework. We model the time-to-event using the Weibull distribution and adopt a latent variable approach to handle situations where recall is not possible. In the classical framework, we obtain point estimators using expectation-maximization algorithm and construct the observed Fisher information matrix using missing information principle. Within the Bayesian paradigm, we derive point estimators under suitable choice of priors and calculate highest posterior density intervals using Markov Chain Monte Carlo samples. To assess the performance of the proposed estimators, we conduct an extensive simulation study. Additionally, we utilize age at menarche and breastfeeding datasets as examples to illustrate the effectiveness of the proposed methodology.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"23 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139096435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "Testing for linearity in scalar-on-function regression with responses missing at random"
Pub Date: 2024-01-03 | DOI: 10.1007/s00180-023-01445-2
Manuel Febrero-Bande, Pedro Galeano, Eduardo García-Portugués, Wenceslao González-Manteiga
A goodness-of-fit test for the Functional Linear Model with Scalar Response (FLMSR) with responses Missing at Random (MAR) is proposed in this paper. The test statistic relies on a marked empirical process indexed by the projected functional covariate and its distribution under the null hypothesis is calibrated using a wild bootstrap procedure. The computation and performance of the test rely on having an accurate estimator of the functional slope of the FLMSR when the sample has MAR responses. Three estimation methods based on the Functional Principal Components (FPCs) of the covariate are considered. First, the simplified method estimates the functional slope by simply discarding observations with missing responses. Second, the imputed method estimates the functional slope by imputing the missing responses using the simplified estimator. Third, the inverse probability weighted method incorporates the missing response generation mechanism when imputing. Furthermore, both cross-validation and LASSO regression are used to select the FPCs used by each estimator. Several Monte Carlo experiments are conducted to analyze the behavior of the testing procedure in combination with the functional slope estimators. Results indicate that estimators performing missing-response imputation achieve the highest power. The testing procedure is applied to check for linear dependence between the average number of sunny days per year and the mean curve of daily temperatures at weather stations in Spain.
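The calibration machinery is easiest to see in a scalar-covariate stand-in for the projected functional covariate: a Kolmogorov-Smirnov-type statistic built from the marked empirical process of residuals, with its null distribution obtained by wild bootstrap. A minimal sketch, with Rademacher multipliers assumed and no missing responses:

```python
import numpy as np

def linearity_test(x, y, B=500, seed=0):
    """KS-type marked empirical process test of linearity of E[y|x],
    calibrated by wild bootstrap with Rademacher multipliers."""
    rng = np.random.default_rng(seed)
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix of the linear fit
    e = y - H @ y                                  # OLS residuals
    order = np.argsort(x)
    stat = np.max(np.abs(np.cumsum(e[order]))) / np.sqrt(n)
    null = np.empty(B)
    for b in range(B):
        v = rng.choice([-1.0, 1.0], size=n)        # wild multipliers
        yb = H @ y + v * e                         # bootstrap response under the null
        eb = yb - H @ yb                           # re-fitted bootstrap residuals
        null[b] = np.max(np.abs(np.cumsum(eb[order]))) / np.sqrt(n)
    return stat, np.mean(null >= stat)             # statistic and p-value

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 300)
stat, pval = linearity_test(x, x + 0.5 * x**2 + 0.1 * rng.standard_normal(300))
```

Refitting the null model on each bootstrap response mirrors how the paper's procedure re-estimates the functional slope (and, there, re-imputes missing responses) within every bootstrap replicate.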
{"title":"Testing for linearity in scalar-on-function regression with responses missing at random","authors":"Manuel Febrero-Bande, Pedro Galeano, Eduardo García-Portugués, Wenceslao González-Manteiga","doi":"10.1007/s00180-023-01445-2","DOIUrl":"https://doi.org/10.1007/s00180-023-01445-2","url":null,"abstract":"<p>A goodness-of-fit test for the Functional Linear Model with Scalar Response (FLMSR) with responses Missing at Random (MAR) is proposed in this paper. The test statistic relies on a marked empirical process indexed by the projected functional covariate and its distribution under the null hypothesis is calibrated using a wild bootstrap procedure. The computation and performance of the test rely on having an accurate estimator of the functional slope of the FLMSR when the sample has MAR responses. Three estimation methods based on the Functional Principal Components (FPCs) of the covariate are considered. First, the <i>simplified</i> method estimates the functional slope by simply discarding observations with missing responses. Second, the <i>imputed</i> method estimates the functional slope by imputing the missing responses using the simplified estimator. Third, the <i>inverse probability weighted</i> method incorporates the missing response generation mechanism when imputing. Furthermore, both cross-validation and LASSO regression are used to select the FPCs used by each estimator. Several Monte Carlo experiments are conducted to analyze the behavior of the testing procedure in combination with the functional slope estimators. Results indicate that estimators performing missing-response imputation achieve the highest power. The testing procedure is applied to check for linear dependence between the average number of sunny days per year and the mean curve of daily temperatures at weather stations in Spain.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"8 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139093938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "Estimation and prediction with data quality indexes in linear regressions"
Pub Date: 2023-12-20 | DOI: 10.1007/s00180-023-01441-6
Although many statistical applications brush the question of data quality aside, it is a fundamental concern inherent to external data collection. In this paper, data quality relates to the confidence one can have in the covariate values in a regression framework. More precisely, we study how to integrate data-quality information given by an $(n \times p)$ matrix, with $n$ the number of individuals and $p$ the number of explanatory variables. In this view, we suggest a latent variable model that drives the generation of the covariate values, and introduce a new algorithm that takes all this information into account for prediction. Our approach provides unbiased estimators of the regression coefficients and allows predictions adapted to a given quality pattern. The usefulness of our procedure is illustrated through simulations and real-life applications.
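The core idea, that knowing how noisy each covariate is lets one remove the resulting bias, has a classical illustration in the errors-in-variables moment correction shown below. This is a textbook stand-in under a known noise covariance, not the paper's latent variable algorithm.

```python
import numpy as np

def eiv_corrected_ols(W, y, Sigma_u):
    """Errors-in-variables correction: if W = X + U with noise covariance
    Sigma_u known (e.g. implied by a quality index), then W'W/n - Sigma_u
    estimates X'X/n, removing the attenuation bias of naive OLS."""
    n = W.shape[0]
    return np.linalg.solve(W.T @ W / n - Sigma_u, W.T @ y / n)

rng = np.random.default_rng(3)
n, beta = 20000, np.array([1.0, -2.0])
X = rng.standard_normal((n, 2))
Sigma_u = np.diag([0.5, 0.0])                      # first covariate is low-quality
W = X + rng.standard_normal((n, 2)) * np.sqrt(np.diag(Sigma_u))
y = X @ beta + 0.1 * rng.standard_normal(n)
naive = np.linalg.solve(W.T @ W, W.T @ y)          # attenuated toward zero
print(naive, eiv_corrected_ols(W, y, Sigma_u))     # corrected estimate near (1, -2)
```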
{"title":"Estimation and prediction with data quality indexes in linear regressions","authors":"","doi":"10.1007/s00180-023-01441-6","DOIUrl":"https://doi.org/10.1007/s00180-023-01441-6","url":null,"abstract":"<h3>Abstract</h3> <p>Despite many statistical applications brush the question of data quality aside, it is a fundamental concern inherent to external data collection. In this paper, data quality relates to the confidence one can have about the covariate values in a regression framework. More precisely, we study how to integrate the information of data quality given by a <span> <span>((n times p))</span> </span>-matrix, with <em>n</em> the number of individuals and <em>p</em> the number of explanatory variables. In this view, we suggest a latent variable model that drives the generation of the covariate values, and introduce a new algorithm that takes all these information into account for prediction. Our approach provides unbiased estimators of the regression coefficients, and allows to make predictions adapted to some given quality pattern. The usefulness of our procedure is illustrated through simulations and real-life applications. <?oxy_aq_start?>Kindly check and confirm whether the corresponding author is correctly identified.<?oxy_aq_end?><?oxy_aqreply_start?>Yes<?oxy_aqreply_end?></p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"6 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138818581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "An extended Langevinized ensemble Kalman filter for non-Gaussian dynamic systems"
Pub Date: 2023-12-14 | DOI: 10.1007/s00180-023-01443-4
Peiyi Zhang, Tianning Dong, Faming Liang
State estimation for large-scale non-Gaussian dynamic systems remains an unresolved issue, given the nonscalability of existing particle filter algorithms. To address this issue, this paper extends the Langevinized ensemble Kalman filter (LEnKF) algorithm to non-Gaussian dynamic systems by introducing a latent Gaussian measurement variable into the dynamic system. The extended LEnKF algorithm converges to the right filtering distribution as the number of stages becomes large, while inheriting the scalability of the LEnKF algorithm with respect to sample size and state dimension. The performance of the extended LEnKF algorithm is illustrated with dynamic network embedding and dynamic Poisson spatial models.
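The Gaussian building block that the extension reuses is the ensemble Kalman analysis step; the paper's contribution is to augment non-Gaussian observations with a latent Gaussian measurement variable so that such a step applies, and to drive the ensemble with Langevin dynamics. A sketch of the plain stochastic EnKF update, with illustrative shapes:

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """One stochastic EnKF analysis step: nudge each member by a Kalman gain
    built from the ensemble covariance, using perturbed observations so the
    updated ensemble keeps the correct posterior spread."""
    N = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)
    P = A.T @ A / (N - 1)                          # ensemble covariance estimate
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(4)
ens = rng.standard_normal((200, 3)) + 5.0          # prior ensemble far from the data
H = np.eye(2, 3); R = 0.1 * np.eye(2)
post = enkf_update(ens, np.zeros(2), H, R, rng)    # observed coordinates pulled toward 0
```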
{"title":"An extended Langevinized ensemble Kalman filter for non-Gaussian dynamic systems","authors":"Peiyi Zhang, Tianning Dong, Faming Liang","doi":"10.1007/s00180-023-01443-4","DOIUrl":"https://doi.org/10.1007/s00180-023-01443-4","url":null,"abstract":"<p>State estimation for large-scale non-Gaussian dynamic systems remains an unresolved issue, given nonscalability of the existing particle filter algorithms. To address this issue, this paper extends the Langevinized ensemble Kalman filter (LEnKF) algorithm to non-Gaussian dynamic systems by introducing a latent Gaussian measurement variable to the dynamic system. The extended LEnKF algorithm can converge to the right filtering distribution as the number of stages become large, while inheriting the scalability of the LEnKF algorithm with respect to the sample size and state dimension. The performance of the extended LEnKF algorithm is illustrated by dynamic network embedding and dynamic Poisson spatial models.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"38 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138629856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "An effective method for identifying clusters of robot strengths"
Pub Date: 2023-12-11 | DOI: 10.1007/s00180-023-01442-5
Jen-Chieh Teng, Chin-Tsang Chiang, Alvin Lim
In the analysis of qualification stage data from FIRST Robotics Competition (FRC) championships, the ratio (1.67-1.68) of the number of observations (110-114 matches) to the number of parameters (66-68 robots) in each division has been found to be quite small for the most commonly used winning margin power rating (WMPR) model. This usually leads to imprecise estimates and inaccurate predictions in the three-on-three matches of which FRC tournaments are composed. Recognizing a clustering feature in the estimated robot strengths, a more flexible model with latent clusters of robots was proposed to alleviate overparameterization of the WMPR model. Since its structure can be regarded as a dimension reduction of the parameter space in the WMPR model, the identification of clusters of robot strengths is naturally transformed into a model selection problem. Instead of comparing a huge number of competing models ($7.76\times 10^{67}$ to $3.66\times 10^{70}$), we develop an effective method to estimate the number of clusters, the clusters of robots, and the robot strengths from qualification stage data of the FRC championships. The new method consists of two parts: (i) a combination of hierarchical and non-hierarchical classifications to determine candidate models; and (ii) variant goodness-of-fit criteria to select optimal models. In contrast to existing hierarchical classification, each step of our proposed non-hierarchical classification is based on estimated robot strengths from a candidate model in the preceding non-hierarchical classification step. A great advantage of the proposed methodology is its ability to consider the possibility of reassigning robots to other clusters. To reduce overestimation of the number of clusters by the mean squared prediction error criteria, corresponding Bayesian information criteria are further established as alternatives for model selection. With a coherent assembly of these essential elements, a systematic procedure is presented to perform the estimation of parameters. In addition, we propose two indices to measure the nested relation between clusters from any two models and the monotonic association between robot strengths from any two models. Data from the 2018 and 2019 FRC championships and a simulation study are also used to illustrate the applicability and superiority of our proposed methodology.
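A minimal version of the WMPR fit makes the overparameterization tangible: the design matrix has one +1/-1 triple per alliance per match, and with only about 1.7 observations per parameter some regularization or clustering is needed. The sketch below fits ridge-stabilized strengths and then clusters them with k-means, a crude stand-in for the paper's hierarchical/non-hierarchical classification with goodness-of-fit criteria; the sizes follow the abstract, the rest is illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def fit_wmpr(red_idx, blue_idx, margin, n_robots, ridge=1.0):
    """WMPR-style fit: winning margin ~ sum of red-alliance strengths minus
    sum of blue-alliance strengths, solved by ridge least squares."""
    A = np.zeros((len(margin), n_robots))
    for m, (r, b) in enumerate(zip(red_idx, blue_idx)):
        A[m, r] += 1.0                              # three red robots per match
        A[m, b] -= 1.0                              # three blue robots per match
    return np.linalg.solve(A.T @ A + ridge * np.eye(n_robots), A.T @ margin)

rng = np.random.default_rng(5)
n_robots, n_matches = 66, 112
true = np.repeat([0.0, 5.0, 10.0], 22)              # three latent strength clusters
teams = np.array([rng.choice(n_robots, 6, replace=False) for _ in range(n_matches)])
margin = true[teams[:, :3]].sum(1) - true[teams[:, 3:]].sum(1) + rng.standard_normal(n_matches)
strength = fit_wmpr(teams[:, :3], teams[:, 3:], margin, n_robots)
centers, labels = kmeans2(strength[:, None], 3, seed=0, minit='++')   # recover the clusters
```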
{"title":"An effective method for identifying clusters of robot strengths","authors":"Jen-Chieh Teng, Chin-Tsang Chiang, Alvin Lim","doi":"10.1007/s00180-023-01442-5","DOIUrl":"https://doi.org/10.1007/s00180-023-01442-5","url":null,"abstract":"<p>In the analysis of qualification stage data from FIRST Robotics Competition (FRC) championships, the ratio (1.67–1.68) of the number of observations (110–114 matches) to the number of parameters (66–68 robots) in each division has been found to be quite small for the most commonly used winning margin power rating (WMPR) model. This usually leads to imprecise estimates and inaccurate predictions in such three-on-three matches that FRC tournaments are composed of. With the recognition of a clustering feature in estimated robot strengths, a more flexible model with latent clusters of robots was proposed to alleviate overparameterization of the WMPR model. Since its structure can be regarded as a dimension reduction of the parameter space in the WMPR model, the identification of clusters of robot strengths is naturally transformed into a model selection problem. Instead of comparing a huge number of competing models <span>((7.76times 10^{67})</span> to <span>(3.66times 10^{70}))</span>, we develop an effective method to estimate the number of clusters, clusters of robots and robot strengths in the format of qualification stage data from the FRC championships. The new method consists of two parts: (i) a combination of hierarchical and non-hierarchical classifications to determine candidate models; and (ii) variant goodness-of-fit criteria to select optimal models. In contrast to existing hierarchical classification, each step of our proposed non-hierarchical classification is based on estimated robot strengths from a candidate model in the preceding non-hierarchical classification step. A great advantage of the proposed methodology is its ability to consider the possibility of reassigning robots to other clusters. To reduce overestimation of the number of clusters by the mean squared prediction error criteria, corresponding Bayesian information criteria are further established as alternatives for model selection. With a coherent assembly of these essential elements, a systematic procedure is presented to perform the estimation of parameters. In addition, we propose two indices to measure the nested relation between clusters from any two models and monotonic association between robot strengths from any two models. Data from the 2018 and 2019 FRC championships and a simulation study are also used to illustrate the applicability and superiority of our proposed methodology.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"12 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138576940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}