Finite mixture of regression models for censored data based on the skew-t distribution
Pub Date: 2024-02-10. DOI: 10.1007/s00180-024-01459-4
Jiwon Park, Dipak K. Dey, Víctor H. Lachos
Finite mixture models have been widely used to model and analyze data from heterogeneous populations. In practical scenarios, these types of data are often subject to upper and/or lower detection limits due to constraints imposed by experimental apparatuses. Additional complexity arises when the measures of each mixture component deviate significantly from the normal distribution, simultaneously manifesting characteristics such as multimodality, asymmetry, and heavy-tailed behavior. This paper introduces a flexible model tailored for censored data to address these intricacies, leveraging the finite mixture of skew-t distributions. An Expectation Conditional Maximization Either (ECME) algorithm is developed to efficiently derive parameter estimates by iteratively maximizing the observed data log-likelihood function. The algorithm has closed-form expressions at the E-step that rely on formulas for the mean and variance of truncated skew-t distributions. Moreover, a method based on general information principles is presented for approximating the asymptotic covariance matrix of the estimators. Results obtained from the analysis of both simulated and real datasets demonstrate the proposed method's effectiveness.
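For orientation, here is the generic form of this model class in standard notation (the paper's own parameterization may differ): a G-component finite mixture of skew-t densities, together with the likelihood contribution of an observation left-censored at a detection limit c,

$$ f(y \mid \Theta) = \sum_{g=1}^{G} \pi_g\, \mathrm{ST}\big(y \mid \mu_g, \sigma_g^2, \lambda_g, \nu_g\big), \qquad \sum_{g=1}^{G} \pi_g = 1, $$

$$ P(Y \le c \mid \Theta) = \sum_{g=1}^{G} \pi_g \int_{-\infty}^{c} \mathrm{ST}\big(u \mid \mu_g, \sigma_g^2, \lambda_g, \nu_g\big)\, du, $$

where ST denotes the skew-t density with location, scale, skewness, and degrees-of-freedom parameters. The censored part of the likelihood is what brings truncated skew-t moments into the E-step.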
A simulation model to analyze the behavior of a faculty retirement plan: a case study in Mexico
Pub Date: 2024-02-09. DOI: 10.1007/s00180-024-01456-7
Marco Antonio Montufar-Benítez, Jaime Mora-Vargas, Carlos Arturo Soto-Campos, Gilberto Pérez-Lechuga, José Raúl Castro-Esparza
The main goal of this study was to determine confidence intervals for the average age, average seniority, and average savings of faculty members in a university retirement system using a simulation model. The simulation, built in Arena, takes age, seniority, and the probability of remaining at the institution as the main random input variables. An annual interest rate of 7% and an average annual salary increase of 3% were assumed. The simulated scenario had both the teacher and the university making contributions, each equal to 5% of the teacher's salary. Since the base salaries at which teachers join the university vary, we considered a monthly salary of MXN 23 181.2, corresponding to full-time teachers with mid-range salaries. The results of a simulation with 30 replicates showed that the confidence intervals were (55.0, 55.2) years for the average age at retirement, (22.1, 22.3) years for the average seniority, and (329 795.2, 341 287.0) MXN for the average savings. Moreover, the risk that a retiree aged 62 or older with more than 25 years of service outlives his or her savings is approximately 98%, with the savings running out at 64 years of age.
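As a rough illustration of the fund-accumulation logic such a model implies, here is a minimal R sketch using only figures quoted in the abstract; the monthly compounding scheme and contribution timing are assumptions, and this is not the authors' Arena model.

    simulate_savings <- function(years_worked, monthly_salary = 23181.2,
                                 annual_rate = 0.07, annual_raise = 0.03,
                                 contribution_rate = 0.05 + 0.05) {
      balance <- 0
      r_month <- annual_rate / 12
      for (year in seq_len(years_worked)) {
        for (month in 1:12) {
          # deposit 10% of salary, then accrue one month of interest
          balance <- (balance + contribution_rate * monthly_salary) * (1 + r_month)
        }
        monthly_salary <- monthly_salary * (1 + annual_raise)  # annual raise
      }
      balance
    }
    simulate_savings(22)  # fund after roughly the average seniority at retirement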
{"title":"A simulation model to analyze the behavior of a faculty retirement plan: a case study in Mexico","authors":"Marco Antonio Montufar-Benítez, Jaime Mora-Vargas, Carlos Arturo Soto-Campos, Gilberto Pérez-Lechuga, José Raúl Castro-Esparza","doi":"10.1007/s00180-024-01456-7","DOIUrl":"https://doi.org/10.1007/s00180-024-01456-7","url":null,"abstract":"<p>The main goal in this study was to determine confidence intervals for average age, average seniority, and average money-savings, for faculty members in a university retirement system using a simulation model. The simulation—built-in Arena—considers age, seniority, and the probability of continuing in the institution as the main input random variables in the model. An annual interest rate of 7% and an average annual salary increase of 3% were considered. The scenario simulated consisted of the teacher and the university making contributions, the faculty 5% of his salary, and the university 5% of the teacher’s salary. Since the base salaries with which teachers join to university are variable, we considered a monthly salary of MXN 23 181.2, corresponding to full-time teachers with middle salaries. The results obtained by a simulation of 30 replicates showed that the confidence intervals for the average age at retirement were (55.0, 55.2) years, for the average seniority (22.1, 22.3) years, and for the average savings amount (329 795.2, 341 287.0) MXN. Moreover, the risk that a retiree of 62 years of age and more of 25 years of work, is alive after his savings runs out is approximately 98% and this happens at 64 years of age.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"4 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139765967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fitting concentric elliptical shapes under general model
Pub Date: 2024-02-09. DOI: 10.1007/s00180-024-01460-x
Fitting concentric ellipses is a crucial yet challenging task in image processing, pattern recognition, and astronomy. To address this complexity, researchers have introduced simplified models by imposing geometric assumptions. These assumptions enable the linearization of the model through reparameterization, allowing for the extension of various fitting methods. However, these restrictive assumptions often fail to hold in real-world scenarios, limiting their practical applicability. In this work, we propose two novel estimators that relax these assumptions: the Least Squares method (LS) and the Gradient Algebraic Fit (GRAF). Since these methods are iterative, we provide numerical implementations and strategies for obtaining reliable initial guesses. Moreover, we employ perturbation theory to conduct a first-order analysis, deriving the leading terms of their Mean Squared Errors and their theoretical lower bounds. Our theoretical findings reveal that the GRAF is statistically efficient, while the LS method is not. We further validate our theoretical results and the performance of the proposed estimators through a series of numerical experiments on both real and synthetic data.
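In generic notation (my formulation, not necessarily the authors'), K concentric ellipses share a common center c, and a least-squares-type fit minimizes the summed squared distances of the observed points to their respective curves:

$$ E_k = \{ x \in \mathbb{R}^2 : (x - c)^\top A_k\, (x - c) = 1 \}, \qquad A_k \succ 0, \quad k = 1, \dots, K, $$

$$ \min_{c,\, A_1, \dots, A_K} \; \sum_{k=1}^{K} \sum_{i=1}^{n_k} d\big(x_{ki}, E_k\big)^2, $$

where d(x, E) is a point-to-ellipse distance (geometric for LS-type fits, algebraic for GRAF-type fits). The simplified models mentioned above constrain the matrices $A_k$, for example to share a common orientation.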
Exploring local explanations of nonlinear models using animated linear projections
Pub Date: 2024-01-31. DOI: 10.1007/s00180-023-01453-2
Nicholas Spyrison, Dianne Cook, Przemyslaw Biecek
The increased predictive power of machine learning models comes at the cost of increased complexity and loss of interpretability, particularly in comparison to parametric statistical models. This trade-off has led to the emergence of eXplainable AI (XAI), which provides methods, such as local explanations (LEs) and local variable attributions (LVAs), to shed light on how a model uses predictors to arrive at a prediction. These provide a point estimate of the linear variable importance in the vicinity of a single observation. However, LVAs tend not to effectively handle association between predictors. To understand how the interaction between predictors affects the variable importance estimate, we can convert LVAs into linear projections and use the radial tour. This is also useful for learning how a model has made a mistake, the effect of outliers, or the clustering of observations. The approach is illustrated with examples from categorical (penguin species, chocolate types) and quantitative (soccer/football salaries, house prices) response models. The methods are implemented in the R package cheem, available on CRAN.
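The attribution-to-projection step is essentially a normalization; a minimal generic sketch in R follows (the data and attribution values are made up for illustration, and this does not use the cheem API):

    set.seed(1)
    X <- matrix(rnorm(30), ncol = 3,
                dimnames = list(NULL, c("bill_len", "bill_dep", "flipper_len")))
    lva   <- c(bill_len = 0.42, bill_dep = -1.10, flipper_len = 0.25)  # hypothetical LVA
    basis <- lva / sqrt(sum(lva^2))  # unit-length 1D projection basis
    proj  <- X %*% basis             # observations projected onto the attribution
    # the radial tour then animates rotations of this basis, varying one
    # variable's contribution while the projected data are watched
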
Semiparametric regression modelling of current status competing risks data: a Bayesian approach
Pub Date: 2024-01-31. DOI: 10.1007/s00180-024-01455-8
Pavithra Hariharan, P. G. Sankaran
Current status censoring arises in survival analysis when exact event times are not known but each individual is monitored once for survival status. Current status data often arise in medical research from situations that involve multiple causes of failure. Examining current status competing risks data, commonly encountered in epidemiological studies and clinical trials, is more advantageous with Bayesian methods than with conventional approaches: they excel at integrating prior knowledge with the observed data and deliver accurate results even with small samples. Inspired by these advantages, the present study pioneers a Bayesian framework for both modelling and analysis of current status competing risks data together with covariates. By means of the proportional hazards model, estimation procedures for the regression parameters and cumulative incidence functions are established assuming appropriate prior distributions. The posterior computation is performed using an adaptive Metropolis-Hastings algorithm. Methods for comparing and validating models have been devised. An assessment of the finite sample characteristics of the estimators is conducted through simulation studies. Through the application of this Bayesian approach to prostate cancer clinical trial data, its practical efficacy is demonstrated.
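In generic notation (standard competing-risks quantities; the paper's exact specification may differ), the cause-specific proportional hazards structure and the resulting cumulative incidence functions are

$$ \Lambda_j(t \mid x) = \Lambda_{0j}(t)\, \exp\big(x^\top \beta_j\big), \qquad F_j(t \mid x) = P\big(T \le t,\ \delta = j \mid x\big), \quad j = 1, \dots, J, $$

and for a subject with covariates x examined once at monitoring time C, the data reveal only which cause, if any, has occurred by C, so the likelihood is built from $F_j(C \mid x)$ for an observed cause and $1 - \sum_{j} F_j(C \mid x)$ for an event-free subject.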
{"title":"Semiparametric regression modelling of current status competing risks data: a Bayesian approach","authors":"Pavithra Hariharan, P. G. Sankaran","doi":"10.1007/s00180-024-01455-8","DOIUrl":"https://doi.org/10.1007/s00180-024-01455-8","url":null,"abstract":"<p>The current status censoring takes place in survival analysis when the exact event times are not known, but each individual is monitored once for their survival status. The current status data often arise in medical research, from situations that involve multiple causes of failure. Examining current status competing risks data, commonly encountered in epidemiological studies and clinical trials, is more advantageous with Bayesian methods compared to conventional approaches. They excel in integrating prior knowledge with the observed data and delivering accurate results even with small samples. Inspired by these advantages, the present study is pioneering in introducing a Bayesian framework for both modelling and analysis of current status competing risks data together with covariates. By means of the proportional hazards model, estimation procedures for the regression parameters and cumulative incidence functions are established assuming appropriate prior distributions. The posterior computation is performed using an adaptive Metropolis–Hastings algorithm. Methods for comparing and validating models have been devised. An assessment of the finite sample characteristics of the estimators is conducted through simulation studies. Through the application of this Bayesian approach to prostate cancer clinical trial data, its practical efficacy is demonstrated.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"37 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139649048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differentiated uniformization: a new method for inferring Markov chains on combinatorial state spaces including stochastic epidemic models
Pub Date: 2024-01-26. DOI: 10.1007/s00180-024-01454-9
Kevin Rupp, Rudolf Schill, Jonas Süskind, Peter Georg, Maren Klever, Andreas Lösch, Lars Grasedyck, Tilo Wettig, Rainer Spang
We consider continuous-time Markov chains that describe the stochastic evolution of a dynamical system by a transition-rate matrix Q which depends on a parameter $\theta$. Computing the probability distribution over states at time t requires the matrix exponential $\exp(tQ)$, and inferring $\theta$ from data requires its derivative $\partial \exp(tQ)/\partial \theta$. Both are challenging to compute when the state space and hence the size of Q is huge. This can happen when the state space consists of all combinations of the values of several interacting discrete variables. Often it is even impossible to store Q. However, when Q can be written as a sum of tensor products, computing $\exp(tQ)$ becomes feasible by the uniformization method, which does not require explicit storage of Q. Here we provide an analogous algorithm for computing $\partial \exp(tQ)/\partial \theta$, the differentiated uniformization method. We demonstrate our algorithm for the stochastic SIR model of epidemic spread, for which we show that Q can be written as a sum of tensor products. We estimate monthly infection and recovery rates during the first wave of the COVID-19 pandemic in Austria and quantify their uncertainty in a full Bayesian analysis. Implementation and data are available at https://github.com/spang-lab/TenSIR.
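For reference, classical uniformization (a textbook identity, quoted here for context) replaces the matrix exponential with a Poisson-weighted power series of a stochastic matrix:

$$ P = I + \frac{1}{\gamma} Q, \qquad \gamma \ge \max_i |q_{ii}|, $$

$$ \exp(tQ)\, v = \sum_{k=0}^{\infty} e^{-\gamma t} \frac{(\gamma t)^k}{k!}\, P^k v, $$

so only matrix-vector products with P are needed, and these stay cheap when Q, and hence P, is a sum of tensor products. Differentiating the series term by term in $\theta$, with $\partial P/\partial \theta = (\partial Q/\partial \theta)/\gamma$ (taking the uniformization rate $\gamma$ as fixed in $\theta$), is the idea behind the differentiated variant.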
{"title":"Differentiated uniformization: a new method for inferring Markov chains on combinatorial state spaces including stochastic epidemic models","authors":"Kevin Rupp, Rudolf Schill, Jonas Süskind, Peter Georg, Maren Klever, Andreas Lösch, Lars Grasedyck, Tilo Wettig, Rainer Spang","doi":"10.1007/s00180-024-01454-9","DOIUrl":"https://doi.org/10.1007/s00180-024-01454-9","url":null,"abstract":"<p>We consider continuous-time Markov chains that describe the stochastic evolution of a dynamical system by a transition-rate matrix <i>Q</i> which depends on a parameter <span>(theta )</span>. Computing the probability distribution over states at time <i>t</i> requires the matrix exponential <span>(exp ,left( tQright) ,)</span>, and inferring <span>(theta )</span> from data requires its derivative <span>(partial exp ,left( tQright) ,/partial theta )</span>. Both are challenging to compute when the state space and hence the size of <i>Q</i> is huge. This can happen when the state space consists of all combinations of the values of several interacting discrete variables. Often it is even impossible to store <i>Q</i>. However, when <i>Q</i> can be written as a sum of tensor products, computing <span>(exp ,left( tQright) ,)</span> becomes feasible by the uniformization method, which does not require explicit storage of <i>Q</i>. Here we provide an analogous algorithm for computing <span>(partial exp ,left( tQright) ,/partial theta )</span>, the <i>differentiated uniformization method</i>. We demonstrate our algorithm for the stochastic SIR model of epidemic spread, for which we show that <i>Q</i> can be written as a sum of tensor products. We estimate monthly infection and recovery rates during the first wave of the COVID-19 pandemic in Austria and quantify their uncertainty in a full Bayesian analysis. Implementation and data are available at https://github.com/spang-lab/TenSIR.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"74 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139578734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new approach to nonparametric estimation of multivariate spectral density function using basis expansion
Pub Date: 2024-01-20. DOI: 10.1007/s00180-023-01451-4
Shirin Nezampour, Alireza Nematollahi, Robert T. Krafty, Mehdi Maadooliat
This paper develops a nonparametric method for estimating the spectral density of multivariate stationary time series using basis expansion. A likelihood-based approach is used to fit the model through the minimization of a penalized Whittle negative log-likelihood. Then, a Newton-type algorithm is developed for the computation. In this method, we smooth the Cholesky factors of the multivariate spectral density matrix in a way that the reconstructed estimate based on the smoothed Cholesky components is consistent and positive-definite. In a simulation study, we have illustrated and compared our proposed method with other competitive approaches. Finally, we apply our approach to two real-world problems: electroencephalogram signal analysis and the El Niño cycle.
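In generic form (standard Whittle notation; the placement of the penalty is my assumption), the objective being minimized is a penalized Whittle negative log-likelihood over the Fourier frequencies $\omega_k$:

$$ -\ell_W(f) + \lambda J(f) = \sum_{k} \Big[ \log \det f(\omega_k) + \operatorname{tr}\big( f(\omega_k)^{-1} I(\omega_k) \big) \Big] + \lambda\, J(f), $$

where $f$ is the matrix-valued spectral density, $I(\omega_k)$ the periodogram matrix, and $J(\cdot)$ a roughness penalty on the basis coefficients; smoothing the Cholesky factors of $f$ rather than $f$ itself keeps each reconstructed $f(\omega_k)$ positive-definite.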
{"title":"A new approach to nonparametric estimation of multivariate spectral density function using basis expansion","authors":"Shirin Nezampour, Alireza Nematollahi, Robert T. Krafty, Mehdi Maadooliat","doi":"10.1007/s00180-023-01451-4","DOIUrl":"https://doi.org/10.1007/s00180-023-01451-4","url":null,"abstract":"<p>This paper develops a nonparametric method for estimating the spectral density of multivariate stationary time series using basis expansion. A likelihood-based approach is used to fit the model through the minimization of a penalized Whittle negative log-likelihood. Then, a Newton-type algorithm is developed for the computation. In this method, we smooth the Cholesky factors of the multivariate spectral density matrix in a way that the reconstructed estimate based on the smoothed Cholesky components is consistent and positive-definite. In a simulation study, we have illustrated and compared our proposed method with other competitive approaches. Finally, we apply our approach to two real-world problems, Electroencephalogram signals analysis, <span>(El Nitilde{n}o)</span> Cycle.\u0000</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"13 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139508567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Censored broken adaptive ridge regression in high-dimension
Pub Date: 2024-01-17. DOI: 10.1007/s00180-023-01446-1
Jeongjin Lee, Taehwa Choi, Sangbum Choi
Broken adaptive ridge (BAR) is a penalized regression method that performs variable selection via a computationally scalable surrogate to $L_0$ regularization. The BAR regression has many appealing features; it converges to selection with $L_0$ penalties as a result of reweighting $L_2$ penalties, and satisfies the oracle property with a grouping effect for highly correlated covariates. In this paper, we investigate the BAR procedure for variable selection in a semiparametric accelerated failure time model with complex high-dimensional censored data. Coupled with Buckley-James-type responses, BAR-based variable selection procedures can be performed when event times are censored in complex ways, such as right-censored, left-censored, or double-censored. Our approach utilizes a two-stage cyclic coordinate descent algorithm to minimize the objective function by iteratively estimating the pseudo survival response and regression coefficients along the direction of coordinates. Under some weak regularity conditions, we establish both the oracle property and the grouping effect of the proposed BAR estimator. Numerical studies are conducted to investigate the finite-sample performance of the proposed algorithm, and an application to real data is provided as a data example.
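The reweighting idea, in its generic uncensored least-squares form (the paper couples it with Buckley-James pseudo responses), iterates ridge problems whose penalties are adapted to the previous estimate:

$$ \hat\beta^{(m+1)} = \arg\min_{\beta}\; \| y - X\beta \|_2^2 + \lambda \sum_{j=1}^{p} \frac{\beta_j^2}{\big(\hat\beta_j^{(m)}\big)^2}, $$

so coefficients that shrink across iterations receive ever larger penalties and are driven exactly to zero in the limit, mimicking $L_0$ selection while each step remains a smooth ridge fit.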
High-dimensional penalized Bernstein support vector classifier
Pub Date: 2024-01-16. DOI: 10.1007/s00180-023-01448-z
Rachid Kharoubi, Abdallah Mkhadri, Karim Oualkacha
The support vector machine (SVM) is a powerful classifier used for binary classification to improve prediction accuracy. However, the nondifferentiability of the SVM hinge loss function can lead to computational difficulties in high-dimensional settings. To overcome this problem, we rely on the Bernstein polynomial and propose a new smoothed version of the SVM hinge loss called the Bernstein support vector machine (BernSVC). This extension is suitable for the high-dimension regime. As the BernSVC objective loss function is twice differentiable everywhere, we propose two efficient algorithms for computing the solution of the penalized BernSVC. The first algorithm is based on coordinate descent with the maximization-majorization principle, and the second is an iterative reweighted least squares-type algorithm. Under standard assumptions, we derive a cone condition and a restricted strong convexity to establish an upper bound for the weighted lasso BernSVC estimator. By using a local linear approximation, we extend the latter result to the penalized BernSVC with the nonconvex penalties SCAD and MCP. Our bound holds with high probability and achieves the so-called fast rate under mild conditions on the design matrix. Simulation studies illustrate the prediction accuracy of BernSVC relative to its competitors and compare the performance of the two algorithms in terms of computational timing and error estimation. The use of the proposed method is illustrated through the analysis of three large-scale real data examples.
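For context, the underlying smoothing device is the Bernstein polynomial approximation of a function h on [0, 1] (the exact rescaling of the hinge loss used by BernSVC is not given here):

$$ B_n(h; u) = \sum_{k=0}^{n} h\!\left(\tfrac{k}{n}\right) \binom{n}{k}\, u^k (1 - u)^{n-k}, \qquad u \in [0, 1], $$

which is a polynomial, hence infinitely differentiable, and converges uniformly to h as $n \to \infty$; applied to a bounded rescaling of the hinge loss, it yields the twice-differentiable surrogate that coordinate descent and IRLS exploit.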
{"title":"High-dimensional penalized Bernstein support vector classifier","authors":"Rachid Kharoubi, Abdallah Mkhadri, Karim Oualkacha","doi":"10.1007/s00180-023-01448-z","DOIUrl":"https://doi.org/10.1007/s00180-023-01448-z","url":null,"abstract":"<p>The support vector machine (SVM) is a powerful classifier used for binary classification to improve the prediction accuracy. However, the nondifferentiability of the SVM hinge loss function can lead to computational difficulties in high-dimensional settings. To overcome this problem, we rely on the Bernstein polynomial and propose a new smoothed version of the SVM hinge loss called the Bernstein support vector machine (BernSVC). This extension is suitable for the high dimension regime. As the BernSVC objective loss function is twice differentiable everywhere, we propose two efficient algorithms for computing the solution of the penalized BernSVC. The first algorithm is based on coordinate descent with the maximization-majorization principle and the second algorithm is the iterative reweighted least squares-type algorithm. Under standard assumptions, we derive a cone condition and a restricted strong convexity to establish an upper bound for the weighted lasso BernSVC estimator. By using a local linear approximation, we extend the latter result to the penalized BernSVC with nonconvex penalties SCAD and MCP. Our bound holds with high probability and achieves the so-called fast rate under mild conditions on the design matrix. Simulation studies are considered to illustrate the prediction accuracy of BernSVC relative to its competitors and also to compare the performance of the two algorithms in terms of computational timing and error estimation. The use of the proposed method is illustrated through analysis of three large-scale real data examples.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"262 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139482088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Random forest based quantile-oriented sensitivity analysis indices estimation
Pub Date: 2024-01-12. DOI: 10.1007/s00180-023-01450-5
Kévin Elie-Dit-Cosaque, Véronique Maume-Deschamps
We propose a random forest based estimation procedure for Quantile-Oriented Sensitivity Analysis (QOSA). In order to be efficient, a cross-validation step on the leaf size of the trees is required. Our full estimation procedure is tested on both simulated data and a real dataset. Our estimators use either bootstrap samples or the original sample in the estimation. Also, they are based either on a quantile plug-in procedure (the R-estimators) or on a direct minimization (the Q-estimators). This leads to 8 different estimators, which are compared in simulations. From these simulations, it appears that the estimation method based on direct minimization is better than the one that plugs in the quantile. This is a significant result because the method with direct minimization requires only one sample and could therefore be preferred.
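For context, the QOSA index of an input $X_i$ at level $\alpha$ (the standard definition from the QOSA literature, not restated in the abstract) compares the pinball-loss risk of the conditional and unconditional $\alpha$-quantiles of the output Y:

$$ S_i^{\alpha} = 1 - \frac{\mathbb{E}\big[ \psi_\alpha\big(Y,\, q^{\alpha}(Y \mid X_i)\big) \big]}{\mathbb{E}\big[ \psi_\alpha\big(Y,\, q^{\alpha}(Y)\big) \big]}, \qquad \psi_\alpha(y, q) = (y - q)\big(\alpha - \mathbf{1}_{\{y \le q\}}\big), $$

where $q^{\alpha}$ denotes an $\alpha$-quantile. The random forest supplies the conditional quantile estimates, either plugged in directly (the R-estimators) or obtained by direct minimization of the pinball risk (the Q-estimators).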
{"title":"Random forest based quantile-oriented sensitivity analysis indices estimation","authors":"Kévin Elie-Dit-Cosaque, Véronique Maume-Deschamps","doi":"10.1007/s00180-023-01450-5","DOIUrl":"https://doi.org/10.1007/s00180-023-01450-5","url":null,"abstract":"<p>We propose a random forest based estimation procedure for Quantile-Oriented Sensitivity Analysis—QOSA. In order to be efficient, a cross-validation step on the leaf size of trees is required. Our full estimation procedure is tested on both simulated data and a real dataset. Our estimators use either the bootstrap samples or the original sample in the estimation. Also, they are either based on a quantile plug-in procedure (the <i>R</i>-estimators) or on a direct minimization (the <i>Q</i>-estimators). This leads to 8 different estimators which are compared on simulations. From these simulations, it seems that the estimation method based on a direct minimization is better than the one plugging the quantile. This is a significant result because the method with direct minimization requires only one sample and could therefore be preferred.</p>","PeriodicalId":55223,"journal":{"name":"Computational Statistics","volume":"54 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2024-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139462061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}