
Computational Statistics: Latest Publications

Using the Krylov subspace formulation to improve regularisation and interpretation in partial least squares regression
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-12 · DOI: 10.1007/s00180-024-01545-7
Tommy Löfstedt

Partial least squares regression (PLS-R) has been an important regression method in the life sciences and many other fields for decades. However, PLS-R is typically solved using an opaque algorithmic approach, rather than through an optimisation formulation and procedure. There is a clear optimisation formulation of the PLS-R problem based on a Krylov subspace formulation, but it is only rarely considered. The popularity of PLS-R is attributed to the ability to interpret the data through the model components, but the model components are not available when solving the PLS-R problem using the Krylov subspace formulation. We therefore highlight a simple reformulation of the PLS-R problem using the Krylov subspace formulation as a promising modelling framework for PLS-R, and illustrate one of its main benefits: it allows arbitrary penalties on the regression coefficients in the PLS-R model. Further, we propose an approach to estimate the PLS-R model components for the solution found through the Krylov subspace formulation, namely the components we would have obtained had we been able to use the common algorithms for estimating the PLS-R model. We illustrate the utility of the proposed method on simulated and real data.
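The Krylov connection the abstract relies on can be sketched numerically: the k-component PLS-R coefficient vector is known to lie in the Krylov subspace K_k(XᵀX, Xᵀy), so an ordinary least-squares fit restricted to that subspace reproduces the regularising effect of truncating at k components. A minimal sketch on synthetic data (the function name `pls_krylov_beta` and the data are illustrative, not from the paper):

```python
import numpy as np

def pls_krylov_beta(X, y, k):
    """Least-squares coefficients restricted to the Krylov subspace
    K_k(X'X, X'y), the subspace in which the k-component PLS-R
    solution is known to lie."""
    A, s = X.T @ X, X.T @ y
    cols, v = [], s
    for _ in range(k):
        cols.append(v)
        v = A @ v
    # Orthonormalise the Krylov basis for numerical stability
    Q, _ = np.linalg.qr(np.column_stack(cols))
    w, *_ = np.linalg.lstsq(X @ Q, y, rcond=None)
    return Q @ w  # coefficient vector in the original variable space

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)

beta1 = pls_krylov_beta(X, y, 1)
beta3 = pls_krylov_beta(X, y, 3)
```

Because the Krylov subspaces are nested, the fit can only improve as k grows; this restricted least-squares view is also the point at which arbitrary coefficient penalties could be added.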

Citations: 0
Robust matrix factor analysis method with adaptive parameter adjustment using Cauchy weighting
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-12 · DOI: 10.1007/s00180-024-01548-4
Junchen Li

In recent years, high-dimensional matrix factor models have been widely applied in various fields. However, there are few methods that effectively handle heavy-tailed data. To address this problem, we introduced a smooth Cauchy loss function and established an optimization objective through norm minimization, deriving a Cauchy version of the weighted iterative estimation method. Unlike the Huber loss weighted estimation method, the weight calculation in this method is a smooth function rather than a piecewise function. It also considers the need to update parameters in the Cauchy loss function with each iteration during estimation. Ultimately, we propose a weighted estimation method with adaptive parameter adjustment. Subsequently, this paper analyzes the theoretical properties of the method, proving that it has a fast convergence rate. Through data simulation, our method demonstrates significant advantages. Thus, it can serve as a better alternative to other existing estimation methods. Finally, we analyzed a dataset of regional population movements between cities, demonstrating that our proposed method offers estimations with excellent interpretability compared to other methods.
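For intuition, here is a generic iteratively reweighted sketch of the two ingredients the abstract highlights: a smooth Cauchy weight function w(r) = 1/(1 + (r/c)²), and a loss parameter c that is re-estimated at every iteration. It estimates a simple location parameter rather than a matrix factor model, and the MAD-based scale update is an assumption of this sketch, not the authors' scheme:

```python
import statistics

def cauchy_location(xs, iters=50):
    """Iteratively reweighted location estimate with smooth Cauchy
    weights; the scale c is re-estimated from the residuals at every
    iteration, echoing the adaptive parameter adjustment idea."""
    m = statistics.median(xs)
    for _ in range(iters):
        resid = [x - m for x in xs]
        mad = statistics.median(abs(r) for r in resid)
        c = max(1.4826 * mad, 1e-8)  # guard against a zero scale
        w = [1.0 / (1.0 + (r / c) ** 2) for r in resid]  # smooth, not piecewise
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

data = [4.8, 5.1, 4.9, 5.2, 5.0, 50.0, 60.0]  # two gross outliers
robust = cauchy_location(data)
naive = sum(data) / len(data)
```

Unlike the Huber weight, which switches between two formulas at a cutoff, the Cauchy weight above is a single smooth function that downweights the outliers continuously; the naive mean is pulled far from 5 while the robust estimate is not.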

Citations: 0
A precise and efficient exceedance-set algorithm for detecting environmental extremes
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-06 · DOI: 10.1007/s00180-024-01540-y
Thomas Suesse, Alexander Brenning

Inference for predicted exceedance sets is important for various environmental issues, such as detecting environmental anomalies and emergencies with high confidence. A critical part is to construct inner and outer predicted exceedance sets using an algorithm that samples from the predictive distribution. The simple sampling procedure currently in use can lead to misleading conclusions for some locations, because proportions estimated from independent observations carry relatively large standard errors. Instead, we propose an algorithm that calculates the probabilities numerically using the Genz–Bretz algorithm, which is based on quasi-random numbers and leads to more accurate inner and outer sets, as illustrated on rainfall data from the state of Paraná, Brazil.
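To fix ideas, inner and outer predicted exceedance sets can be illustrated with marginal probabilities, for which the normal survival function is available in closed form. The paper's contribution concerns joint probabilities evaluated with the Genz–Bretz quasi-random algorithm; the toy below (hypothetical predictive means, standard deviations, and threshold) only shows how the two sets are defined and why they nest:

```python
import math

def norm_sf(x):
    """P(Z > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Hypothetical Gaussian predictive distributions at five locations
mu = [3.0, 1.0, 2.2, 0.4, 2.8]
sd = [0.5, 0.5, 0.8, 0.3, 1.0]
u, alpha = 2.0, 0.05  # threshold and error level

# Marginal exceedance probabilities P(Z(s) > u) at each location
prob = [norm_sf((u - m) / s) for m, s in zip(mu, sd)]

# Inner set: locations that exceed u with high confidence;
# outer set: locations that cannot be confidently excluded.
inner = {i for i, p in enumerate(prob) if p >= 1 - alpha}
outer = {i for i, p in enumerate(prob) if p >= alpha}
```

By construction the inner set is contained in the outer set; the region between them is where the predictive distribution leaves the exceedance genuinely uncertain.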

Citations: 0
Change point estimation for Gaussian time series data with copula-based Markov chain models
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-05 · DOI: 10.1007/s00180-024-01541-x
Li-Hsien Sun, Yu-Kai Wang, Lien-Hsi Liu, Takeshi Emura, Chi-Yang Chiu

This paper proposes a method for change-point estimation, focusing on detecting structural shifts within time series data. Traditional maximum likelihood estimation (MLE) methods assume either independence or linear dependence via auto-regressive models. To address this limitation, the paper introduces copula-based Markov chain models, offering more flexible dependence modeling. These models treat a Gaussian time series as a Markov chain and utilize copula functions to handle serial dependence. The profile MLE procedure is then employed to estimate the change-point and other model parameters, with the Newton–Raphson algorithm facilitating numerical calculations for the estimators. The proposed approach is evaluated through simulations and real stock return data, considering two distinct periods: the 2008 financial crisis and the COVID-19 pandemic in 2020.
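The profile-likelihood idea can be sketched in a deliberately simplified form: with independent Gaussian errors and a mean shift, profiling out the segment means reduces the change-point search to minimising a residual sum of squares over candidate split points. The copula-based serial dependence and the Newton–Raphson step of the paper are omitted; the data and seed below are synthetic:

```python
import random

random.seed(7)
n, tau_true = 100, 60
series = ([random.gauss(0.0, 0.3) for _ in range(tau_true)]
          + [random.gauss(1.5, 0.3) for _ in range(n - tau_true)])

def profile_rss(x, tau):
    """Residual sum of squares with separate means before/after tau;
    minimising it profiles out the two mean parameters (Gaussian MLE)."""
    a, b = x[:tau], x[tau:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((v - ma) ** 2 for v in a) + sum((v - mb) ** 2 for v in b)

# Grid search over candidate change points (edges excluded)
tau_hat = min(range(5, n - 5), key=lambda t: profile_rss(series, t))
```

The grid search over the discrete change point is the same device the paper uses; the copula model changes what likelihood is evaluated at each candidate, not the profiling structure.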

Citations: 0
INet for network integration
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-04 · DOI: 10.1007/s00180-024-01536-8
Valeria Policastro, Matteo Magnani, Claudia Angelini, Annamaria Carissimo

When collecting several data sets and heterogeneous data types on a given phenomenon of interest, the individual analysis of each data set will provide only a particular view of such phenomenon. Instead, integrating all the data may widen and deepen the results, offering a better view of the entire system. In the context of network integration, we propose the INet algorithm. INet assumes a similar network structure, representing latent variables in different network layers of the same system. Therefore, by combining individual edge weights and topological network structures, INet first constructs a Consensus Network that represents the shared information underneath the different layers to provide a global view of the entities that play a fundamental role in the phenomenon of interest. Then, it derives a Case Specific Network for each layer containing peculiar information of the single data type not present in all the others. We demonstrated good performance with our method through simulated data and detected new insights by analyzing biological and sociological datasets.
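A toy version of the two outputs described above (a Consensus Network from shared strong edges, plus one Case Specific Network per layer) can be written in a few lines. The real INet algorithm also exploits topological structure and data-driven thresholds; the edge weights, the fixed threshold, and the `integrate` function below are illustrative assumptions:

```python
# Edge weights for two layers over the same node set (hypothetical data)
layer1 = {("a", "b"): 0.9, ("b", "c"): 0.7, ("c", "d"): 0.2}
layer2 = {("a", "b"): 0.8, ("b", "c"): 0.6, ("a", "d"): 0.5}

def integrate(layers, threshold=0.4):
    """Toy integration: the consensus network keeps edges that are
    strong in every layer (with averaged weight); each case-specific
    network keeps the strong edges unique to that layer."""
    keys = [set(k for k, w in lay.items() if w >= threshold) for lay in layers]
    shared = set.intersection(*keys)
    consensus = {e: sum(lay[e] for lay in layers) / len(layers) for e in shared}
    specific = [{e: lay[e] for e in ks - shared} for lay, ks in zip(layers, keys)]
    return consensus, specific

consensus, specific = integrate([layer1, layer2])
```

Here the consensus keeps the a-b and b-c edges shared by both layers, while the a-d edge survives only in the second layer's case-specific network.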

Citations: 0
Generalized linear model based on latent factors and supervised components
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-02 · DOI: 10.1007/s00180-024-01544-8
Julien Gibaud, Xavier Bry, Catherine Trottier

In a context of component-based multivariate modeling, we propose to model the residual dependence of the responses. Each response of a response vector is assumed to depend, through a Generalized Linear Model, on a set of explanatory variables. The vast majority of the explanatory variables are partitioned into conceptually homogeneous variable groups, viewed as explanatory themes. Themes are assumed to contain many variables, some of which are highly correlated or even collinear, so generalized linear regression demands dimension reduction and regularization with respect to each theme. Besides these, we consider a small set of “additional” covariates that are not conceptually linked to the themes and demand no regularization. Supervised Component Generalized Linear Regression was proposed to both regularize and reduce the dimension of the explanatory space by searching each theme for an appropriate number of orthogonal components, which both contribute to predicting the responses and capture relevant structural information in the themes. In this paper, we introduce random latent variables (a.k.a. factors) so as to model the covariance matrix of the linear predictors of the responses conditional on the components. To estimate the model, we present an algorithm combining supervised component-based model estimation with factor model estimation. This methodology is tested on simulated data and then applied to an agricultural ecology dataset.
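A heavily simplified sketch of the theme-wise component idea: one covariance-maximising (PLS-type) component is extracted per theme, and the response is then regressed on the components. The GLM setting, multiple components per theme, and the latent factors for residual dependence are all omitted; the data and function names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
# Two "themes": groups of explanatory variables (hypothetical data)
X1 = rng.standard_normal((n, 5))
X2 = rng.standard_normal((n, 4))
y = X1[:, 0] - X2[:, 1] + 0.1 * rng.standard_normal(n)

def supervised_component(X, y):
    """Direction maximising covariance with the response: a PLS-type
    component standing in for the theme-wise supervised components."""
    w = X.T @ y
    w /= np.linalg.norm(w)
    return X @ w

# One component per theme, then an ordinary linear fit on the components
# (the paper fits GLMs and adds latent factors; this is a linear sketch)
C = np.column_stack([supervised_component(X1, y), supervised_component(X2, y)])
coef, *_ = np.linalg.lstsq(C, y, rcond=None)
pred = C @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Regressing on two components instead of nine raw variables is the dimension-reduction half of the story; the paper's factors then model what the components leave unexplained in the response covariance.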

Citations: 0
SpICE: an interpretable method for spatial data
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-08-26 · DOI: 10.1007/s00180-024-01538-6
Natalia da Silva, Ignacio Alvarez-Castro, Leonardo Moreno, Andrés Sosa

Statistical learning methods are widely utilised in tackling complex problems due to their flexibility, good predictive performance and ability to capture complex relationships among variables. Additionally, recently developed automatic workflows have provided a standardised approach for implementing statistical learning methods across various applications. However, these tools highlight one of the main drawbacks of statistical learning: the lack of interpretability of the results. In the past few years, a large amount of research has been focused on methods for interpreting black box models. Having interpretable statistical learning methods is necessary for obtaining a deeper understanding of these models. Specifically in problems in which spatial information is relevant, combining interpretable methods with spatial data can help to provide a better understanding of the problem and an improved interpretation of the results. This paper is focused on the individual conditional expectation plot (ICE-plot), a model-agnostic method for interpreting statistical learning models and combining them with spatial information. An ICE-plot extension is proposed in which spatial information is used as a restriction to define spatial ICE (SpICE) curves. Spatial ICE curves are estimated using real data in the context of an economic problem concerning property valuation in Montevideo, Uruguay. Understanding the key factors that influence property valuation is essential for decision-making, and spatial data play a relevant role in this regard.
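The base object SpICE builds on, the ICE curve, is easy to state: for each observation, all other features are held at their observed values while the feature of interest sweeps a grid, and the fitted model is evaluated along the way. The spatial restriction that defines SpICE curves is not reproduced here; the model and data below are hypothetical:

```python
import numpy as np

def ice_curves(predict, X, feature, grid):
    """One ICE curve per observation: fix all other features at their
    observed values and sweep `feature` over `grid`."""
    curves = np.empty((X.shape[0], len(grid)))
    for j, g in enumerate(grid):
        Xg = X.copy()
        Xg[:, feature] = g
        curves[:, j] = predict(Xg)
    return curves

# Hypothetical fitted black-box model with an interaction in feature 0
def predict(X):
    return X[:, 0] ** 2 + X[:, 0] * X[:, 1]

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 2))
grid = np.linspace(-2, 2, 5)
curves = ice_curves(predict, X, feature=0, grid=grid)
```

Because the model has an interaction, the 20 curves are not parallel, which is exactly the heterogeneity ICE plots reveal and a partial-dependence average would hide.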

Citations: 0
Performance of evaluation metrics for classification in imbalanced data
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-08-24 · DOI: 10.1007/s00180-024-01539-5
Alex de la Cruz Huayanay, Jorge L. Bazán, Cibele M. Russo

This paper investigates the effectiveness of various metrics for selecting an adequate model for binary classification when data are imbalanced. Through an extensive simulation study involving 12 commonly used classification metrics, our findings indicate that the Matthews Correlation Coefficient, G-Mean, and Cohen’s kappa consistently yield favorable performance. Conversely, the area under the curve and Accuracy metrics demonstrate poor performance across all studied scenarios, while the other seven metrics exhibit varying degrees of effectiveness in specific scenarios. Furthermore, we discuss a practical application in the financial area, which confirms the robust performance of these metrics in facilitating model selection among alternative link functions.
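The contrast the paper draws can be reproduced on a single confusion matrix: an "always predict negative" classifier on a 5%-positive sample scores an Accuracy of 0.95, while MCC, G-Mean, and Cohen's kappa all return 0. The formulas below are the standard definitions (with the usual convention that MCC is 0 when its denominator vanishes):

```python
import math

def metrics(tp, fp, fn, tn):
    """Standard confusion-matrix summaries; MCC and G-Mean fall back
    to 0 when a denominator vanishes."""
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    d = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / d if d else 0.0
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    gmean = math.sqrt(sens * spec)
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (acc - pe) / (1 - pe) if pe != 1 else 0.0
    return acc, mcc, gmean, kappa

# Trivial "always negative" classifier on a 5%-positive sample:
# high Accuracy, yet MCC, G-Mean and kappa all correctly report failure.
acc, mcc, gmean, kappa = metrics(tp=0, fp=0, fn=5, tn=95)
```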

Citations: 0
A theory of contrasts for modified Freeman–Tukey statistics and its applications to Tukey’s post-hoc tests for contingency tables
IF 1.3 · CAS Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-08-17 · DOI: 10.1007/s00180-024-01537-7
Yoshio Takane, Eric J. Beh, Rosaria Lombardo

This paper presents a theory of contrasts designed for modified Freeman–Tukey (FT) statistics which are derived through square-root transformations of observed frequencies (proportions) in contingency tables. Some modifications of the original FT statistic are necessary to allow for ANOVA-like exact decompositions of the global goodness of fit (GOF) measures. The square-root transformations have an important effect of stabilizing (equalizing) variances. The theory is then used to derive Tukey’s post-hoc pairwise comparison tests for contingency tables. Tukey’s tests are more restrictive, but are more powerful, than Scheffè’s post-hoc tests developed earlier for the analysis of contingency tables. Throughout this paper, numerical examples are given to illustrate the theory. Modified FT statistics, like other similar statistics for contingency tables, are based on a large-sample rationale. Small Monte-Carlo studies are conducted to investigate asymptotic (and non-asymptotic) behaviors of the proposed statistics.
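As a point of reference, one widely used modified Freeman–Tukey statistic is T = 4 Σᵢⱼ (√nᵢⱼ − √eᵢⱼ)², with expected counts eᵢⱼ from the independence model; the square-root transformation is what stabilises the variances, as the abstract notes. The paper's specific modifications and contrast decompositions are not reproduced here:

```python
import math

def modified_ft(observed):
    """T = 4 * sum_ij (sqrt(n_ij) - sqrt(e_ij))^2, with expected
    counts e_ij = row_i * col_j / n under independence."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    n = sum(rows)
    return 4 * sum(
        (math.sqrt(observed[i][j]) - math.sqrt(rows[i] * cols[j] / n)) ** 2
        for i in range(len(rows)) for j in range(len(cols))
    )

T = modified_ft([[10, 20], [30, 40]])
```

For this table T is close to the Pearson chi-square value (about 0.79), as one would expect for counts not too far from their expectations.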

Citations: 0
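The variance-stabilising square-root transformation behind Freeman–Tukey statistics is easy to sketch numerically. The paper's *modified* FT statistic is not reproduced here; the function below is a minimal sketch of the classical variance-stabilised form T = 4 Σ (√O − √E)² applied to a test of independence in an r × c contingency table (asymptotically χ² with (r−1)(c−1) degrees of freedom). The toy table is illustrative only.

```python
from math import sqrt

def freeman_tukey_independence(table):
    """Classical Freeman-Tukey statistic for independence in an r x c
    contingency table, T = 4 * sum((sqrt(O) - sqrt(E))**2), which is
    asymptotically chi-squared with (r-1)*(c-1) degrees of freedom."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = float(sum(rows))
    T = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / n  # expected count under independence
            T += (sqrt(o) - sqrt(e)) ** 2
    T *= 4.0
    df = (len(rows) - 1) * (len(cols) - 1)
    return T, df

# toy 2 x 3 table of observed counts
T, df = freeman_tukey_independence([[20, 30, 25], [15, 35, 30]])
print(round(T, 4), df)
```

On the square-root scale each cell contributes a term with approximately constant variance, which is what makes the ANOVA-like exact decompositions described in the abstract possible.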
A novel nonconvex, smooth-at-origin penalty for statistical learning 用于统计学习的新型非凸、平滑原点罚则
IF 1.3, CAS Tier 4 (Mathematics), Q3 STATISTICS & PROBABILITY Pub Date : 2024-08-07 DOI: 10.1007/s00180-024-01525-x
Majnu John, Sujit Vettam, Yihren Wu

Nonconvex penalties are used for regularization in high-dimensional statistical learning algorithms primarily because they yield unbiased or nearly unbiased estimators of the model parameters. Nonconvex penalties in the literature, such as SCAD, MCP, Laplace and arctan, have a singularity at the origin, which also makes them useful for variable selection. However, in several high-dimensional frameworks, such as deep learning, variable selection is less of a concern. In this paper, we present a nonconvex penalty that is smooth at the origin. The paper includes asymptotic results for ordinary least squares estimators regularized with the new penalty function, showing an asymptotic bias that vanishes exponentially fast. We also conducted simulations to better understand the finite-sample properties, and carried out an empirical study employing a deep neural network architecture on three datasets and a convolutional neural network on four datasets. The empirical study based on artificial neural networks showed better performance for the new regularization approach on five of the seven datasets.

Citations: 0
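The distinction the abstract draws — a singularity at the origin versus smoothness there — can be checked numerically. The sketch below compares the arctan penalty (one of the singular-at-the-origin penalties the abstract lists) with a *hypothetical* smooth-at-the-origin nonconvex penalty, λx²/(1 + x²); the latter is a stand-in for illustration, not the penalty actually proposed in the paper.

```python
from math import atan

def p_arctan(x, lam=1.0, gamma=1.0):
    # arctan-type penalty: behaves like lam*|x|/gamma near 0,
    # so it is not differentiable at the origin (a kink)
    return lam * atan(abs(x) / gamma)

def p_smooth(x, lam=1.0):
    # bounded and nonconvex overall, but quadratic (hence smooth) near 0;
    # a hypothetical stand-in, not the paper's penalty
    return lam * x * x / (1.0 + x * x)

h = 1e-6
# one-sided difference quotients at the origin
slope_arctan = (p_arctan(h) - p_arctan(0.0)) / h  # nonzero: drives sparsity
slope_smooth = (p_smooth(h) - p_smooth(0.0)) / h  # near zero: no thresholding
print(slope_arctan, slope_smooth)
```

The nonzero slope at zero is what lets singular penalties threshold coefficients exactly to zero (variable selection); a smooth-at-the-origin penalty gives that up, which is acceptable in settings such as deep learning where selection is not the goal.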
Copyright © 2023 Book学术 All rights reserved.
京公网安备 11010802042870号 京ICP备2023020795号-1