
Computational Statistics: Latest Publications

Bayes estimation of ratio of scale-like parameters for inverse Gaussian distributions and applications to classification
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-19 · DOI: 10.1007/s00180-024-01554-6
Ankur Chakraborty, Nabakumar Jana

We consider two inverse Gaussian populations with a common mean but different scale-like parameters, where all parameters are unknown. We construct noninformative priors for the ratio of the scale-like parameters to derive matching priors of different orders. Reference priors are proposed for different groups of parameters. The Bayes estimators of the common mean and ratio of the scale-like parameters are also derived. We propose confidence intervals of the conditional error rate in classifying an observation into inverse Gaussian distributions. A generalized variable-based confidence interval and the highest posterior density credible intervals for the error rate are computed. We estimate parameters of the mixture of these inverse Gaussian distributions and obtain estimates of the expected probability of correct classification. An intensive simulation study has been carried out to compare the estimators and expected probability of correct classification. Real data-based examples are given to show the practicality and effectiveness of the estimators.
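
The abstract concerns likelihood-based assignment of an observation to one of two inverse Gaussian populations sharing a common mean. As a minimal Python sketch (not the authors' Bayes procedure; all parameter values are hypothetical), the snippet below classifies by the larger plug-in likelihood and estimates the conditional error rate by Monte Carlo.

```python
# Minimal sketch: likelihood-ratio classification between two inverse Gaussian
# populations with a common mean mu and different shape ("scale-like")
# parameters, plus a Monte Carlo estimate of the conditional error rate.
# Parameter values are hypothetical; this is not the paper's Bayes estimator.
import numpy as np

def ig_logpdf(x, mu, lam):
    """Log density of the inverse Gaussian IG(mean=mu, shape=lam)."""
    return 0.5 * np.log(lam / (2 * np.pi * x**3)) - lam * (x - mu)**2 / (2 * mu**2 * x)

mu, lam1, lam2 = 2.0, 1.0, 6.0                  # hypothetical common mean and shapes
rng = np.random.default_rng(1)

# Draw from population 1 (numpy's Wald distribution is the inverse Gaussian).
x1 = rng.wald(mean=mu, scale=lam1, size=100_000)

# Classify each draw by the larger log-likelihood and count misclassifications.
misclassified = ig_logpdf(x1, mu, lam1) < ig_logpdf(x1, mu, lam2)
print(f"estimated P(misclassify | population 1) = {misclassified.mean():.3f}")
```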

Citations: 0
Multivariate approaches to investigate the home and away behavior of football teams playing football matches
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-17 · DOI: 10.1007/s00180-024-01553-7
Antonello D’Ambra, Pietro Amenta, Antonio Lucadamo

Compared to other European competitions, participation in the Uefa Champions League is a real “bargain” for football clubs because of the hefty bonuses awarded based on performance during the group qualification phase. Successful performance in football depends on several multidimensional factors, and analyzing the main ones remains challenging. In performance studies, little attention has been paid to teams’ behavior when playing at home and away. Our study combines statistical techniques to develop a procedure for examining teams’ performance. Several considerations make the 2022–2023 Serie A season particularly interesting to analyze with our approach. Except for Napoli, all the teams showed different home and away behaviors with respect to the results obtained at the season’s end. Ball possession and corners positively influenced the points scored in both home and away games, although with different impacts. The precision indicator was not an essential variable. The procedure also highlighted the negative roles played by offsides and by yellow and red cards.
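
As a purely illustrative sketch of the kind of home/away comparison described here (synthetic data, ordinary least squares only, not the authors' multivariate procedure), one can fit separate regressions of points scored on ball possession and corners for home and away matches:

```python
# Hypothetical sketch: separate least-squares fits of points scored on ball
# possession and corners, for home and away matches (synthetic data only;
# not the authors' procedure or data).
import numpy as np

rng = np.random.default_rng(0)

def fit_ols(X, y):
    """Return OLS coefficients with an intercept column added."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, possession, corners]

n = 200
for venue in ("home", "away"):
    possession = rng.uniform(35, 65, n)          # % possession (synthetic)
    corners = rng.poisson(5, n)                  # corners per match (synthetic)
    effect = 0.05 if venue == "home" else 0.03   # hypothetical venue-specific effect
    points = effect * possession + 0.1 * corners + rng.normal(0, 1, n)
    beta = fit_ols(np.column_stack([possession, corners]), points)
    print(venue, "coefficients (intercept, possession, corners):", np.round(beta, 3))
```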

Citations: 0
Kendall correlations and radar charts to include goals for and goals against in soccer rankings
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-17 · DOI: 10.1007/s00180-024-01542-w
Roy Cerqueti, Raffaele Mattera, Valerio Ficcadenti

This paper deals with the challenging theme of how sporting teams and athletes are ranked in sports competitions. Starting from the paradigmatic case of soccer, we advance a new method for ranking teams in official national championships through computational statistics methods based on Kendall correlations and radar charts. In particular, we consider the goals for and against the teams in individual matches as a further source of score assignment beyond the usual win-tie-lose trichotomy. Our approach overcomes some biases in the scoring rules currently employed. The methodological proposal is tested on the relevant case of the Italian “Serie A” championships played during 1930–2023.
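
A minimal sketch of the ingredients named in the abstract, with hypothetical season totals: build an alternative score that also uses goals for and against, and quantify its agreement with the usual points ranking via Kendall's tau (scipy's kendalltau). The combination rule below is illustrative only and is not the paper's method.

```python
# Compare the win-tie-lose points ranking with an alternative score that also
# uses goals for and against, and measure their agreement with Kendall's tau.
# Season totals and the 0.1 weighting are hypothetical illustrations.
import numpy as np
from scipy.stats import kendalltau

teams = ["A", "B", "C", "D", "E"]
points        = np.array([78, 74, 70, 52, 40])   # win-tie-lose points
goals_for     = np.array([77, 81, 60, 48, 35])
goals_against = np.array([31, 42, 38, 55, 60])

# Alternative score: points plus a goal-difference adjustment (illustrative only).
alt_score = points + 0.1 * (goals_for - goals_against)

tau, p_value = kendalltau(points, alt_score)
print(f"Kendall tau between the two rankings: {tau:.3f} (p = {p_value:.3f})")
```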

Citations: 0
Bayesian adaptive lasso quantile regression with non-ignorable missing responses
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-16 · DOI: 10.1007/s00180-024-01546-6
Ranran Chen, Mai Dao, Keying Ye, Min Wang

In this paper, we develop a fully Bayesian adaptive lasso quantile regression model to analyze data with non-ignorable missing responses, which frequently occur in various fields of study. Specifically, we employ a logistic regression model to deal with missing data of non-ignorable mechanism. By using the asymmetric Laplace working likelihood for the data and specifying Laplace priors for the regression coefficients, our proposed method extends the Bayesian lasso framework by imposing specific penalization parameters on each regression coefficient, enhancing our estimation and variable selection capability. Furthermore, we embrace the normal-exponential mixture representation of the asymmetric Laplace distribution and the Student-t approximation of the logistic regression model to develop a simple and efficient Gibbs sampling algorithm for generating posterior samples and making statistical inferences. The finite-sample performance of the proposed algorithm is investigated through various simulation studies and a real-data example.
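
A minimal sketch of the normal-exponential mixture representation of the asymmetric Laplace distribution mentioned in the abstract, which is what makes Gibbs sampling convenient in this class of models. The snippet assumes unit scale and only verifies the key property that the p-th quantile of the ALD sits at its location parameter; the paper's full model, with adaptive lasso priors and the missing-data mechanism, is not reproduced here.

```python
# Mixture representation of the (standard) asymmetric Laplace distribution:
# with z ~ Exp(1) and u ~ N(0,1),
#   y = mu + theta*z + tau*sqrt(z)*u,
#   theta = (1 - 2p) / (p(1-p)),  tau^2 = 2 / (p(1-p)),
# y follows ALD(mu, scale=1, skewness=p), so P(y <= mu) = p.
import numpy as np

def rald(mu, p, size, rng):
    """Draw from ALD(mu, scale=1, skewness=p) via the normal-exponential mixture."""
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = np.sqrt(2 / (p * (1 - p)))
    z = rng.exponential(1.0, size)
    u = rng.standard_normal(size)
    return mu + theta * z + tau * np.sqrt(z) * u

rng = np.random.default_rng(42)
p, mu = 0.25, 1.5                      # hypothetical quantile level and location
y = rald(mu, p, 200_000, rng)
print(f"P(y <= mu) = {np.mean(y <= mu):.3f}  (should be close to p = {p})")
```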

Citations: 0
Statistical visualisation of tidy and geospatial data in R via kernel smoothing methods in the eks package
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-14 · DOI: 10.1007/s00180-024-01543-9
Tarn Duong

Kernel smoothers are essential tools for data analysis due to their ability to convey complex statistical information through concise graphical visualisations. Their inclusion in the base distribution and in the many user-contributed add-on packages of the R statistical analysis environment caters well to many practitioners. There remain, however, some important gaps for specialised data, most notably tidy and geospatial data. The proposed eks package fills these gaps. In addition to kernel density estimation, the package also caters for more complex data analysis situations, such as density derivative estimation, density-based classification (supervised learning) and mean shift clustering (unsupervised learning). We illustrate with experimental data how to obtain and interpret the statistical visualisations for these kernel smoothing methods.
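
eks is an R package, so the snippet below is not its API. As a language-neutral illustration of the underlying idea, here is a kernel density estimate on synthetic 2-D data using scipy's gaussian_kde, with density levels chosen in the spirit of the probability contours that eks visualises.

```python
# Kernel density estimation on synthetic 2-D coordinates, plus approximate
# probability-contour levels (the level below which a given share of the
# sample's density values fall). Illustrative only; not the eks API.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Synthetic 2-D point pattern (e.g. projected geospatial coordinates).
xy = np.vstack([rng.normal(0, 1, 500), rng.normal(0, 2, 500)])

kde = gaussian_kde(xy)                      # bandwidth chosen by Scott's rule
grid_x, grid_y = np.mgrid[-4:4:100j, -7:7:100j]
density = kde(np.vstack([grid_x.ravel(), grid_y.ravel()])).reshape(grid_x.shape)

# Approximate 50% and 95% contour levels from the density evaluated at the data.
at_data = kde(xy)
levels = np.quantile(at_data, [0.5, 0.05])
print("approximate contour levels (50%, 95%):", np.round(levels, 4))
```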

Citations: 0
Using the Krylov subspace formulation to improve regularisation and interpretation in partial least squares regression
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-12 · DOI: 10.1007/s00180-024-01545-7
Tommy Löfstedt

Partial least squares regression (PLS-R) has been an important regression method in the life sciences and many other fields for decades. However, PLS-R is typically solved using an opaque algorithmic approach, rather than through an optimisation formulation and procedure. There is a clear optimisation formulation of the PLS-R problem based on a Krylov subspace formulation, but it is only rarely considered. The popularity of PLS-R is attributed to the ability to interpret the data through the model components, but those components are not available when solving the PLS-R problem using the Krylov subspace formulation. We therefore highlight a simple reformulation of the PLS-R problem using the Krylov subspace formulation as a promising modelling framework for PLS-R, and illustrate one of its main benefits: it allows arbitrary penalties on the regression coefficients in the PLS-R model. Further, we propose an approach to estimate, for the solution found through the Krylov subspace formulation, the PLS-R model components that we would have obtained had we been able to use the common algorithms for estimating the PLS-R model. We illustrate the utility of the proposed method on simulated and real data.
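
A minimal sketch of the Krylov subspace formulation the paper builds on: the A-component PLS-R coefficient vector is the least-squares solution constrained to the Krylov subspace spanned by s, Cs, ..., C^(A-1)s, with s = X'y and C = X'X. Only this plain formulation is shown; the penalties and the component reconstruction proposed in the paper are not included.

```python
# PLS-R coefficients via the Krylov subspace formulation: minimise ||y - Xb||
# over b restricted to span{s, Cs, ..., C^(A-1)s}, using an orthonormalised
# basis for numerical stability. Data below are synthetic.
import numpy as np

def pls_krylov(X, y, n_components):
    """PLS-R regression coefficients from an orthonormalised Krylov basis."""
    Xc = X - X.mean(axis=0)           # centre predictors and response
    yc = y - y.mean()
    C = Xc.T @ Xc
    s = Xc.T @ yc
    K = np.column_stack([np.linalg.matrix_power(C, j) @ s for j in range(n_components)])
    Q, _ = np.linalg.qr(K)            # orthonormal basis of the Krylov subspace
    a = np.linalg.solve(Q.T @ C @ Q, Q.T @ s)
    return Q @ a                      # coefficients for the centred data

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=50)
print(np.round(pls_krylov(X, y, n_components=3), 3))
```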

Citations: 0
Robust matrix factor analysis method with adaptive parameter adjustment using Cauchy weighting
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-12 · DOI: 10.1007/s00180-024-01548-4
Junchen Li

In recent years, high-dimensional matrix factor models have been widely applied in various fields. However, few existing methods handle heavy-tailed data effectively. To address this problem, we introduce a smooth Cauchy loss function and establish an optimization objective through norm minimization, deriving a Cauchy version of the weighted iterative estimation method. Unlike the Huber-loss weighted estimation method, the weight calculation in this method is a smooth function rather than a piecewise function. It also accounts for the need to update the parameters in the Cauchy loss function at each iteration during estimation. Ultimately, we propose a weighted estimation method with adaptive parameter adjustment. We then analyze the theoretical properties of the method, proving that it has a fast convergence rate. In simulations, our method demonstrates significant advantages, and it can therefore serve as a better alternative to other existing estimation methods. Finally, we analyze a dataset of regional population movements between cities, demonstrating that the proposed method offers estimates with excellent interpretability compared to other methods.
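
A minimal sketch of Cauchy weighting in an iteratively reweighted scheme, shown here for a simple location estimate rather than the paper's matrix factor model: the smooth Cauchy loss rho(r) = (c^2/2) log(1 + (r/c)^2) yields weights w(r) = 1/(1 + (r/c)^2) that decay smoothly for large residuals. The tuning constant and the fixed (non-adaptive) update below are assumptions made for illustration only.

```python
# Cauchy-weighted iteratively reweighted estimation of a 1-D location,
# illustrating how the smooth Cauchy weight downweights heavy-tailed outliers.
# The constant c = 2.385 is a conventional choice (assumption), and this is
# not the paper's matrix factor estimator or its adaptive parameter update.
import numpy as np

def cauchy_weights(residuals, c=2.385):
    """Smooth Cauchy IRLS weights w(r) = 1 / (1 + (r/c)^2)."""
    return 1.0 / (1.0 + (residuals / c) ** 2)

def irls_location(x, c=2.385, n_iter=50):
    """Robust location estimate via Cauchy-weighted means."""
    mu = np.median(x)
    for _ in range(n_iter):
        w = cauchy_weights(x - mu, c)
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 200), rng.standard_cauchy(20) * 10])  # heavy-tailed contamination
print(f"sample mean = {x.mean():.2f},  Cauchy-IRLS location = {irls_location(x):.2f}")
```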

Citations: 0
A precise and efficient exceedance-set algorithm for detecting environmental extremes
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-06 · DOI: 10.1007/s00180-024-01540-y
Thomas Suesse, Alexander Brenning

Inference for predicted exceedance sets is important for various environmental issues, such as detecting environmental anomalies and emergencies with high confidence. A critical part is to construct inner and outer predicted exceedance sets using an algorithm that samples from the predictive distribution. The simple sampling procedure currently in use can lead to misleading conclusions for some locations because of the relatively large standard errors that arise when proportions are estimated from independent observations. Instead, we propose an algorithm that calculates probabilities numerically using the Genz–Bretz algorithm, which is based on quasi-random numbers and leads to more accurate inner and outer sets, as illustrated on rainfall data from the state of Paraná, Brazil.
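
A minimal sketch of the joint-exceedance computation at the core of the problem: the probability that a Gaussian predictive vector exceeds a threshold at every location, evaluated as a multivariate normal CDF. scipy's multivariate_normal.cdf is itself based on Genz's quasi-Monte Carlo routine, in the same family as the Genz–Bretz algorithm the paper uses; the mean, covariance and threshold below are hypothetical.

```python
# Joint exceedance probability P(X_i > t for all i) for a Gaussian predictive
# vector, computed via a multivariate normal CDF: P(X > t) = P(-X < -t),
# and -X ~ N(-mean, cov). All numbers are hypothetical.
import numpy as np
from scipy.stats import multivariate_normal

mean = np.array([3.0, 2.5, 2.8])                 # predictive means at 3 locations
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.5],
                [0.3, 0.5, 1.0]])                # predictive covariance
threshold = 2.0

p_exceed_all = multivariate_normal(mean=-mean, cov=cov).cdf(-np.full(3, threshold))
print(f"P(all three locations exceed {threshold}) = {p_exceed_all:.4f}")
```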

Citations: 0
Change point estimation for Gaussian time series data with copula-based Markov chain models
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-05 · DOI: 10.1007/s00180-024-01541-x
Li-Hsien Sun, Yu-Kai Wang, Lien-Hsi Liu, Takeshi Emura, Chi-Yang Chiu

This paper proposes a method for change-point estimation, focusing on detecting structural shifts within time series data. Traditional maximum likelihood estimation (MLE) methods assume either independence or linear dependence via auto-regressive models. To address this limitation, the paper introduces copula-based Markov chain models, offering more flexible dependence modeling. These models treat a Gaussian time series as a Markov chain and utilize copula functions to handle serial dependence. The profile MLE procedure is then employed to estimate the change-point and other model parameters, with the Newton–Raphson algorithm facilitating numerical calculations for the estimators. The proposed approach is evaluated through simulations and real stock return data, considering two distinct periods: the 2008 financial crisis and the COVID-19 pandemic in 2020.
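
A minimal sketch of the profile idea described in the abstract: a Gaussian series whose serial dependence is modelled by a Gaussian copula on consecutive observations, with a grid search over candidate change points in the marginal mean. Plug-in estimates and a fixed dependence parameter stand in for the paper's Newton–Raphson profile MLE, and the simulated data are hypothetical.

```python
# Change-point search under a copula-based Markov chain: marginal Gaussian
# densities times Gaussian-copula densities for consecutive pairs, profiled
# over candidate change points in the mean. Simplified illustration only.
import numpy as np
from scipy.stats import norm

def gaussian_copula_logdens(u, v, rho):
    """Log density of the bivariate Gaussian copula at (u, v)."""
    x, y = norm.ppf(u), norm.ppf(v)
    return (-0.5 * np.log(1 - rho**2)
            + (2 * rho * x * y - rho**2 * (x**2 + y**2)) / (2 * (1 - rho**2)))

def profile_loglik(y, tau, rho):
    """Log-likelihood with a mean shift at index tau, using plug-in marginal estimates."""
    n = len(y)
    mu = np.where(np.arange(n) < tau, y[:tau].mean(), y[tau:].mean())
    sd = (y - mu).std()
    u = norm.cdf(y, loc=mu, scale=sd)
    marginal = norm.logpdf(y, loc=mu, scale=sd).sum()
    serial = gaussian_copula_logdens(u[:-1], u[1:], rho).sum()
    return marginal + serial

rng = np.random.default_rng(5)
n, true_tau = 200, 120
e = rng.normal(size=n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):                     # AR(1)-type serial dependence
    y[t] = 0.5 * y[t - 1] + e[t]
y[true_tau:] += 1.5                       # mean shift after the change point

candidates = np.arange(20, n - 20)
ll = [profile_loglik(y, tau, rho=0.5) for tau in candidates]
print("estimated change point:", candidates[int(np.argmax(ll))], "(true:", true_tau, ")")
```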

Citations: 0
INet for network integration
IF 1.3 · Tier 4 (Mathematics) · Q3 STATISTICS & PROBABILITY · Pub Date: 2024-09-04 · DOI: 10.1007/s00180-024-01536-8
Valeria Policastro, Matteo Magnani, Claudia Angelini, Annamaria Carissimo

When collecting several data sets and heterogeneous data types on a given phenomenon of interest, the individual analysis of each data set provides only a partial view of that phenomenon. Integrating all the data instead may widen and deepen the results, offering a better view of the entire system. In the context of network integration, we propose the INet algorithm. INet assumes a similar network structure, representing latent variables, in different network layers of the same system. By combining individual edge weights and topological network structures, INet first constructs a Consensus Network that represents the information shared across the different layers, providing a global view of the entities that play a fundamental role in the phenomenon of interest. It then derives a Case Specific Network for each layer, containing the peculiar information of that single data type that is not present in the others. We demonstrate the good performance of our method on simulated data and uncover new insights by analyzing biological and sociological datasets.
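
As a deliberately naive illustration of the consensus idea (this is not the INet algorithm, whose combination rule also uses topological structure), one can keep an edge in a consensus graph when its weight is high in every layer and flag strong edges that fall outside that consensus in each individual layer:

```python
# Naive multilayer "consensus" sketch on hypothetical weight matrices:
# an edge enters the consensus graph when its weight exceeds a threshold in
# every layer; strong edges outside the consensus are reported per layer.
# This is an illustration of the general idea only, not INet's construction.
import numpy as np

rng = np.random.default_rng(11)
n_nodes, n_layers, threshold = 6, 3, 0.5

# Hypothetical symmetric edge-weight matrices, one per layer.
layers = []
for _ in range(n_layers):
    w = rng.uniform(0, 1, (n_nodes, n_nodes))
    w = np.triu(w, 1) + np.triu(w, 1).T
    layers.append(w)

stacked = np.stack(layers)                       # shape (layers, nodes, nodes)
consensus = stacked.min(axis=0) > threshold      # edge strong in every layer
specific = [(lay > threshold) & ~consensus for lay in layers]

print("consensus edges:", int(consensus.sum() // 2))
for k, sp in enumerate(specific):
    print(f"strong non-consensus edges in layer {k}:", int(sp.sum() // 2))
```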

Citations: 0