Pub Date: 2024-07-02 | DOI: 10.1007/s11222-024-10443-3
Ning Ning, Edward Ionides
Stochastic models for collections of interacting populations have crucial roles in many scientific fields, such as epidemiology, ecology, performance engineering, and queueing theory, to name a few. However, the standard approach to extending an ordinary differential equation model to a Markov chain does not have sufficient flexibility in the mean-variance relationship to match data. To address this, we develop new approaches using Dirichlet noise to construct collections of independent or dependent noise processes. This permits the modeling of high-frequency variation in transition rates both within and between the populations under study. Our theory is developed in a general framework of time-inhomogeneous Markov processes equipped with a general graphical structure. We demonstrate our approach on a widely analyzed measles dataset, adding Dirichlet noise to a classical Susceptible–Exposed–Infected–Recovered model. Our methodology shows improved statistical fit, measured by log-likelihood, and provides new insights into the dynamics of this biological system.
"Systemic infinitesimal over-dispersion on graphical dynamic models" (Statistics and Computing)
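The key device — infinitesimally over-dispersed transition counts — can be sketched by multiplying each rate of a stochastic SEIR chain by mean-one gamma noise. This is a generic stand-in for the paper's Dirichlet construction, and every parameter value below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def seir_step(state, beta, sigma, gamma, dt, od=0.1):
    """One Euler step of a stochastic SEIR chain in which each
    transition rate is multiplied by an independent mean-one gamma
    noise with variance `od`, producing over-dispersed increments."""
    S, E, I, R = state
    N = S + E + I + R
    xi = rng.gamma(shape=1.0 / od, scale=od, size=3)  # mean-1 multipliers
    p_SE = 1.0 - np.exp(-beta * I / N * xi[0] * dt)
    p_EI = 1.0 - np.exp(-sigma * xi[1] * dt)
    p_IR = 1.0 - np.exp(-gamma * xi[2] * dt)
    d_SE = rng.binomial(S, p_SE)   # S -> E infections
    d_EI = rng.binomial(E, p_EI)   # E -> I onsets
    d_IR = rng.binomial(I, p_IR)   # I -> R recoveries
    return (S - d_SE, E + d_SE - d_EI, I + d_EI - d_IR, R + d_IR)

state = (990, 5, 5, 0)
for _ in range(100):
    state = seir_step(state, beta=1.5, sigma=0.5, gamma=0.3, dt=0.1)
```

Because the noise enters the rates rather than the counts, the variance of the increments can be tuned independently of their mean, which is exactly the flexibility the standard Markov chain extension lacks.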
Pub Date: 2024-06-27 | DOI: 10.1007/s11222-024-10451-3
Douglas O. Cardoso, João Domingos Gomes da Silva Junior, Carla Silva Oliveira, Celso Marques, Laura Silva de Assis
Spectral clustering techniques depend on the eigenstructure of a similarity matrix to assign data points to clusters, so that points within the same cluster exhibit high similarity relative to points in different clusters. This work aimed to develop a spectral method that is competitive with clustering algorithms representing the current state of the art. The investigation conceived a novel spectral clustering method, as well as five policies that guide its execution, based on spectral graph theory and embodying hierarchical clustering principles. Computational experiments comparing the proposed method with six state-of-the-art algorithms were undertaken to evaluate the clustering methods under scrutiny. The assessment was performed using two evaluation metrics: the adjusted Rand index and modularity. The obtained results furnish compelling evidence that the proposed method is competitive and possesses distinctive properties compared to those elucidated in the existing literature. This suggests that our approach stands as a viable alternative, offering a robust choice within the spectrum of available same-purpose tools.
"Greedy recursive spectral bisection for modularity-bound hierarchical divisive community detection" (Statistics and Computing)
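The building block of recursive spectral bisection — splitting a graph by the sign pattern of the Fiedler vector — can be sketched as follows (a textbook version, not the paper's method or its five execution policies):

```python
import numpy as np

def spectral_bisect(A):
    """Bisect a graph with the Fiedler vector: the eigenvector of the
    unnormalised Laplacian L = D - A for its second-smallest eigenvalue;
    the sign pattern yields the two communities."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vecs[:, 1] >= 0             # boolean community labels

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 2-3.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = spectral_bisect(A)
```

Applying this split recursively to each part, and deciding when to stop (e.g. by a modularity criterion), yields a hierarchical divisive clustering in the spirit the title describes.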
Pub Date: 2024-06-27 | DOI: 10.1007/s11222-024-10453-1
Weitao Hu, Weiping Zhang
The asymmetric Laplace and asymmetric Huberised-type (AH) distributions commonly used in Bayesian quantile regression lack a changeable mode, a diminishing influence of outliers, and asymmetry under median regression. To enhance robustness and flexibility, we propose a new generalized AH distribution, achieved through a hierarchical mixture representation, leading to a flexible Bayesian Huberised quantile regression framework. Because the model has many parameters, we develop an efficient Markov chain Monte Carlo procedure based on a Metropolis-within-Gibbs sampling algorithm. The robustness and flexibility of the new distribution are examined through intensive simulation studies and application to two real data sets.
"Flexible Bayesian quantile regression based on the generalized asymmetric Huberised-type distribution" (Statistics and Computing)
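The connection underlying this family of quantile-regression likelihoods can be illustrated with the check loss, whose minimiser is the sample quantile (a generic illustration, not the paper's generalized AH distribution):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0}); minimising
    it is equivalent to maximum likelihood under an asymmetric Laplace
    error model, the usual starting point for Bayesian quantile regression."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(1)
y = rng.normal(size=10_000)
tau = 0.8
grid = np.linspace(-3.0, 3.0, 601)
risk = [check_loss(y - m, tau).mean() for m in grid]
m_hat = grid[int(np.argmin(risk))]   # close to the 0.8 sample quantile
```

Huberised and generalized variants modify this loss away from its kink to temper the influence of outliers while preserving the quantile interpretation.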
Pub Date: 2024-06-26 | DOI: 10.1007/s11222-024-10454-0
Sarah Elizabeth Heaps, Ian Hyla Jermyn
Factor models are widely used for dimension reduction in the analysis of multivariate data. This is achieved through decomposition of a p × p covariance matrix into the sum of two components. Through a latent factor representation, these components can be interpreted as a diagonal matrix of idiosyncratic variances and a shared variation matrix, that is, the product of a p × k factor loadings matrix and its transpose. If k ≪ p, this defines a parsimonious factorisation of the covariance matrix. Historically, little attention has been paid to incorporating prior information in Bayesian analyses using factor models where, at best, the prior for the factor loadings is order invariant. In this work, a class of structured priors is developed that can encode ideas of dependence structure about the shared variation matrix. The construction allows data-informed shrinkage towards sensible parametric structures while also facilitating inference over the number of factors. Using an unconstrained reparameterisation of stationary vector autoregressions, the methodology is extended to stationary dynamic factor models. For computational inference, parameter-expanded Markov chain Monte Carlo samplers are proposed, including an efficient adaptive Gibbs sampler. Two substantive applications showcase the scope of the methodology and its inferential benefits.
"Structured prior distributions for the covariance matrix in latent factor models" (Statistics and Computing)
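The covariance decomposition at the heart of the factor model can be written out directly (a generic sketch with arbitrary illustrative values, not the paper's structured priors):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 10, 2                                   # k much smaller than p
Lambda = rng.normal(size=(p, k))               # factor loadings
Psi = np.diag(rng.uniform(0.5, 1.5, size=p))   # idiosyncratic variances

# Covariance = shared variation (low rank) + diagonal noise.
Sigma = Lambda @ Lambda.T + Psi

# Parsimony: p*k + p free parameters instead of p*(p+1)/2.
n_factor = p * k + p          # 30
n_full = p * (p + 1) // 2     # 55
```

The gap between the two parameter counts widens rapidly with p, which is why priors on the shared variation matrix Lambda @ Lambda.T, rather than on the full covariance, are the natural place to encode structural information.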
Pub Date: 2024-06-25 | DOI: 10.1007/s11222-024-10458-w
Diego I. Gallardo, Marcelo Bourguignon, José S. Romeo
We present a novel frailty model for clustered survival data. In particular, we consider the Birnbaum–Saunders (BS) distribution for the frailty terms, with a new parameterization directly in terms of the variance of the frailty distribution. This allows, among other things, the estimated frailty terms to be compared across traditional models, such as the gamma frailty model. Some mathematical properties of the new model are studied, including the conditional distribution of frailties among the survivors, the frailty of individuals dying at time t, and Kendall's τ measure. Furthermore, an explicit form for the derivatives of the Laplace transform of the BS distribution is found using di Bruno's formula. Parametric, non-parametric and semiparametric versions of the BS frailty model are studied. We use a simple Expectation-Maximization (EM) algorithm to estimate the model parameters and evaluate its performance under different censoring proportions via a Monte Carlo simulation study. We also show that the BS frailty model is competitive with the gamma and weighted Lindley frailty models under misspecification. We illustrate our methodology using a real data set.
"Birnbaum–Saunders frailty regression models for clustered survival data" (Statistics and Computing)
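The mechanism a frailty term captures — shared risk inducing within-cluster dependence — can be sketched with a gamma frailty as a familiar stand-in for the paper's Birnbaum–Saunders law; the cluster sizes and variances below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_clustered(n_clusters=5000, size=2, base_rate=1.0, var=0.2):
    """Clustered survival times with a shared frailty per cluster.
    A gamma frailty (mean 1, variance `var`) is used here as a familiar
    stand-in; the paper's construction uses the Birnbaum-Saunders law."""
    w = rng.gamma(shape=1.0 / var, scale=var, size=n_clusters)  # mean 1
    # Given the frailty w, times are exponential with hazard w * base_rate.
    return rng.exponential(1.0 / (w[:, None] * base_rate),
                           size=(n_clusters, size))

t = simulate_clustered()
# The shared frailty induces positive dependence within clusters.
corr = np.corrcoef(t[:, 0], t[:, 1])[0, 1]
```

Parameterizing the frailty law directly by its variance, as the paper does for the BS case, makes the strength of this within-cluster dependence directly interpretable and comparable across frailty families.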
Pub Date: 2024-06-25 | DOI: 10.1007/s11222-024-10440-6
Rémy Abergel, Olivier Bouaziz, Grégory Nuel
The Adaptive Ridge Algorithm is an iterative algorithm designed for variable selection. It is also known as the Iteratively Reweighted Least-Squares Algorithm in the Compressed Sensing and Sparse Signals Recovery communities. It can also be interpreted as an optimization algorithm dedicated to the minimization of possibly nonconvex ℓ^q penalized energies (with 0 < q < 2). In the literature, this algorithm can be derived using various mathematical approaches, namely Half Quadratic Minimization, Majorization-Minimization, Alternating Minimization or Local Approximations. In this work, we show how the Adaptive Ridge Algorithm can be simply derived and analyzed from a single equation, corresponding to a variational reformulation of the ℓ^q penalty. We describe in detail how the Adaptive Ridge Algorithm can be numerically implemented, and we perform a thorough experimental study of its parameters. We also show how the variational formulation of the ℓ^q penalty, combined with modern duality principles, can be used to design an interesting variant of the Adaptive Ridge Algorithm dedicated to the minimization of quadratic functions over (nonconvex) ℓ^q balls.
"A review on the Adaptive-Ridge Algorithm with several extensions" (Statistics and Computing)
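The reweighting scheme can be sketched in a few lines: each iteration freezes the weights w_i = (x_i² + ε)^(q/2 − 1) at the previous iterate and solves a weighted ridge problem. This is a generic IRLS sketch of the ℓ^q majorization, not the paper's exact algorithm or its duality-based variant:

```python
import numpy as np

def adaptive_ridge(A, b, lam=0.1, q=1.0, eps=1e-8, n_iter=100):
    """Adaptive ridge / IRLS for min ||Ax - b||^2 + lam * sum_i |x_i|^q,
    0 < q <= 2: each pass solves a weighted ridge problem with weights
    w_i = (x_i^2 + eps)^(q/2 - 1) frozen at the previous iterate."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = (x ** 2 + eps) ** (q / 2.0 - 1.0)
        x = np.linalg.solve(A.T @ A + lam * (q / 2.0) * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))
b = rng.normal(size=30)
x2 = adaptive_ridge(A, b, lam=0.5, q=2.0)  # q = 2 reduces to plain ridge
```

For q < 2 the weights grow as a coefficient approaches zero, so small coefficients are shrunk ever harder — the mechanism by which the quadratic surrogate mimics a sparsity-inducing penalty.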
Pub Date: 2024-06-25 | DOI: 10.1007/s11222-024-10456-y
Wisdom Aselisewine, Suvra Pal
Cure rate models have been thoroughly investigated across various domains, encompassing medicine, reliability, and finance. The merging of machine learning (ML) with cure models is emerging as a promising strategy to improve predictive accuracy and gain profound insights into the underlying mechanisms influencing the probability of cure. The current body of literature has explored the benefits of incorporating a single ML algorithm with cure models. However, a comprehensive study comparing the performances of various ML algorithms in this context has been notably absent. This paper seeks to bridge this gap. Specifically, we focus on the well-known mixture cure model and examine the incorporation of five distinct ML algorithms: extreme gradient boosting, neural networks, support vector machines, random forests, and decision trees. To bolster the robustness of our comparison, we also include cure models with logistic and spline-based regression. For parameter estimation, we formulate an expectation maximization algorithm. A comprehensive simulation study is conducted across diverse scenarios to compare the models based on the accuracy and precision of estimates for different quantities of interest, along with the predictive accuracy of cure. The results from both the simulation study and the analysis of real cutaneous melanoma data indicate that incorporating ML models into the cure model contributes beneficially to ongoing efforts to improve the accuracy of cure rate estimation.
"Enhancing cure rate analysis through integration of machine learning models: a comparative study" (Statistics and Computing)
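The mixture cure model decomposes population survival into a cured fraction and the survival of the uncured; the ML algorithms enter through the cured-fraction component. A minimal sketch, with a logistic link and exponential survival chosen purely for illustration:

```python
import numpy as np

def population_survival(t, x, coef, base_rate=0.5):
    """Mixture cure model: S_pop(t|x) = pi(x) + (1 - pi(x)) * S_u(t|x).
    pi(x) is the cure probability, modelled here with a logistic link
    (the component the paper replaces with ML classifiers); S_u is the
    survival of the uncured, exponential purely for illustration."""
    pi = 1.0 / (1.0 + np.exp(-(x @ coef)))   # cured fraction
    s_u = np.exp(-base_rate * t)             # uncured survival
    return pi + (1.0 - pi) * s_u

x = np.array([[0.0, 1.0]])       # one illustrative covariate vector
coef = np.array([0.5, -0.2])     # illustrative coefficients
# As t grows, population survival plateaus at the cure fraction pi(x).
s_inf = population_survival(1e6, x, coef)
```

The plateau at pi(x) is what distinguishes cure models from standard survival models, whose survival functions decay to zero.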
Pub Date: 2024-06-24 | DOI: 10.1007/s11222-024-10452-2
Tianming Bai, Aretha L. Teckentrup, Konstantinos C. Zygalakis
This work is concerned with the use of Gaussian surrogate models for Bayesian inverse problems associated with linear partial differential equations. A particular focus is on the regime where only a small amount of training data is available. In this regime, the type of Gaussian prior used is of critical importance for how well the surrogate model performs in terms of Bayesian inversion. We extend the framework of Raissi et al. (2017) to construct PDE-informed Gaussian priors that we then use to construct different approximate posteriors. A number of numerical experiments illustrate the superiority of the PDE-informed Gaussian priors over more traditional priors.
"Gaussian processes for Bayesian inverse problems associated with linear partial differential equations" (Statistics and Computing)
Pub Date: 2024-06-21 | DOI: 10.1007/s11222-024-10447-z
Patrick Zietkiewicz, Ioannis Kosmidis
The widespread use of maximum Jeffreys'-prior penalized likelihood in binomial-response generalized linear models, and in logistic regression in particular, is supported by the results of Kosmidis and Firth (Biometrika 108:71–82, 2021. https://doi.org/10.1093/biomet/asaa052), who show that the resulting estimates are always finite-valued, even in cases where the maximum likelihood estimates are not, which is a practical issue regardless of the size of the data set. In logistic regression, the implied adjusted score equations are formally bias-reducing in asymptotic frameworks with a fixed number of parameters and appear to deliver a substantial reduction in the persistent bias of the maximum likelihood estimator in high-dimensional settings where the number of parameters grows asymptotically as a proportion of the number of observations. In this work, we develop and present two new variants of iteratively reweighted least squares (IWLS) for estimating generalized linear models with adjusted score equations for mean bias reduction and maximization of the likelihood penalized by a positive power of the Jeffreys-prior penalty, which eliminate the requirement of storing O(n) quantities in memory and can operate with data sets that exceed computer memory or even hard drive capacity. We achieve that through incremental QR decompositions, which enable IWLS iterations to access only data chunks of predetermined size. Both procedures can also be readily adapted to fit generalized linear models when distinct parts of the data are stored across different sites and, due to privacy concerns, cannot be fully transferred across sites. We assess the procedures through a real-data application with millions of observations.
"Bounded-memory adjusted scores estimation in generalized linear models with large data sets" (Statistics and Computing)
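The bounded-memory idea rests on incremental QR: the triangular factor of a least-squares problem can be updated one data chunk at a time, so no O(n) quantity is ever held in memory. A sketch for plain least squares (the paper applies the same device inside adjusted-score IWLS):

```python
import numpy as np

def chunked_least_squares(chunks):
    """Least-squares over an iterable of (X, y) chunks via incremental QR:
    only a (p+1) x (p+1) triangular factor is kept in memory, never the
    full n x p design matrix."""
    R = None
    for X, y in chunks:
        block = np.column_stack([X, y])   # carry the response along
        stacked = block if R is None else np.vstack([R, block])
        R = np.linalg.qr(stacked, mode="r")
    p = R.shape[1] - 1
    # R = [[R1, Q1'y], [0, *]]; the coefficients solve R1 beta = Q1'y.
    return np.linalg.solve(R[:p, :p], R[:p, p])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
chunks = [(X[i:i + 25], y[i:i + 25]) for i in range(0, 100, 25)]
beta_hat = chunked_least_squares(chunks)
```

Because each weighted least-squares step of IWLS has this form, the same chunked update lets the whole GLM fit run over data that never fits in memory, and chunks held at different sites need only contribute their triangular summaries.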
Pub Date: 2024-06-19 | DOI: 10.1007/s11222-024-10448-y
Silius M. Vandeskog, Sara Martino, Raphaël Huser
We develop a comprehensive methodological workflow for Bayesian modelling of high-dimensional spatial extremes that lets us describe both weakening extremal dependence at increasing levels and changes in the type of extremal dependence class as a function of the distance between locations. This is achieved with a latent Gaussian version of the spatial conditional extremes model that allows for computationally efficient inference with R-INLA. Inference is made more robust using a post hoc adjustment method that accounts for possible model misspecification. This added robustness makes it possible to extract more information from the available data during inference using a composite likelihood. The developed methodology is applied to the modelling of extreme hourly precipitation from high-resolution radar data in Norway. Inference is performed quickly, and the resulting model fit successfully captures the main trends in the extremal dependence structure of the data. The post hoc adjustment is found to further improve model performance.
"An efficient workflow for modelling high-dimensional spatial extremes" (Statistics and Computing)