A correction regarding [Latz 2021, Stat. Comput. 31, 39].
Stochastic models for collections of interacting populations play crucial roles in many scientific fields, including epidemiology, ecology, performance engineering, and queueing theory. However, the standard approach to extending an ordinary differential equation model to a Markov chain lacks the flexibility in the mean-variance relationship needed to match data. To address this, we develop new approaches using Dirichlet noise to construct collections of independent or dependent noise processes. This permits the modeling of high-frequency variation in transition rates both within and between the populations under study. Our theory is developed in a general framework of time-inhomogeneous Markov processes equipped with a general graphical structure. We demonstrate our approach on a widely analyzed measles dataset, adding Dirichlet noise to a classical Susceptible–Exposed–Infected–Recovered model. Our methodology shows an improved statistical fit, as measured by log-likelihood, and provides new insights into the dynamics of this biological system.
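To make the kind of construction described above concrete, here is a minimal sketch of a discrete-time stochastic SEIR simulation in which each per-step exit probability is perturbed by a Dirichlet draw centred on its nominal value. The Euler-multinomial discretisation, the rates, and the concentration parameter are illustrative assumptions and do not reproduce the paper's noise construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_prob(p, concentration, eps=1e-6):
    """Perturb an exit probability p with a Dirichlet draw centred on (p, 1 - p)."""
    p = min(max(p, eps), 1.0 - eps)
    return rng.dirichlet(concentration * np.array([p, 1.0 - p]))[0]

def seir_step(state, beta, sigma, gamma, dt, concentration=50.0):
    """One Euler-multinomial step of a stochastic SEIR chain with noisy exit probabilities."""
    S, E, I, R = state
    N = S + E + I + R
    p_SE = 1.0 - np.exp(-beta * I / N * dt)   # infection
    p_EI = 1.0 - np.exp(-sigma * dt)          # becoming infectious
    p_IR = 1.0 - np.exp(-gamma * dt)          # recovery
    dSE = rng.binomial(S, noisy_prob(p_SE, concentration))
    dEI = rng.binomial(E, noisy_prob(p_EI, concentration))
    dIR = rng.binomial(I, noisy_prob(p_IR, concentration))
    return (S - dSE, E + dSE - dEI, I + dEI - dIR, R + dIR)

state = (760, 0, 3, 0)   # toy initial condition
for _ in range(200):
    state = seir_step(state, beta=1.5, sigma=0.5, gamma=0.5, dt=0.1)
print(state)
```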
Spectral clustering techniques rely on the eigenstructure of a similarity matrix to assign data points to clusters, so that points within the same cluster exhibit high similarity relative to points in different clusters. This work develops a spectral clustering method that can be compared with state-of-the-art clustering algorithms. We propose a novel spectral clustering method, together with five policies that guide its execution, grounded in spectral graph theory and embodying hierarchical clustering principles. Computational experiments compare the proposed method with six state-of-the-art algorithms, using two evaluation metrics: the adjusted Rand index and modularity. The results indicate that the proposed method is competitive and possesses distinctive properties compared with those described in the existing literature, suggesting that it is a viable and robust alternative among same-purpose tools.
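For reference, the following is a minimal sketch of classical normalized spectral clustering (Gaussian similarity matrix, symmetric normalized Laplacian, bottom eigenvectors, k-means), which is the pipeline the abstract builds on; the proposed method and its five execution policies are not reproduced here, and the kernel bandwidth `sigma` is an assumed tuning parameter.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances

def spectral_clustering(X, k, sigma=1.0):
    """Classical normalized spectral clustering: similarity -> Laplacian -> eigenvectors -> k-means."""
    D2 = pairwise_distances(X, metric="sqeuclidean")
    W = np.exp(-D2 / (2.0 * sigma**2))          # Gaussian similarity matrix
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)             # eigenvalues in ascending order
    U = vecs[:, :k]                             # k smallest eigenvectors as embedding
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
labels = spectral_clustering(X, k=3, sigma=2.0)
print(np.bincount(labels))
```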
The asymmetric Laplace and asymmetric Huberised-type (AH) distributions commonly used in Bayesian quantile regression lack a changeable mode, a diminishing influence of outliers, and asymmetry under median regression. To enhance robustness and flexibility, we propose a new generalized AH distribution, constructed through a hierarchical mixture representation, which leads to a flexible Bayesian Huberised quantile regression framework. Because the model involves many parameters, we develop an efficient Markov chain Monte Carlo procedure based on a Metropolis-within-Gibbs sampling algorithm. The robustness and flexibility of the new distribution are examined through intensive simulation studies and applications to two real data sets.
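For orientation, the sketch below writes down the asymmetric Laplace working likelihood that underlies standard Bayesian quantile regression and pairs it with a bare-bones random-walk Metropolis sampler under a flat prior; the generalized AH distribution, its hierarchical mixture representation, and the Metropolis-within-Gibbs scheme of the abstract are not reproduced, and `sigma`, the step size, and the iteration count are illustrative choices.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def ald_loglik(beta, y, X, tau, sigma=1.0):
    """Log-likelihood of the asymmetric Laplace working model for the tau-th quantile."""
    r = y - X @ beta
    return len(y) * np.log(tau * (1.0 - tau) / sigma) - check_loss(r, tau).sum() / sigma

def rw_metropolis(y, X, tau, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis for the regression coefficients under a flat prior."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    ll = ald_loglik(beta, y, X, tau)
    draws = np.empty((n_iter, X.shape[1]))
    for i in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.shape)
        ll_prop = ald_loglik(prop, y, X, tau)
        if np.log(rng.uniform()) < ll_prop - ll:    # Metropolis accept/reject
            beta, ll = prop, ll_prop
        draws[i] = beta
    return draws

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(200)
print(rw_metropolis(y, X, tau=0.5)[2500:].mean(axis=0))   # posterior mean after burn-in
```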
Factor models are widely used for dimension reduction in the analysis of multivariate data. This is achieved through decomposition of a \(p \times p\) covariance matrix into the sum of two components. Through a latent factor representation, they can be interpreted as a diagonal matrix of idiosyncratic variances and a shared variation matrix, that is, the product of a \(p \times k\) factor loadings matrix and its transpose. If \(k \ll p\), this defines a parsimonious factorisation of the covariance matrix. Historically, little attention has been paid to incorporating prior information in Bayesian analyses using factor models where, at best, the prior for the factor loadings is order invariant. In this work, a class of structured priors is developed that can encode ideas of dependence structure about the shared variation matrix. The construction allows data-informed shrinkage towards sensible parametric structures while also facilitating inference over the number of factors. Using an unconstrained reparameterisation of stationary vector autoregressions, the methodology is extended to stationary dynamic factor models. For computational inference, parameter-expanded Markov chain Monte Carlo samplers are proposed, including an efficient adaptive Gibbs sampler. Two substantive applications showcase the scope of the methodology and its inferential benefits.
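The decomposition described above can be written out directly. The sketch below simulates data from a generic latent factor model with \(k \ll p\) and checks that the implied covariance is the sum of the shared variation matrix and the diagonal idiosyncratic matrix; the dimensions and scales are arbitrary, and no structured prior or MCMC is involved.

```python
import numpy as np

def factor_covariance(Lambda, psi):
    """Sigma = Lambda @ Lambda.T + diag(psi): shared variation plus idiosyncratic variances."""
    return Lambda @ Lambda.T + np.diag(psi)

rng = np.random.default_rng(1)
p, k, n = 10, 2, 2000                          # k << p gives the parsimonious factorisation
Lambda = rng.normal(scale=0.8, size=(p, k))    # factor loadings
psi = rng.uniform(0.2, 1.0, size=p)            # idiosyncratic variances
Sigma = factor_covariance(Lambda, psi)

# equivalent latent-factor generative form: y_i = Lambda f_i + e_i
f = rng.standard_normal((n, k))
e = rng.standard_normal((n, p)) * np.sqrt(psi)
Y = f @ Lambda.T + e
print(np.abs(np.cov(Y, rowvar=False) - Sigma).max())   # small for large n
```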
We present a novel frailty model for clustered survival data. In particular, we consider the Birnbaum–Saunders (BS) distribution for the frailty terms, with a new parameterization given directly in terms of the variance of the frailty distribution. This allows, among other things, the estimated frailty terms to be compared with those of traditional models, such as the gamma frailty model. Some mathematical properties of the new model are studied, including the conditional distribution of frailties among the survivors, the frailty of individuals dying at time t, and Kendall's \(\tau\) measure. Furthermore, an explicit form for the derivatives of the Laplace transform of the BS distribution is obtained using Faà di Bruno's formula. Parametric, non-parametric, and semiparametric versions of the BS frailty model are studied. We use a simple Expectation-Maximization (EM) algorithm to estimate the model parameters and evaluate its performance under different censoring proportions via a Monte Carlo simulation study. We also show that the BS frailty model is competitive with the gamma and weighted Lindley frailty models under misspecification. We illustrate our methodology using a real data set.
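To illustrate the role played by the frailty Laplace transform mentioned above, the sketch below uses the gamma frailty model (one of the benchmarks in the abstract), whose Laplace transform and Kendall's \(\tau\) have simple closed forms; the corresponding BS frailty quantities are the paper's contribution and are not reproduced here, and the Weibull baseline is an assumed example.

```python
import numpy as np

def gamma_frailty_laplace(s, theta):
    """Laplace transform of a gamma frailty with mean 1 and variance theta."""
    return (1.0 + theta * s) ** (-1.0 / theta)

def marginal_survival(t, x_beta, theta, baseline_cumhaz):
    """Marginal survival under a proportional-hazards frailty model:
    S(t | x) = L(H0(t) * exp(x'beta)), with L the frailty Laplace transform."""
    return gamma_frailty_laplace(baseline_cumhaz(t) * np.exp(x_beta), theta)

def kendalls_tau_gamma(theta):
    """Kendall's tau induced by a shared gamma frailty: theta / (theta + 2)."""
    return theta / (theta + 2.0)

H0 = lambda t: (t / 2.0) ** 1.5     # assumed Weibull baseline cumulative hazard
print(marginal_survival(1.0, x_beta=0.3, theta=0.5, baseline_cumhaz=H0))
print(kendalls_tau_gamma(0.5))
```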
The Adaptive Ridge Algorithm is an iterative algorithm designed for variable selection. It is also known as the Iteratively Reweighted Least-Squares Algorithm in the Compressed Sensing and Sparse Signal Recovery communities. It can also be interpreted as an optimization algorithm dedicated to the minimization of possibly nonconvex \(\ell^q\) penalized energies (with \(0<q<2\)). In the literature, this algorithm can be derived using various mathematical approaches, namely Half Quadratic Minimization, Majorization-Minimization, Alternating Minimization, or Local Approximations. In this work, we show how the Adaptive Ridge Algorithm can be simply derived and analyzed from a single equation, corresponding to a variational reformulation of the \(\ell^q\) penalty. We describe in detail how the Adaptive Ridge Algorithm can be numerically implemented and perform a thorough experimental study of its parameters. We also show how the variational formulation of the \(\ell^q\) penalty, combined with modern duality principles, can be used to design an interesting variant of the Adaptive Ridge Algorithm dedicated to the minimization of quadratic functions over (nonconvex) \(\ell^q\) balls.
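A minimal sketch of the reweighting idea, for \(\ell^q\)-penalized least squares: each pass solves a weighted ridge problem whose weights are computed from the current iterate via a quadratic majorization of \(|\beta_j|^q\), with a small constant smoothing the weights near zero. The initialisation, smoothing constant, and stopping rule are illustrative choices rather than the exact variant analyzed in the paper.

```python
import numpy as np

def adaptive_ridge(X, y, lam, q=1.0, eps=1e-6, n_iter=200, tol=1e-8):
    """Adaptive Ridge / IRLS sketch for
        min_beta 0.5 * ||y - X beta||^2 + lam * sum_j |beta_j|^q,   0 < q < 2.
    Each iteration solves (X'X + lam * q * diag(w)) beta = X'y with
    w_j = (beta_j^2 + eps)^((q - 2) / 2) taken from the previous iterate."""
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.solve(XtX + lam * np.eye(X.shape[1]), Xty)   # ridge start
    for _ in range(n_iter):
        w = (beta**2 + eps) ** ((q - 2.0) / 2.0)
        beta_new = np.linalg.solve(XtX + lam * q * np.diag(w), Xty)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(3)
X = rng.standard_normal((80, 10))
beta0 = np.zeros(10); beta0[:3] = [2.0, -1.5, 1.0]
y = X @ beta0 + 0.1 * rng.standard_normal(80)
print(np.round(adaptive_ridge(X, y, lam=1.0, q=1.0), 3))   # near-sparse estimate
```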
Cure rate models have been thoroughly investigated across various domains, including medicine, reliability, and finance. Merging machine learning (ML) with cure models is emerging as a promising strategy to improve predictive accuracy and gain deeper insight into the mechanisms underlying the probability of cure. The existing literature has explored the benefits of incorporating a single ML algorithm into cure models, but a comprehensive study comparing the performance of various ML algorithms in this context is lacking. This paper seeks to bridge that gap. Specifically, we focus on the well-known mixture cure model and examine the incorporation of five distinct ML algorithms: extreme gradient boosting, neural networks, support vector machines, random forests, and decision trees. To strengthen the comparison, we also include cure models with logistic and spline-based regression. For parameter estimation, we formulate an expectation-maximization (EM) algorithm. A comprehensive simulation study across diverse scenarios compares the models in terms of the accuracy and precision of estimates of different quantities of interest, along with the predictive accuracy of cure. The results from both the simulation study and the analysis of real cutaneous melanoma data indicate that incorporating ML models into the cure model contributes meaningfully to ongoing efforts to improve the accuracy of cure rate estimation.
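For concreteness, the sketch below writes out the E-step of a standard mixture cure model, which produces the posterior uncured-probability weights that any incidence/latency component (ML-based or regression-based) is refitted against within EM. The array-based interface, with cure probabilities and uncured survival supplied directly, is a hypothetical simplification of the full algorithm.

```python
import numpy as np

def e_step_cure_weights(delta, cure_prob, surv_uncured):
    """E-step weights of the mixture cure model
        S_pop(t | x) = pi(x) + (1 - pi(x)) * S_u(t | x),
    where pi(x) is the cure probability and S_u the survival of the uncured.
    Subjects with an observed event (delta == 1) are uncured with probability 1;
    censored subjects get the posterior probability of being uncured."""
    pi = np.asarray(cure_prob, dtype=float)
    Su = np.asarray(surv_uncured, dtype=float)
    w_censored = (1.0 - pi) * Su / (pi + (1.0 - pi) * Su)
    return np.where(np.asarray(delta) == 1, 1.0, w_censored)

# toy usage: one event and two censored observations
delta = np.array([1, 0, 0])
print(e_step_cure_weights(delta, cure_prob=[0.3, 0.5, 0.2], surv_uncured=[0.8, 0.6, 0.9]))
```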
This work concerns the use of Gaussian surrogate models for Bayesian inverse problems associated with linear partial differential equations. A particular focus is the regime in which only a small amount of training data is available. In this regime, the choice of Gaussian prior is critical to how well the surrogate model performs for Bayesian inversion. We extend the framework of Raissi et al. (2017) to construct PDE-informed Gaussian priors, which we then use to construct different approximate posteriors. A number of numerical experiments illustrate the superiority of the PDE-informed Gaussian priors over more traditional priors.
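The sketch below illustrates the generic surrogate idea only: a Gaussian process is fitted to a handful of forward-model evaluations and its predictive mean is plugged into an unnormalized log-posterior. The one-dimensional forward map, the RBF kernel, and the noise level are placeholders; the PDE-informed Gaussian priors extending Raissi et al. (2017) are not constructed here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def forward(theta):
    """Hypothetical scalar forward map standing in for a PDE solve."""
    return np.sin(3.0 * theta) + 0.5 * theta

rng = np.random.default_rng(0)
theta_train = rng.uniform(-2.0, 2.0, size=(8, 1))        # small training set
y_train = forward(theta_train).ravel()

# Generic RBF prior for the surrogate; the paper replaces this with a PDE-informed prior.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8).fit(theta_train, y_train)

# Surrogate-based unnormalized log-posterior for data d = G(theta*) + noise, standard normal prior.
data, noise_sd = forward(0.7) + 0.05, 0.1
def log_post(theta):
    mean = gp.predict(np.atleast_2d(theta))[0]
    return -0.5 * ((data - mean) / noise_sd) ** 2 - 0.5 * theta**2

print(log_post(0.7), log_post(-1.5))
```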
The widespread use of maximum Jeffreys'-prior penalized likelihood in binomial-response generalized linear models, and in logistic regression in particular, is supported by the results of Kosmidis and Firth (Biometrika 108:71–82, 2021. https://doi.org/10.1093/biomet/asaa052), who show that the resulting estimates are always finite-valued, even in cases where the maximum likelihood estimates are not, which is a practical issue regardless of the size of the data set. In logistic regression, the implied adjusted score equations are formally bias-reducing in asymptotic frameworks with a fixed number of parameters and appear to deliver a substantial reduction in the persistent bias of the maximum likelihood estimator in high-dimensional settings where the number of parameters grows asymptotically as a proportion of the number of observations. In this work, we develop two new variants of iteratively reweighted least squares (IWLS) for estimating generalized linear models with adjusted score equations for mean bias reduction, and for maximization of the likelihood penalized by a positive power of the Jeffreys-prior penalty, which eliminate the requirement of storing O(n) quantities in memory and can operate with data sets that exceed computer memory or even hard-drive capacity. We achieve this through incremental QR decompositions, which give the IWLS iterations access only to data chunks of predetermined size. Both procedures can also be readily adapted to fit generalized linear models when distinct parts of the data are stored across different sites and, due to privacy concerns, cannot be fully transferred across sites. We assess the procedures through a real-data application with millions of observations.
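The memory-bounded building block described above can be sketched as a single weighted least-squares solve (the core of one IWLS iteration) performed chunk by chunk with incremental QR decompositions, so that only \(p \times p\) state is retained in memory. The chunk interface below and the wrapping into bias-reducing or Jeffreys-penalized IWLS are assumptions made for illustration.

```python
import numpy as np

def incremental_qr_wls(chunks, p):
    """Weighted least squares via incremental QR: for each chunk, stack the current
    triangular factor R on top of the new (weighted) rows and re-triangularize, so the
    retained state is only the p x p factor R and a length-p vector t."""
    R = np.zeros((0, p))
    t = np.zeros(0)
    for X_c, z_c, w_c in chunks:
        sw = np.sqrt(w_c)
        A = np.vstack([R, sw[:, None] * X_c])
        b = np.concatenate([t, sw * z_c])
        Q, R = np.linalg.qr(A, mode="reduced")
        t = Q.T @ b
    return np.linalg.solve(R, t)     # coefficients of this (weighted) IWLS step

# toy usage: two chunks reproduce an ordinary weighted least-squares fit
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))
z = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)
w = np.ones(100)
chunks = [(X[:60], z[:60], w[:60]), (X[60:], z[60:], w[60:])]
print(incremental_qr_wls(chunks, p=3))
```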

