Fused lasso nearly-isotonic signal approximation in general dimensions
Pub Date: 2024-05-22 | DOI: 10.1007/s11222-024-10432-6
Vladimir Pastukhov
In this paper, we introduce and study the fused lasso nearly-isotonic signal approximation, which is a combination of fused lasso and generalized nearly-isotonic regression. We show how these three estimators relate to each other and derive the solution to the general problem. Our estimator is computationally feasible and provides a trade-off between monotonicity, block sparsity, and goodness-of-fit. Next, we prove that, in the one-dimensional case, fusion and near-isotonisation can be applied interchangeably, and that this step-wise procedure gives the solution to the original optimization problem. This property of the estimator is important because it provides a direct way to construct a path solution when one of the penalization parameters is fixed. Finally, we derive an unbiased estimator of the degrees of freedom of the estimator.
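For orientation, a minimal sketch of the one-dimensional objective, with squared-error loss, a fusion penalty weighted by $\lambda_F$ and a near-isotonic penalty weighted by $\lambda_I$ (the notation here is chosen for illustration rather than taken from the paper):

\[
\hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^n} \;\; \frac{1}{2}\sum_{i=1}^{n}(y_i - \beta_i)^2 \;+\; \lambda_F \sum_{i=1}^{n-1} \lvert \beta_i - \beta_{i+1} \rvert \;+\; \lambda_I \sum_{i=1}^{n-1} \bigl(\beta_i - \beta_{i+1}\bigr)_{+},
\]

where $(x)_+ = \max(x, 0)$; setting $\lambda_F = 0$ recovers nearly-isotonic regression, while $\lambda_I = 0$ recovers the fused lasso signal approximator.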

Robust multiple imputation with GAM
Pub Date: 2024-05-22 | DOI: 10.1007/s11222-024-10429-1
Matthias Templ

Bayesian cross-validation by parallel Markov chain Monte Carlo
Pub Date: 2024-05-21 | DOI: 10.1007/s11222-024-10404-w
Alex Cooper, Aki Vehtari, Catherine Forbes, Dan Simpson, Lauren Kennedy
Brute-force cross-validation (CV) is a method for predictive assessment and model selection that is general and applicable to a wide range of Bayesian models, but such naive CV approaches are often too computationally costly for interactive modeling workflows, especially when inference relies on Markov chain Monte Carlo (MCMC). We propose overcoming this limitation using massively parallel MCMC. Using accelerator hardware such as graphics processing units, our approach can be about as fast (in wall clock time) as a single full-data model fit. Parallel CV is flexible because it can easily exploit a wide range of data partitioning schemes, such as those designed for non-exchangeable data, and it can accommodate a range of scoring rules. We propose MCMC diagnostics, including a summary of MCMC mixing based on the popular potential scale reduction factor ($\widehat{R}$) and MCMC effective sample size ($\widehat{\mathrm{ESS}}$) measures. We also describe a method for determining whether an $\widehat{R}$ diagnostic indicates approximate stationarity of the chains, which may be of more general interest for applications beyond parallel CV. Finally, we show that parallel CV and its diagnostics can be implemented with online algorithms, allowing parallel CV to scale up to very large blocking designs on memory-constrained computing accelerators.
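As a point of reference, a minimal sketch of the standard split-$\widehat{R}$ computation for a batch of parallel chains, written with plain NumPy; this illustrates the conventional diagnostic, not the authors' specific summary for parallel CV:

```python
import numpy as np

def split_rhat(draws):
    """Split-Rhat for draws with shape (n_chains, n_iter).

    Each chain is split in half, so m = 2 * n_chains sub-chains of
    length n = n_iter // 2 enter the usual between/within comparison.
    """
    n_chains, n_iter = draws.shape
    half = n_iter // 2
    sub = draws[:, : 2 * half].reshape(2 * n_chains, half)

    chain_means = sub.mean(axis=1)
    chain_vars = sub.var(axis=1, ddof=1)

    w = chain_vars.mean()                        # within-chain variance
    b = half * chain_means.var(ddof=1)           # between-chain variance
    var_hat = (half - 1) / half * w + b / half   # pooled variance estimate
    return np.sqrt(var_hat / w)

# Example: four chains of length 1000 targeting N(0, 1)
rng = np.random.default_rng(0)
draws = rng.normal(size=(4, 1000))
print(split_rhat(draws))   # close to 1 when the chains mix well
```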

Spike and slab Bayesian sparse principal component analysis
Pub Date: 2024-05-13 | DOI: 10.1007/s11222-024-10430-8
Yu-Chien Bo Ning, Ning Ning
Sparse principal component analysis (SPCA) is a popular tool for dimensionality reduction in high-dimensional data. However, there is still a lack of theoretically justified Bayesian SPCA methods that scale well computationally. One of the major challenges in Bayesian SPCA is selecting an appropriate prior for the loadings matrix, given that principal components are mutually orthogonal. We propose a novel parameter-expanded coordinate ascent variational inference (PX-CAVI) algorithm, which uses a spike and slab prior and incorporates parameter expansion to cope with the orthogonality constraint. In addition to comparing against two popular SPCA approaches, we introduce the PX-EM algorithm as an EM analogue of the PX-CAVI algorithm. Through extensive numerical simulations, we demonstrate that the PX-CAVI algorithm outperforms these alternatives. We also study the posterior contraction rate of the variational posterior, providing a novel contribution to the existing literature. The PX-CAVI algorithm is then applied to a lung cancer gene expression dataset. The R package VBsparsePCA, which implements the algorithm, is available on the Comprehensive R Archive Network (CRAN).
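As background, one common form of spike-and-slab prior on the rows $\beta_j$ of the loadings matrix is (a generic sketch with illustrative notation; the paper's prior additionally uses parameter expansion to handle the orthogonality constraint):

\[
\beta_j \mid \gamma_j \;\sim\; \gamma_j\, g(\beta_j) \;+\; (1-\gamma_j)\, \delta_0(\beta_j), \qquad \gamma_j \sim \mathrm{Bernoulli}(\kappa),
\]

where $g$ is a continuous slab density, $\delta_0$ is a point mass at zero, and $\kappa$ controls the expected proportion of nonzero rows.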

A general model-checking procedure for semiparametric accelerated failure time models
Pub Date: 2024-05-07 | DOI: 10.1007/s11222-024-10431-7
Dongrak Choi, Woojung Bae, Jun Yan, Sangwook Kang
We propose a set of goodness-of-fit tests for the semiparametric accelerated failure time (AFT) model, including an omnibus test, a link function test, and a functional form test. The tests are derived from a multi-parameter cumulative sum process shown to converge asymptotically to a zero-mean Gaussian process. Evaluation is based on an asymptotically equivalent perturbed version, which enables both graphical and numerical assessment of the assumed AFT model. Empirical p-values are obtained using a Kolmogorov-type supremum test, which provides a reliable way to estimate the significance of both the proposed un-standardized and standardized test statistics. The procedure is illustrated using the rank-based estimator but is general in the sense that it applies directly to other popular estimators, such as the induced smoothed rank-based estimator or the least-squares estimator, that satisfy certain properties. Extensive simulation experiments demonstrate that the proposed methods maintain the Type I error rate and detect departures from the assumed AFT model at practical sample sizes and censoring rates. Furthermore, the approach is applied to the Primary Biliary Cirrhosis data, a widely studied dataset in survival analysis, providing further evidence of the practical usefulness of the methods in real-world scenarios. To make the methods more accessible to researchers, we have implemented them in the R package afttest, which is publicly available on the Comprehensive R Archive Network.
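For context, the semiparametric AFT model referred to above is conventionally written as (standard formulation; notation ours):

\[
\log T_i \;=\; X_i^{\top}\beta + \varepsilon_i, \qquad i = 1, \dots, n,
\]

where $T_i$ is the failure time, $X_i$ the covariate vector, and the errors $\varepsilon_i$ are i.i.d. with an unspecified distribution; the omnibus, link function, and functional form tests above probe departures from this specification.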

A flexible Bayesian tool for CoDa mixed models: logistic-normal distribution with Dirichlet covariance
Pub Date: 2024-04-16 | DOI: 10.1007/s11222-024-10427-3
Joaquín Martínez-Minaya, Haavard Rue
Compositional Data Analysis (CoDa) has gained popularity in recent years. This type of data consists of values from disjoint categories that sum to a constant. Both Dirichlet regression and logistic-normal regression have become popular CoDa analysis methods. However, fitting this kind of multivariate model presents challenges, especially when structured random effects, such as temporal or spatial effects, are included. To overcome these challenges, we propose the logistic-normal Dirichlet Model (LNDM). We seamlessly incorporate this approach into the R-INLA package, facilitating model fitting and prediction within the framework of Latent Gaussian Models. Moreover, we explore model-selection metrics for CoDa in R-INLA, namely the Deviance Information Criterion, the Watanabe-Akaike information criterion, and the cross-validated conditional predictive ordinate. Illustrating the LNDM through two simulated examples and an ecological case study on Arabidopsis thaliana in the Iberian Peninsula, we underscore its potential as an effective tool for managing CoDa and large CoDa databases.
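As a reminder of the building block, compositional observations lie on the simplex and the logistic-normal model works through a log-ratio transform (standard construction; notation ours):

\[
y = (y_1, \dots, y_D), \quad y_d > 0, \quad \sum_{d=1}^{D} y_d = 1, \qquad z_d = \log\frac{y_d}{y_D}, \;\; d = 1, \dots, D-1, \qquad z \sim \mathcal{N}_{D-1}(\mu, \Sigma),
\]

so that structured (e.g., temporal or spatial) random effects can be placed on the latent Gaussian mean within the R-INLA framework.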

A communication-efficient, online changepoint detection method for monitoring distributed sensor networks
Pub Date: 2024-04-14 | DOI: 10.1007/s11222-024-10428-2
Ziyang Yang, Idris A. Eckley, Paul Fearnhead
We consider the challenge of efficiently detecting changes within a network of sensors, where we also need to minimise communication between the sensors and the cloud. We propose an online, communication-efficient method to detect such changes. The procedure performs likelihood ratio tests at each time point, with two thresholds chosen so that one filters out unimportant test statistics and the other is used to make decisions based on the aggregated test statistics. We provide asymptotic theory concerning consistency and the asymptotic distribution when there are no changes. Simulation results suggest that our method can achieve performance similar to the idealised setting, in which there are no constraints on communication between sensors, while substantially reducing transmission costs.
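To make the filter-then-aggregate idea concrete, a toy sketch for a mean change in Gaussian sensor streams: each sensor computes a likelihood-ratio-type statistic, transmits it only if it clears a per-sensor threshold c1, and the centre declares a change when the sum of transmitted statistics exceeds c2. The statistic and thresholds below are illustrative, not the paper's exact procedure:

```python
import numpy as np

def sensor_stat(x):
    """Max standardized mean-shift statistic over candidate changepoints (unit variance assumed)."""
    n = len(x)
    cums = np.cumsum(x)
    stats = []
    for k in range(1, n):
        pre = cums[k - 1] / k                      # mean before the candidate changepoint
        post = (cums[-1] - cums[k - 1]) / (n - k)  # mean after it
        stats.append(abs(pre - post) / np.sqrt(1.0 / k + 1.0 / (n - k)))
    return max(stats)

def monitor(streams, c1, c2):
    """Filter per-sensor statistics at c1, aggregate the survivors, test the sum at c2."""
    per_sensor = np.array([sensor_stat(x) for x in streams])
    transmitted = np.where(per_sensor > c1, per_sensor, 0.0)  # only large statistics are sent
    return transmitted.sum() > c2

rng = np.random.default_rng(1)
streams = rng.normal(size=(50, 200))        # 50 sensors, initially no change
streams[:5, 120:] += 1.0                    # mean shift in 5 sensors after t = 120
print(monitor(streams, c1=3.0, c2=20.0))    # True when the change is detected
```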

Parsimonious consensus hierarchies, partitions and fuzzy partitioning of a set of hierarchies
Pub Date: 2024-04-12 | DOI: 10.1007/s11222-024-10415-7
Ilaria Bombelli, Maurizio Vichi
Methodology is described for fitting a fuzzy partition and a parsimonious consensus hierarchy (ultrametric matrix) to a set of hierarchies of the same set of objects. The model defines a fuzzy partition of a set of hierarchical classifications, with every class of the partition synthesized by a parsimonious consensus hierarchy. Each consensus includes an optimal consensus hard partition of the objects and all the hierarchical agglomerative aggregations among the clusters of the consensus partition. The performance of the methodology is illustrated by an extended simulation study and applications to real data. A discussion of the new methodology is provided and some interesting future developments are described.
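For reference, a hierarchy on $n$ objects can be encoded by an ultrametric matrix $U = (u_{ij})$ of cophenetic dissimilarities satisfying (standard definition):

\[
u_{ii} = 0, \qquad u_{ij} = u_{ji} \ge 0, \qquad u_{ij} \le \max(u_{ik}, u_{kj}) \quad \text{for all } i, j, k,
\]

so each class of the fuzzy partition is summarised by a single such matrix fitted to the hierarchies assigned to that class.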

Reversed particle filtering for hidden Markov models
Pub Date: 2024-04-08 | DOI: 10.1007/s11222-024-10426-4
Frank Rotiroti, Stephen G. Walker
We present an approach to selecting the distributions in sampling-resampling that improves the efficiency of the weighted bootstrap. To complement the standard scheme of sampling from the prior and reweighting with the likelihood, we introduce a reversed scheme, which samples from the (normalized) likelihood and reweights with the prior. We begin with some motivating examples, before developing the relevant theory. We then apply the approach to the particle filtering of time series, including nonlinear and non-Gaussian Bayesian state-space models, a task that demands efficiency, given the repeated application of the weighted bootstrap. Through simulation studies on a normal dynamic linear model, Poisson hidden Markov model, and stochastic volatility model, we demonstrate the gains in efficiency obtained by the approach, involving the choice of the standard or reversed filter. In addition, for the stochastic volatility model, we provide three real-data examples, including a comparison with importance sampling methods, which attempt to incorporate information about the data indirectly into the standard filtering scheme, and an extension to multivariate models. We determine that the reversed filtering scheme offers an advantage over such auxiliary methods owing to its ability to incorporate information about the data directly into the sampling, an ability that further facilitates its performance in higher-dimensional settings.
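To illustrate the two schemes on a toy conjugate example (normal likelihood with known variance and a normal prior, where the exact posterior is available for comparison), a sketch of the standard weighted bootstrap, which samples from the prior and weights by the likelihood, against the reversed one, which samples from the normalized likelihood and weights by the prior. Names and settings here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: x_i ~ N(theta, 1), prior theta ~ N(prior_mu, prior_sd^2)
prior_mu, prior_sd = 0.0, 3.0
x = rng.normal(2.0, 1.0, size=20)
n, xbar = len(x), x.mean()

def resample(particles, log_w, size):
    """Weighted bootstrap: normalize log-weights and resample with replacement."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return rng.choice(particles, size=size, p=w)

m = 10_000

# Standard scheme: sample from the prior, reweight with the likelihood
theta_prior = rng.normal(prior_mu, prior_sd, size=m)
log_lik = -0.5 * n * (theta_prior - xbar) ** 2          # log-likelihood up to a constant in theta
standard = resample(theta_prior, log_lik, m)

# Reversed scheme: sample from the normalized likelihood N(xbar, 1/n), reweight with the prior
theta_lik = rng.normal(xbar, 1.0 / np.sqrt(n), size=m)
log_prior = -0.5 * ((theta_lik - prior_mu) / prior_sd) ** 2
reversed_scheme = resample(theta_lik, log_prior, m)

# Exact conjugate posterior mean for comparison
post_var = 1.0 / (1.0 / prior_sd**2 + n)
post_mean = post_var * (prior_mu / prior_sd**2 + n * xbar)
print(standard.mean(), reversed_scheme.mean(), post_mean)
```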

Correction to: Bayesian high-dimensional covariate selection in non-linear mixed-effects models using the SAEM algorithm
Pub Date: 2024-04-08 | DOI: 10.1007/s11222-024-10421-9
Marion Naveau, Guillaume Kon Kam King, Renaud Rincent, Laure Sansonnet, Maud Delattre