{"title":"Willem van Zwet’s contributions to the profession","authors":"N. Fisher, A. Smith","doi":"10.1214/21-aos2053","DOIUrl":"https://doi.org/10.1214/21-aos2053","url":null,"abstract":"","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85500523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inference for a two-stage enrichment design","authors":"Zhantao Lin, N. Flournoy, W. Rosenberger","doi":"10.1214/21-aos2051","DOIUrl":"https://doi.org/10.1214/21-aos2051","url":null,"abstract":"","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82676670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimax rates for conditional density estimation via empirical entropy","authors":"Blair Bilodeau, Dylan J. Foster, Daniel M. Roy","doi":"10.1214/23-AOS2270","DOIUrl":"https://doi.org/10.1214/23-AOS2270","url":null,"abstract":"We consider the task of estimating a conditional density using i.i.d. samples from a joint distribution, which is a fundamental problem with applications in both classification and uncertainty quantification for regression. For joint density estimation, minimax rates have been characterized for general density classes in terms of uniform (metric) entropy, a well-studied notion of statistical capacity. When applying these results to conditional density estimation, the use of uniform entropy -- which is infinite when the covariate space is unbounded and suffers from the curse of dimensionality -- can lead to suboptimal rates. Consequently, minimax rates for conditional density estimation cannot be characterized using these classical results. We resolve this problem for well-specified models, obtaining matching (within logarithmic factors) upper and lower bounds on the minimax Kullback--Leibler risk in terms of the empirical Hellinger entropy for the conditional density class. The use of empirical entropy allows us to appeal to concentration arguments based on local Rademacher complexity, which -- in contrast to uniform entropy -- leads to matching rates for large, potentially nonparametric classes and captures the correct dependence on the complexity of the covariate space. Our results require only that the conditional densities are bounded above, and do not require that they are bounded below or otherwise satisfy any tail conditions.","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74769228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asymptotic normality for eigenvalue statistics of a general sample covariance matrix when p/n→∞ and applications","authors":"Jiaxin Qiu, Zeng Li, Jianfeng Yao","doi":"10.1214/23-aos2300","DOIUrl":"https://doi.org/10.1214/23-aos2300","url":null,"abstract":"The asymptotic normality for a large family of eigenvalue statistics of a general sample covariance matrix is derived under the ultra-high dimensional setting, that is, when the dimension to sample size ratio $p/n to infty$. Based on this CLT result, we first adapt the covariance matrix test problem to the new ultra-high dimensional context. Then as a second application, we develop a new test for the separable covariance structure of a matrix-valued white noise. Simulation experiments are conducted for the investigation of finite-sample properties of the general asymptotic normality of eigenvalue statistics, as well as the second test for separable covariance structure of matrix-valued white noise.","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89544403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rerandomization with diminishing covariate imbalance and diverging number of covariates","authors":"Yuhao Wang, Xinran Li","doi":"10.1214/22-aos2235","DOIUrl":"https://doi.org/10.1214/22-aos2235","url":null,"abstract":"Completely randomized experiments have been the gold standard for drawing causal inference because they can balance all potential confounding on average. However, they may suffer from unbalanced covariates for realized treatment assignments. Rerandomization, a design that rerandomizes the treatment assignment until a prespecified covariate balance criterion is met, has recently got attention due to its easy implementation, improved covariate balance and more efficient inference. Researchers have then suggested to use the treatment assignments that minimize the covariate imbalance, namely the optimally balanced design. This has caused again the long-time controversy between two philosophies for designing experiments: randomization versus optimal and thus almost deterministic designs. Existing literature argued that rerandomization with overly balanced observed covariates can lead to highly imbalanced unobserved covariates, making it vulnerable to model misspecification. On the contrary, rerandomization with properly balanced covariates can provide robust inference for treatment effects while sacrific-ing some efficiency compared to the ideally optimal design. In this paper, we show it is possible that, by making the covariate imbalance diminishing at a proper rate as the sample size increases, rerandomization can achieve its ideally optimal precision that one can expect with perfectly balanced covariates, while still maintaining its robustness. We further investigate conditions on the number of covariates for achieving the desired optimality. Our results rely on a more delicate asymptotic analysis for rerandomization, allowing both diminishing covariate imbalance threshold (or equivalently the acceptance probability) and diverging number of covariates. The derived theory for rerandomization provides a deeper understanding of its large-sample property and can better guide its practical implementation. Furthermore, it also helps reconcile the controversy between randomized and optimal designs in an asymptotic sense.","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"170 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74143009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uniform consistency in nonparametric mixture models","authors":"Bryon Aragam, Ruiyi Yang","doi":"10.1214/22-aos2255","DOIUrl":"https://doi.org/10.1214/22-aos2255","url":null,"abstract":"We study uniform consistency in nonparametric mixture models as well as closely related mixture of regression (also known as mixed regression) models, where the regression functions are allowed to be nonparametric and the error distributions are assumed to be convolutions of a Gaussian density. We construct uniformly consistent estimators under general conditions while simultaneously highlighting several pain points in extending existing pointwise consistency results to uniform results. The resulting analysis turns out to be nontrivial, and several novel technical tools are developed along the way. In the case of mixed regression, we prove $L^1$ convergence of the regression functions while allowing for the component regression functions to intersect arbitrarily often, which presents additional technical challenges. We also consider generalizations to general (i.e. non-convolutional) nonparametric mixtures.","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75951770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conditional sequential Monte Carlo in high dimensions","authors":"Axel Finke, Alexandre Hoang Thiery","doi":"10.1214/22-aos2252","DOIUrl":"https://doi.org/10.1214/22-aos2252","url":null,"abstract":"The iterated conditional sequential Monte Carlo (i-CSMC) algorithm from Andrieu, Doucet and Holenstein (2010) is an MCMC approach for efficiently sampling from the joint posterior distribution of the $T$ latent states in challenging time-series models, e.g. in non-linear or non-Gaussian state-space models. It is also the main ingredient in particle Gibbs samplers which infer unknown model parameters alongside the latent states. In this work, we first prove that the i-CSMC algorithm suffers from a curse of dimension in the dimension of the states, $D$: it breaks down unless the number of samples (\"particles\"), $N$, proposed by the algorithm grows exponentially with $D$. Then, we present a novel\"local\"version of the algorithm which proposes particles using Gaussian random-walk moves that are suitably scaled with $D$. We prove that this iterated random-walk conditional sequential Monte Carlo (i-RW-CSMC) algorithm avoids the curse of dimension: for arbitrary $N$, its acceptance rates and expected squared jumping distance converge to non-trivial limits as $D to infty$. If $T = N = 1$, our proposed algorithm reduces to a Metropolis--Hastings or Barker's algorithm with Gaussian random-walk moves and we recover the well known scaling limits for such algorithms.","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84427709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On singular values of data matrices with general independent columns","authors":"T. Mei, Chen Wang, Jianfeng Yao","doi":"10.1214/23-aos2263","DOIUrl":"https://doi.org/10.1214/23-aos2263","url":null,"abstract":"In this paper, we analyse singular values of a large $ptimes n$ data matrix $mathbf{X}_n= (mathbf{x}_{n1},ldots,mathbf{x}_{nn})$ where the column $mathbf{x}_{nj}$'s are independent $p$-dimensional vectors, possibly with different distributions. Such data matrices are common in high-dimensional statistics. Under a key assumption that the covariance matrices $mathbf{Sigma}_{nj}=text{Cov}(mathbf{x}_{nj})$ can be asymptotically simultaneously diagonalizable, and appropriate convergence of their spectra, we establish a limiting distribution for the singular values of $mathbf{X}_n$ when both dimension $p$ and $n$ grow to infinity in a comparable magnitude. The matrix model goes beyond and includes many existing works on different types of sample covariance matrices, including the weighted sample covariance matrix, the Gram matrix model and the sample covariance matrix of linear times series models. Furthermore, we develop two applications of our general approach. First, we obtain the existence and uniqueness of a new limiting spectral distribution of realized covariance matrices for a multi-dimensional diffusion process with anisotropic time-varying co-volatility processes. Secondly, we derive the limiting spectral distribution for singular values of the data matrix for a recent matrix-valued auto-regressive model. Finally, for a generalized finite mixture model, the limiting spectral distribution for singular values of the data matrix is obtained.","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"125 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84921301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dispersal density estimation across scales","authors":"M. Hoffmann, Mathias Trabs","doi":"10.1214/23-aos2290","DOIUrl":"https://doi.org/10.1214/23-aos2290","url":null,"abstract":"We consider a space structured population model generated by two point clouds: a homogeneous Poisson process $M$ with intensity $ntoinfty$ as a model for a parent generation together with a Cox point process $N$ as offspring generation, with conditional intensity given by the convolution of $M$ with a scaled dispersal density $sigma^{-1}f(cdot/sigma)$. Based on a realisation of $M$ and $N$, we study the nonparametric estimation of $f$ and the estimation of the physical scale parameter $sigma>0$ simultaneously for all regimes $sigma=sigma_n$. We establish that the optimal rates of convergence do not depend monotonously on the scale and we construct minimax estimators accordingly whether $sigma$ is known or considered as a nuisance, in which case we can estimate it and achieve asymptotic minimaxity by plug-in. The statistical reconstruction exhibits a competition between a direct and a deconvolution problem. Our study reveals in particular the existence of a least favourable intermediate inference scale, a phenomenon that seems to be new.","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80997231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peskun–Tierney ordering for Markovian Monte Carlo: Beyond the reversible scenario","authors":"C. Andrieu, Samuel Livingstone","doi":"10.1214/20-aos2008","DOIUrl":"https://doi.org/10.1214/20-aos2008","url":null,"abstract":"Historically time-reversibility of the transitions or processes underpinning Markov chain Monte Carlo methods (MCMC) has played a key role in their development, while the self-adjointness of associated operators together with the use of classical functional analysis techniques on Hilbert spaces have led to powerful and practically successful tools to characterise and compare their performance. Similar results for algorithms relying on nonreversible Markov processes are scarce. We show that for a type of nonreversible Monte Carlo Markov chains and processes, of current or renewed interest in the physics and statistical literatures, it is possible to develop comparison results which closely mirror those available in the reversible scenario. We show that these results shed light on earlier literature, proving some conjectures and strengthening some earlier results.","PeriodicalId":22375,"journal":{"name":"The Annals of Statistics","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81283373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}