Xin Jin, Anirban Bhattacharya, Riddhi Pratim Ghosh
Journal of Multivariate Analysis, Volume 200, Article 105279. Published 2023-11-30.
DOI: 10.1016/j.jmva.2023.105279
High-dimensional Bernstein–von Mises theorem for the Diaconis–Ylvisaker prior
We study the asymptotic normality of the posterior distribution of the canonical parameter in the exponential family under the Diaconis–Ylvisaker prior, a conjugate prior, when the dimension of the parameter space increases with the sample size. We prove that, under mild conditions on the true parameter value θ0 and the hyperparameters of the prior, the expected total variation distance between the posterior distribution and a normal distribution centered at the maximum likelihood estimator, with variance equal to the inverse of the Fisher information matrix, goes to 0. The proof allows the dimension of the parameter space d to grow with the sample size n, requiring only d = o(n). En route, we derive a concentration inequality for a quadratic form of the maximum likelihood estimator without any specific assumption such as sub-Gaussianity. A specific illustration is provided for the Multinomial–Dirichlet model, with extensions to density estimation and normal mean estimation problems.
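The Bernstein–von Mises phenomenon the abstract describes can be illustrated numerically in its simplest conjugate instance, the Bernoulli model with a uniform Beta(1,1) prior (the d = 2 case of the Multinomial–Dirichlet model). This is a hedged sketch, not the paper's construction: it compares the exact Beta posterior with the normal distribution centered at the MLE whose variance is the inverse Fisher information, and shows the total variation distance shrinking as n grows.

```python
import numpy as np
from scipy import stats

def tv_distance_to_normal(n, s, grid_size=20000):
    """Approximate the total variation distance between the Beta posterior
    (s successes in n Bernoulli trials, uniform prior) and its normal
    BvM approximation N(mle, mle*(1-mle)/n)."""
    mle = s / n
    var = mle * (1 - mle) / n          # inverse Fisher information at the MLE
    post = stats.beta(1 + s, 1 + n - s)
    approx = stats.norm(mle, np.sqrt(var))
    grid = np.linspace(1e-6, 1 - 1e-6, grid_size)
    dx = grid[1] - grid[0]
    # TV distance = 0.5 * integral of the absolute density difference
    return 0.5 * np.sum(np.abs(post.pdf(grid) - approx.pdf(grid))) * dx

# With the success fraction held fixed, the TV distance shrinks as n grows,
# consistent with the theorem's conclusion.
small_n = tv_distance_to_normal(n=20, s=8)
large_n = tv_distance_to_normal(n=2000, s=800)
print(small_n, large_n)
```

The grid sizes and sample sizes here are arbitrary choices for illustration; the paper's result concerns the regime where the dimension d also grows with n, which this fixed-dimension example does not capture.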
About the journal:
Founded in 1971, the Journal of Multivariate Analysis (JMVA) is the central venue for the publication of new, relevant methodology and particularly innovative applications pertaining to the analysis and interpretation of multidimensional data.
The journal welcomes contributions to all aspects of multivariate data analysis and modeling, including cluster analysis, discriminant analysis, factor analysis, and multidimensional continuous or discrete distribution theory. Topics of current interest include, but are not limited to, inferential aspects of
Copula modeling
Functional data analysis
Graphical modeling
High-dimensional data analysis
Image analysis
Multivariate extreme-value theory
Sparse modeling
Spatial statistics.