{"title":"贝叶斯层次模型中最大后验估计器的路径跟踪方法:估计值如何取决于超参数","authors":"Zilai Si, Yucong Liu, Alexander Strang","doi":"10.1137/22m153330x","DOIUrl":null,"url":null,"abstract":"SIAM Journal on Optimization, Volume 34, Issue 3, Page 2201-2230, September 2024. <br/> Abstract. Maximum a posteriori (MAP) estimation, like all Bayesian methods, depends on prior assumptions. These assumptions are often chosen to promote specific features in the recovered estimate. The form of the chosen prior determines the shape of the posterior distribution, thus the behavior of the estimator and complexity of the associated optimization problem. Here, we consider a family of Gaussian hierarchical models with generalized gamma hyperpriors designed to promote sparsity in linear inverse problems. By varying the hyperparameters, we move continuously between priors that act as smoothed [math] penalties with flexible [math], smoothing, and scale. We then introduce a predictor-corrector method that tracks MAP solution paths as the hyperparameters vary. Path following allows a user to explore the space of possible MAP solutions and to test the sensitivity of solutions to changes in the prior assumptions. By tracing paths from a convex region to a nonconvex region, the user could find local minimizers in strongly sparsity promoting regimes that are consistent with a convex relaxation derived using related prior assumptions. We show experimentally that these solutions are less error prone than direct optimization of the nonconvex problem.","PeriodicalId":49529,"journal":{"name":"SIAM Journal on Optimization","volume":null,"pages":null},"PeriodicalIF":2.6000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Path-Following Methods for Maximum a Posteriori Estimators in Bayesian Hierarchical Models: How Estimates Depend on Hyperparameters\",\"authors\":\"Zilai Si, Yucong Liu, Alexander Strang\",\"doi\":\"10.1137/22m153330x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"SIAM Journal on Optimization, Volume 34, Issue 3, Page 2201-2230, September 2024. <br/> Abstract. Maximum a posteriori (MAP) estimation, like all Bayesian methods, depends on prior assumptions. These assumptions are often chosen to promote specific features in the recovered estimate. The form of the chosen prior determines the shape of the posterior distribution, thus the behavior of the estimator and complexity of the associated optimization problem. Here, we consider a family of Gaussian hierarchical models with generalized gamma hyperpriors designed to promote sparsity in linear inverse problems. By varying the hyperparameters, we move continuously between priors that act as smoothed [math] penalties with flexible [math], smoothing, and scale. We then introduce a predictor-corrector method that tracks MAP solution paths as the hyperparameters vary. Path following allows a user to explore the space of possible MAP solutions and to test the sensitivity of solutions to changes in the prior assumptions. By tracing paths from a convex region to a nonconvex region, the user could find local minimizers in strongly sparsity promoting regimes that are consistent with a convex relaxation derived using related prior assumptions. 
We show experimentally that these solutions are less error prone than direct optimization of the nonconvex problem.\",\"PeriodicalId\":49529,\"journal\":{\"name\":\"SIAM Journal on Optimization\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIAM Journal on Optimization\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1137/22m153330x\",\"RegionNum\":1,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM Journal on Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1137/22m153330x","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Path-Following Methods for Maximum a Posteriori Estimators in Bayesian Hierarchical Models: How Estimates Depend on Hyperparameters
SIAM Journal on Optimization, Volume 34, Issue 3, Pages 2201-2230, September 2024.

Abstract. Maximum a posteriori (MAP) estimation, like all Bayesian methods, depends on prior assumptions. These assumptions are often chosen to promote specific features in the recovered estimate. The form of the chosen prior determines the shape of the posterior distribution, and thus the behavior of the estimator and the complexity of the associated optimization problem. Here, we consider a family of Gaussian hierarchical models with generalized gamma hyperpriors designed to promote sparsity in linear inverse problems. By varying the hyperparameters, we move continuously between priors that act as smoothed $\ell_p$ penalties with flexible $p$, smoothing, and scale. We then introduce a predictor-corrector method that tracks MAP solution paths as the hyperparameters vary. Path following allows a user to explore the space of possible MAP solutions and to test the sensitivity of solutions to changes in the prior assumptions. By tracing paths from a convex region to a nonconvex region, the user can find local minimizers in strongly sparsity-promoting regimes that are consistent with a convex relaxation derived using related prior assumptions. We show experimentally that these solutions are less error prone than direct optimization of the nonconvex problem.
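To make the predictor-corrector idea concrete, here is a minimal sketch in Python/NumPy. It is not the authors' implementation: the hierarchical MAP objective is replaced by an illustrative smoothed $\ell_p$-penalized least-squares surrogate, $F(x; p) = \tfrac{1}{2}\|Ax - b\|^2 + \lambda \sum_i (x_i^2 + \varepsilon)^{p/2}$, and all names (`A`, `b`, `lam`, `eps`, `follow_path`) are assumptions for illustration. The predictor step uses the implicit-function-theorem tangent $dx/dp = -H^{-1}\,\partial_p \nabla_x F$, and the corrector applies a few Newton steps on $\nabla_x F = 0$, starting from the convex ridge problem at $p = 2$ and continuing toward the nonconvex regime $p < 1$.

```python
# Illustrative sketch only (assumed surrogate objective, not the paper's
# hierarchical model): predictor-corrector continuation in the exponent p for
#   F(x; p) = 0.5 * ||A x - b||^2 + lam * sum_i (x_i^2 + eps)^(p/2).
import numpy as np

def grad(x, A, b, lam, eps, p):
    # Gradient of F with respect to x.
    return A.T @ (A @ x - b) + lam * p * x * (x**2 + eps) ** (p / 2 - 1)

def hess(x, A, lam, eps, p):
    # Hessian of F; the separable penalty contributes a diagonal term.
    u = x**2 + eps
    d = lam * p * (u ** (p / 2 - 1) + (p - 2) * x**2 * u ** (p / 2 - 2))
    return A.T @ A + np.diag(d)

def dgrad_dp(x, lam, eps, p):
    # Mixed partial d(grad F)/dp, needed for the predictor (tangent) step.
    u = x**2 + eps
    return lam * x * u ** (p / 2 - 1) * (1 + (p / 2) * np.log(u))

def follow_path(A, b, lam=0.1, eps=1e-3, p_end=0.5, n_steps=60):
    n = A.shape[1]
    # p = 2 is a convex ridge problem with a closed-form global minimizer.
    x = np.linalg.solve(A.T @ A + 2 * lam * np.eye(n), A.T @ b)
    path = [(2.0, x.copy())]
    for p in np.linspace(2.0, p_end, n_steps + 1)[1:]:
        p_prev = path[-1][0]
        # Predictor: first-order continuation, dx/dp = -H^{-1} d(grad F)/dp.
        H = hess(x, A, lam, eps, p_prev)
        x = x + (p - p_prev) * np.linalg.solve(H, -dgrad_dp(x, lam, eps, p_prev))
        # Corrector: a few undamped Newton steps on grad F = 0 at the new p.
        # (A line search or trust region would be safer in the nonconvex regime.)
        for _ in range(5):
            x = x - np.linalg.solve(hess(x, A, lam, eps, p),
                                    grad(x, A, b, lam, eps, p))
        path.append((p, x.copy()))
    return path

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 80))
    x_true = np.zeros(80)
    x_true[:4] = [3.0, -2.0, 1.5, 2.5]
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    p_final, x_final = follow_path(A, b)[-1]
    print(f"p = {p_final:.2f}, entries above 0.1: {np.sum(np.abs(x_final) > 0.1)}")
```

Stepping in small increments of $p$ keeps each corrector problem close to the previous solution, which is how continuation from the convex regime can land on a consistent local minimizer of the nonconvex problem rather than an arbitrary one found by direct optimization.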
About the Journal:
The SIAM Journal on Optimization contains research articles on the theory and practice of optimization. The areas addressed include linear and quadratic programming, convex programming, nonlinear programming, complementarity problems, stochastic optimization, combinatorial optimization, integer programming, and convex, nonsmooth, and variational analysis. Contributions may emphasize optimization theory, algorithms, software, computational practice, applications, or the links between these subjects.