Learned Regularization for Inverse Problems: Insights from a Spectral Model
Martin Burger, Samira Kabri
arXiv - CS - Numerical Analysis, published 2023-12-15
DOI: https://doi.org/arxiv-2312.09845
Abstract
The aim of this paper is to provide a theoretically founded investigation of state-of-the-art learning approaches for inverse problems. We give an extended definition of regularization methods and their convergence in terms of the underlying data distributions, which paves the way for future theoretical studies. Based on a simple spectral learning model previously introduced for supervised learning, we investigate some key properties of different learning paradigms for inverse problems, which can be formulated independently of specific architectures. In particular, we investigate the regularization properties, bias, and critical dependence on training data distributions. Moreover, our framework allows us to highlight and compare the specific behavior of the different paradigms in the infinite-dimensional limit.
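As general background for the spectral viewpoint the abstract refers to (the paper's learned spectral model itself is not reproduced here), a minimal sketch of classical spectral regularization for a linear inverse problem: the forward operator is diagonalized by its SVD, and the naive inverse, which amplifies noise in directions with small singular values, is replaced by a filtered inverse with Tikhonov filter factors. All parameter values below (problem size, singular value decay, noise level, regularization weight) are illustrative assumptions.

```python
import numpy as np

# Illustrative ill-posed problem y = A x + noise, with A built to have
# rapidly decaying singular values (the hallmark of ill-posedness).
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # left singular vectors
V, _ = np.linalg.qr(rng.standard_normal((n, n)))   # right singular vectors
s = 1.0 / (np.arange(1, n + 1) ** 2)               # singular values sigma_k ~ k^-2
A = U @ np.diag(s) @ V.T

# Ground truth with decaying coefficients in the singular basis, plus noisy data.
x_true = V @ (1.0 / np.arange(1, n + 1))
y = A @ x_true + 1e-4 * rng.standard_normal(n)

def spectral_reconstruct(y, alpha):
    """Tikhonov-regularized inverse via spectral filter sigma/(sigma^2 + alpha)."""
    coeffs = U.T @ y                               # data in the left singular basis
    filt = s / (s ** 2 + alpha)                    # damped pseudo-inverse of sigma
    return V @ (filt * coeffs)

x_naive = V @ ((U.T @ y) / s)                      # unregularized: divides by tiny sigma_k
x_reg = spectral_reconstruct(y, alpha=1e-6)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

Learned regularization schemes of the kind the paper analyzes can be viewed as replacing the fixed filter function above with one adapted to the training data distribution, which is what makes the dependence on that distribution a central question.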