Bilevel Methods for Image Reconstruction

Caroline Crockett, J. Fessler
{"title":"Bilevel Methods for Image Reconstruction","authors":"Caroline Crockett, J. Fessler","doi":"10.1561/2000000111","DOIUrl":null,"url":null,"abstract":"This review discusses methods for learning parameters for image reconstruction problems using bilevel formulations. Image reconstruction typically involves optimizing a cost function to recover a vector of unknown variables that agrees with collected measurements and prior assumptions. State-of-the-art image reconstruction methods learn these prior assumptions from training data using various machine learning techniques, such as bilevel methods. One can view the bilevel problem as formalizing hyperparameter optimization, as bridging machine learning and cost function based optimization methods, or as a method to learn variables best suited to a specific task. More formally, bilevel problems attempt to minimize an upper-level loss function, where variables in the upper-level loss function are themselves minimizers of a lower-level cost function. This review contains a running example problem of learning tuning parameters and the coefficients for sparsifying filters used in a regularizer. Such filters generalize the popular total variation regularization method, and learned filters are closely related to convolutional neural networks approaches that are rapidly gaining in popularity. Here, the lower-level problem is to reconstruct an image using a regularizer with learned sparsifying filters; the corresponding upper-level optimization problem involves a measure of reconstructed image quality based on training data. This review discusses multiple perspectives to motivate the use of bilevel methods and to make them more easily accessible to different audiences. We then turn to ways to optimize the bilevel problem, providing pros and cons of the variety of proposed approaches. Finally we overview bilevel applications in image reconstruction. 1 ar X iv :2 10 9. 09 61 0v 1 [ m at h. O C ] 2 0 Se p 20 21","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"270 1","pages":"121-289"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Found. Trends Signal Process.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1561/2000000111","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 17

Abstract

This review discusses methods for learning parameters for image reconstruction problems using bilevel formulations. Image reconstruction typically involves optimizing a cost function to recover a vector of unknown variables that agrees with collected measurements and prior assumptions. State-of-the-art image reconstruction methods learn these prior assumptions from training data using various machine learning techniques, such as bilevel methods. One can view the bilevel problem as formalizing hyperparameter optimization, as bridging machine learning and cost function based optimization methods, or as a method to learn variables best suited to a specific task. More formally, bilevel problems attempt to minimize an upper-level loss function, where variables in the upper-level loss function are themselves minimizers of a lower-level cost function. This review contains a running example problem of learning tuning parameters and the coefficients for sparsifying filters used in a regularizer. Such filters generalize the popular total variation regularization method, and learned filters are closely related to convolutional neural network approaches that are rapidly gaining in popularity. Here, the lower-level problem is to reconstruct an image using a regularizer with learned sparsifying filters; the corresponding upper-level optimization problem involves a measure of reconstructed image quality based on training data. This review discusses multiple perspectives to motivate the use of bilevel methods and to make them more easily accessible to different audiences. We then turn to ways to optimize the bilevel problem, providing pros and cons of the variety of proposed approaches. Finally, we overview bilevel applications in image reconstruction.

arXiv:2109.09610v1 [math.OC] 20 Sep 2021
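A minimal sketch of the bilevel structure described above, in notation assumed here rather than taken from the paper: the upper-level loss scores a reconstruction against training data, and its argument is itself the minimizer of a lower-level reconstruction cost that uses the learned sparsifying filters.

\[
\hat{\gamma} \in \arg\min_{\gamma}\; \ell\big(\hat{x}(\gamma);\, x_{\mathrm{true}}\big)
\qquad \text{s.t.} \qquad
\hat{x}(\gamma) \in \arg\min_{x}\; \tfrac{1}{2}\,\|A x - y\|_2^2
\;+\; \sum_{k=1}^{K} \beta_k \sum_{j} \phi\big([c_k \ast x]_j\big),
\]

where $\gamma = (\beta_1,\ldots,\beta_K,\, c_1,\ldots,c_K)$ collects the tuning parameters and sparsifying filter coefficients to be learned, $A$ models the measurement system, $y$ is the measured data, $\phi$ is a sparsity-promoting penalty applied elementwise to the filtered image $c_k \ast x$, and a natural choice of upper-level loss is squared error against a training image, $\ell(\hat{x}; x_{\mathrm{true}}) = \tfrac{1}{2}\|\hat{x} - x_{\mathrm{true}}\|_2^2$, summed over training pairs. With a single finite-difference filter and $\phi(z) = |z|$, the regularizer reduces to (anisotropic) total variation, which is the sense in which learned filters generalize TV.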