Deep unfolding as iterative regularization for imaging inverse problems
Zhuoxu Cui, Qingyong Zhu, Jing Cheng, Bo Zhang, Dong Liang
Inverse Problems, published 2024-01-03. DOI: 10.1088/1361-6420/ad1a3c
Abstract
Deep unfolding methods have gained significant popularity in the field of inverse problems because they use iterative algorithms to drive the design of deep neural networks (DNNs). In contrast to general DNNs, unfolding methods offer improved interpretability and performance. However, their theoretical stability or regularity in solving inverse problems remains subject to certain limitations. To address this, we reexamine unfolded DNNs and observe that their algorithmically driven cascading structure closely resembles iterative regularization. Recognizing this, we propose a modified training approach and termination criteria for unfolded DNNs, thereby establishing the unfolding method as an iterative regularization technique. Specifically, our method jointly learns a convex penalty function, parameterized by an input-convex neural network (ICNN), that quantifies the distance to the real data manifold. We then train a DNN unfolded from the proximal gradient descent algorithm with this learned penalty, and we introduce a new termination criterion for the unfolded DNN. Under the assumption that the real data manifold intersects the solution set of the inverse problem at a unique real solution, we prove that the unfolded DNN converges stably to this solution even when the measurements are perturbed. Furthermore, using MRI reconstruction as an example, we demonstrate that the proposed method outperforms the original unfolding methods and traditional regularization methods in terms of reconstruction quality, stability, and convergence speed.
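To make the recipe in the abstract concrete, below is a minimal sketch (PyTorch assumed; the names ICNNPenalty and unfolded_pgd and all hyperparameters are illustrative, not taken from the paper). It shows an input-convex network used as a learned convex penalty, an unfolded proximal gradient loop in which the proximal step, which has no closed form for a learned penalty, is approximated by a few inner gradient steps, and a Morozov-style discrepancy rule standing in for the paper's termination criterion.

```python
# A minimal, self-contained sketch of the ingredients described in the abstract.
# Not the authors' implementation; PyTorch is assumed as the framework.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNNPenalty(nn.Module):
    """Input-convex network R(x): convex in x because the weights acting on the
    hidden state are clamped non-negative and the activations (softplus) are
    convex and non-decreasing."""

    def __init__(self, dim, hidden=128, depth=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(depth)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(depth - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # clamping keeps the z-path weights non-negative, preserving convexity
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0)).squeeze(-1)


def unfolded_pgd(A, y, penalty, x0, n_iter=20, step=1e-1, lam=1e-2,
                 noise_level=None, tau=1.05, inner_steps=5, inner_lr=1e-1):
    """Unfolded proximal gradient descent with a learned convex penalty.

    The proximal map of the learned penalty is approximated by a few inner
    gradient steps on the prox subproblem. Iterations stop early by a
    discrepancy-type rule ||Ax - y|| <= tau * delta when a noise level delta
    is supplied (an illustrative termination criterion, not the paper's).
    """
    x = x0.clone()
    for _ in range(n_iter):
        # gradient step on the data-fidelity term 0.5 * ||A x - y||^2
        v = x - step * (A.T @ (A @ x - y))
        # approximate prox_{lam * R}(v) by inner gradient descent
        z = v.clone().requires_grad_(True)
        for _ in range(inner_steps):
            obj = 0.5 * torch.sum((z - v) ** 2) + lam * penalty(z).sum()
            (g,) = torch.autograd.grad(obj, z)
            z = (z - inner_lr * g).detach().requires_grad_(True)
        x = z.detach()
        if noise_level is not None and torch.norm(A @ x - y) <= tau * noise_level:
            break  # early stopping: the residual has reached the noise level
    return x


# toy usage on a random underdetermined linear inverse problem
torch.manual_seed(0)
m, n = 64, 128
A = torch.randn(m, n) / m ** 0.5
x_true = torch.zeros(n)
x_true[:10] = 1.0
y = A @ x_true + 0.01 * torch.randn(m)
R = ICNNPenalty(n)  # in the paper the penalty is learned jointly; left untrained here
x_hat = unfolded_pgd(A, y, R, torch.zeros(n), noise_level=0.01 * m ** 0.5)
```

In the paper the convex penalty and the unfolded network are trained jointly on data; the penalty above is left untrained only so that the sketch stays self-contained and runnable.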
Journal description:
An interdisciplinary journal combining mathematical and experimental papers on inverse problems with theoretical, numerical and practical approaches to their solution.
As well as applied mathematicians, physical scientists and engineers, the readership includes those working in geophysics, radar, optics, biology, acoustics, communication theory, signal processing and imaging, among others.
The emphasis is on publishing original contributions to methods of solving mathematical, physical and applied problems. To be publishable in this journal, papers must meet the highest standards of scientific quality, contain significant and original new science and should present substantial advancement in the field. Due to the broad scope of the journal, we require that authors provide sufficient introductory material to appeal to the wide readership and that articles which are not explicitly applied include a discussion of possible applications.