Pub Date: 2024-11-19, eCollection Date: 2025-01-01. DOI: 10.1080/01630563.2024.2422064
Title: Continuous Generative Neural Networks: A Wavelet-Based Architecture in Function Spaces
Authors: Giovanni S Alberti, Matteo Santacesaria, Silvia Sciutto
Journal: Numerical Functional Analysis and Optimization, 46(1), 1-44. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11649217/pdf/
Abstract: In this work, we present and study Continuous Generative Neural Networks (CGNNs), namely, generative models in the continuous setting: the output of a CGNN belongs to an infinite-dimensional function space. The architecture is inspired by DCGAN, with one fully connected layer, several convolutional layers, and nonlinear activation functions. In the continuous L^2 setting, the dimensions of the spaces of each layer are replaced by the scales of a multiresolution analysis of a compactly supported wavelet. We present conditions on the convolutional filters and on the nonlinearity that guarantee that a CGNN is injective. This theory finds applications to inverse problems and allows for deriving Lipschitz stability estimates for (possibly nonlinear) infinite-dimensional inverse problems with unknowns belonging to the manifold generated by a CGNN. Several numerical simulations, including signal deblurring, illustrate and validate this approach.
Pub Date: 2024-08-11, eCollection Date: 2024-01-01. DOI: 10.1080/01630563.2024.2384849
Title: Iteratively Refined Image Reconstruction with Learned Attentive Regularizers
Authors: Mehrsa Pourya, Sebastian Neumayer, Michael Unser
Journal: Numerical Functional Analysis and Optimization, 45(7-9), 411-440. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371266/pdf/
Abstract: We propose a regularization scheme for image reconstruction that leverages the power of deep learning while hinging on classic sparsity-promoting models. Many deep-learning-based models are hard to interpret and cumbersome to analyze theoretically. In contrast, our scheme is interpretable because it corresponds to the minimization of a series of convex problems. For each problem in the series, a mask is generated based on the previous solution to refine the regularization strength spatially. In this way, the model becomes progressively attentive to the image structure. For the underlying update operator, we prove the existence of a fixed point. As a special case, we investigate a mask generator for which the fixed-point iterations converge to a critical point of an explicit energy functional. In our experiments, we match the performance of state-of-the-art learned variational models for the solution of inverse problems. Additionally, we offer a promising balance between interpretability, theoretical guarantees, reliability, and performance.
Pub Date: 2024-08-11. DOI: 10.1080/01630563.2024.2384869
Title: On the Type of Ill-Posedness of Generalized Hilbert Matrices and Related Operators
Authors: Stefan Kindermann
Journal: Numerical Functional Analysis and Optimization
Abstract: We consider infinite-dimensional generalized Hilbert matrices of the form H_{i,j} = d_i d_j / (x_i + x_j), where the d_i are nonnegative weights and the x_i are pairwise distinct positive numbers. We state sufficient and, fo...
Pub Date: 2024-05-29. DOI: 10.1080/01630563.2024.2349006
Title: On the Bregman-proximal iterative algorithm for the monotone inclusion problem in Banach spaces
Authors: Yan Tang, Shiqing Zhang, Yeol Je Cho
Journal: Numerical Functional Analysis and Optimization
Abstract: In this paper, we focus on the solution of a class of monotone inclusion problems in reflexive Banach spaces. To reflect the geometry of the space and the operator, a more general proximal point it...
Pub Date: 2024-04-02. DOI: 10.1080/01630563.2024.2333255
Title: A New Dai-Liao Conjugate Gradient Method based on Approximately Optimal Stepsize for Unconstrained Optimization
Authors: Yan Ni, Liu Zexian
Journal: Numerical Functional Analysis and Optimization
Abstract: Conjugate gradient methods are a class of very effective iterative methods for large-scale unconstrained optimization. In this paper, a new Dai-Liao conjugate gradient method for solving large-scal...
Pub Date: 2024-04-02. DOI: 10.1080/01630563.2024.2333251
Title: On Differential Inclusions Arising from Some Discontinuous Systems
Authors: A. V. Fominyh
Journal: Numerical Functional Analysis and Optimization
Abstract: The paper deals with systems of ordinary differential equations whose right-hand sides contain controls that are discontinuous in the phase variables. These controls cause the occurrence of sliding...
Pub Date: 2024-04-02. DOI: 10.1080/01630563.2024.2333250
Title: Motzkin Sequence Spaces and Motzkin Core
Authors: Sezer Erdem, Serkan Demiriz, Adem Şahin
Journal: Numerical Functional Analysis and Optimization
Abstract: In the current work, the Motzkin matrix M = (m_rs), obtained by using Motzkin numbers, is constructed, and the sequence spaces c(M) and c_0(M) described as the domain of the Motzkin matrix M...
Pub Date: 2024-03-04. DOI: 10.1080/01630563.2024.2320663
Title: Domain Generalization by Functional Regression
Authors: Markus Holzleitner, Sergei V. Pereverzyev, Werner Zellinger
Journal: Numerical Functional Analysis and Optimization
Abstract: The problem of domain generalization is to learn, given data from different source distributions, a model that can be expected to generalize well on new target distributions which are only seen thr...
Pub Date: 2024-02-28. DOI: 10.1080/01630563.2024.2318597
Title: Rates of Convergence and Metastability for Chidume's Algorithm for the Approximation of Zeros of Accretive Operators in Banach Spaces
Authors: Richard Findling, Ulrich Kohlenbach
Journal: Numerical Functional Analysis and Optimization
Abstract: In this paper we give a quantitative analysis of an explicit iteration method due to C.E. Chidume for the approximation of a zero of an m-accretive operator A: X -> 2^X in Banach spaces which does not i...
Pub Date: 2024-02-23. DOI: 10.1080/01630563.2024.2318572
Title: Regularized Nyström Subsampling in Covariate Shift Domain Adaptation Problems
Authors: Hanna L. Myleiko, Sergei G. Solodky
Journal: Numerical Functional Analysis and Optimization
Abstract: The unsupervised domain adaptation problem with covariate shift assumption is considered. Within the framework of the Reproducing Kernel Hilbert Space concept, an algorithm is constructed that is a...