Quantitative Stability of the Pushforward Operation by an Optimal Transport Map
Guillaume Carlier, Alex Delalande, Quentin Mérigot
Pub Date: 2024-07-19 | DOI: 10.1007/s10208-024-09669-4
We study the quantitative stability of the mapping that, to a measure, associates its pushforward measure by a fixed (non-smooth) optimal transport map. We exhibit a tight Hölder behavior for this operation under minimal assumptions. Our proof essentially relies on a new bound that quantifies the size of the singular sets of a convex and Lipschitz continuous function on a bounded domain.
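The pushforward operation itself is easy to experiment with in one dimension, where monotone maps are optimal transport maps. The sketch below is a toy setup: the kinked map `T` and all parameters are invented for illustration, and it checks only the elementary Lipschitz upper bound $W_1(T_\#\rho, T_\#\sigma) \le W_1(\rho, \sigma)$ for a 1-Lipschitz map; the tight Hölder behavior established in the paper is a much finer statement.

```python
import numpy as np

def w1_1d(x, y):
    # Wasserstein-1 distance between two equal-size empirical measures on R
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

# A fixed, monotone (hence optimal) but non-smooth map with a kink at 0.5;
# this map is made up for illustration and is not taken from the paper.
def T(s):
    return np.maximum(s - 0.5, 0.0)

rng = np.random.default_rng(0)
rho = rng.uniform(0.0, 1.0, 2000)              # base measure rho
sigma = rho + 0.01 * rng.normal(size=2000)     # small perturbation of rho

d_in = w1_1d(rho, sigma)           # distance between the input measures
d_out = w1_1d(T(rho), T(sigma))    # distance between their pushforwards
print(d_in, d_out)                 # T is 1-Lipschitz, so d_out <= d_in
```

Because `T` is monotone, sorting commutes with it, which is what makes the one-dimensional check this short.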
Koszul Complexes and Relative Homological Algebra of Functors Over Posets
Wojciech Chachólski, Andrea Guidolin, Isaac Ren, Martina Scolamiero, Francesca Tombari
Pub Date: 2024-06-18 | DOI: 10.1007/s10208-024-09660-z
Under certain conditions, Koszul complexes can be used to calculate relative Betti diagrams of vector space-valued functors indexed by a poset, without the explicit computation of global minimal relative resolutions. In relative homological algebra of such functors, free functors are replaced by an arbitrary family of functors. Relative Betti diagrams encode the multiplicities of these functors in minimal relative resolutions. In this article we provide conditions under which grading the chosen family of functors leads to explicit Koszul complexes whose homology dimensions are the relative Betti diagrams, thus giving a scheme for the computation of these numerical descriptors.
A Local Nearly Linearly Convergent First-Order Method for Nonsmooth Functions with Quadratic Growth
Damek Davis, Liwei Jiang
Pub Date: 2024-06-14 | DOI: 10.1007/s10208-024-09653-y
Classical results show that gradient descent converges linearly to minimizers of smooth strongly convex functions. A natural question is whether there exists a locally nearly linearly convergent method for nonsmooth functions with quadratic growth. This work designs such a method for a wide class of nonsmooth and nonconvex locally Lipschitz functions, including max-of-smooth, Shapiro's decomposable class, and generic semialgebraic functions. The algorithm is parameter-free and derives from Goldstein's conceptual subgradient method.
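The authors' algorithm is not reproduced here. As a hedged point of comparison, the sketch below runs a classical Polyak-step subgradient method (which, unlike the paper's parameter-free method, needs the optimal value $f^*$) on an invented max-of-smooth toy objective; the optimality gap shrinks, but its tail decay is only sublinear, which is the regime that a nearly linearly convergent method improves on.

```python
import numpy as np

# Toy max-of-smooth objective f(x) = max((x1-1)^2 + x2^2, (x1+1)^2 + x2^2),
# minimized at the nonsmooth point x* = (0, 0) with value f* = 1.
def f(x):
    return max((x[0] - 1.0)**2 + x[1]**2, (x[0] + 1.0)**2 + x[1]**2)

def subgrad(x):
    # the gradient of the active (maximal) piece is a valid subgradient
    if (x[0] - 1.0)**2 >= (x[0] + 1.0)**2:
        return np.array([2.0 * (x[0] - 1.0), 2.0 * x[1]])
    return np.array([2.0 * (x[0] + 1.0), 2.0 * x[1]])

fstar = 1.0
x = np.array([3.0, -2.0])
for _ in range(3000):
    g = subgrad(x)
    x = x - (f(x) - fstar) / (g @ g) * g   # Polyak step (requires f*)

print(f(x) - fstar)  # small, but the tail decay is only sublinear here
```

The slow tail arises because the subgradient norm stays bounded away from zero near the kink while the gap vanishes, so the Polyak step length collapses.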
Convergent Regularization in Inverse Problems and Linear Plug-and-Play Denoisers
Andreas Hauptmann, Subhadip Mukherjee, Carola-Bibiane Schönlieb, Ferdia Sherry
Pub Date: 2024-06-03 | DOI: 10.1007/s10208-024-09654-x
Regularization is necessary when solving inverse problems to ensure the well-posedness of the solution map. Additionally, it is desirable that the chosen regularization strategy is convergent, in the sense that the solution map converges to a solution of the noise-free operator equation. This provides an important guarantee that stable solutions can be computed for all noise levels and that solutions satisfy the operator equation in the limit of vanishing noise. In recent years, reconstructions in inverse problems have increasingly been approached from a data-driven perspective. Despite empirical success, the majority of data-driven approaches do not provide a convergent regularization strategy. One popular example is iterative plug-and-play (PnP) denoising using off-the-shelf image denoisers. These usually provide only convergence of the PnP iterates to a fixed point, under suitable regularity assumptions on the denoiser, rather than convergence of the method as a regularization technique, that is, under vanishing noise and regularization strength. This paper serves two purposes: first, we provide an overview of the classical regularization theory in inverse problems and survey a few notable recent data-driven methods that are provably convergent regularization schemes. We then discuss PnP algorithms and their established convergence guarantees. Subsequently, we consider PnP algorithms with learned linear denoisers and propose a novel spectral filtering technique of the denoiser to control the strength of regularization. Further, by relating the implicit regularization of the denoiser to an explicit regularization functional, we are the first to rigorously show that PnP with a learned linear denoiser leads to a convergent regularization scheme. The theoretical analysis is corroborated by numerical experiments for the classical inverse problem of tomographic image reconstruction.
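With a linear denoiser, the PnP iteration is an affine fixed-point map whose convergence to a fixed point is easy to observe numerically. The sketch below is an illustration only: the denoiser `W` is fabricated (symmetric, with spectrum in [0.05, 0.95], which makes the composed map a contraction) rather than learned, and the paper's spectral filtering technique is not implemented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 12
A = rng.normal(size=(m, n)) / np.sqrt(m)   # toy forward operator
y = A @ rng.normal(size=n)                 # synthetic (noise-free) data

# A *linear* denoiser is just a matrix W; here we fabricate a symmetric
# one with spectrum in [0.05, 0.95] instead of learning it from data.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
W = Q @ np.diag(np.linspace(0.95, 0.05, n)) @ Q.T

tau = 0.5 / np.linalg.norm(A, 2) ** 2      # step size for the data term
x = np.zeros(n)
for _ in range(500):
    x = W @ (x - tau * A.T @ (A @ x - y))  # plug-and-play iteration

# residual of the fixed-point equation x = W(x - tau * A^T (A x - y))
res = np.linalg.norm(x - W @ (x - tau * A.T @ (A @ x - y)))
print(res)  # near zero: the iterates have reached a fixed point
```

This is exactly the "convergence of the iterates to a fixed point" guarantee discussed in the abstract; convergence as a regularization method is the stronger property the paper establishes.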
Identifiability, the KL Property in Metric Spaces, and Subgradient Curves
A. S. Lewis, Tonghua Tian
Pub Date: 2024-05-28 | DOI: 10.1007/s10208-024-09652-z
Identifiability, and the closely related idea of partial smoothness, unify classical active set methods and more general notions of solution structure. Diverse optimization algorithms generate iterates in discrete time that are eventually confined to identifiable sets. We present two fresh perspectives on identifiability. The first distills the notion to a simple metric property, applicable not just in Euclidean settings but to optimization over manifolds and beyond; the second reveals analogous continuous-time behavior for subgradient descent curves. The Kurdyka–Łojasiewicz property typically governs convergence in both discrete and continuous time: we explore its interplay with identifiability.
Optimal Approximation of Unique Continuation
Erik Burman, Mihai Nechita, Lauri Oksanen
Pub Date: 2024-05-20 | DOI: 10.1007/s10208-024-09655-w
We consider numerical approximations of ill-posed elliptic problems with conditional stability. The notion of optimal error estimates is defined, accounting for both convergence with respect to discretisation and perturbations in data. The rate of convergence is determined by the conditional stability of the underlying continuous problem and the polynomial order of the approximation space. A proof is given that no approximation can converge at a better rate than that given by the definition without increasing the sensitivity to perturbations, thus justifying the concept. A recently introduced class of primal-dual finite element methods with weakly consistent regularisation is recalled and the associated error estimates are shown to be optimal in the sense of this definition.
Group-Invariant Max Filtering
Jameson Cahill, Joseph W. Iverson, Dustin G. Mixon, Daniel Packer
Pub Date: 2024-05-17 | DOI: 10.1007/s10208-024-09656-9
Given a real inner product space $V$ and a group $G$ of linear isometries, we construct a family of $G$-invariant real-valued functions on $V$ that we call max filters. In the case where $V=\mathbb{R}^d$ and $G$ is finite, a suitable max filter bank separates orbits, and is even bilipschitz in the quotient metric. In the case where $V=L^2(\mathbb{R}^d)$ and $G$ is the group of translation operators, a max filter exhibits stability to diffeomorphic distortion like that of the scattering transform introduced by Mallat. We establish that max filters are well suited for various classification tasks, both in theory and in practice.
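A minimal sketch of the max filter construction for one concrete finite group, the cyclic shifts acting on $\mathbb{R}^d$ (the group, dimension, and random templates are chosen here for illustration): each feature $\langle\langle y, x \rangle\rangle = \max_{g \in G} \langle g \cdot y, x \rangle$ is invariant under the group action on $x$.

```python
import numpy as np

def max_filter(template, x):
    # <<template, x>> = max over g in G of <g . template, x>, where G is
    # taken here to be the group of cyclic shifts acting on R^d.
    return max(np.dot(np.roll(template, s), x) for s in range(len(x)))

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=d)
templates = [rng.normal(size=d) for _ in range(4)]

feats = [max_filter(t, x) for t in templates]                     # features of x
feats_shifted = [max_filter(t, np.roll(x, 3)) for t in templates]
print(np.allclose(feats, feats_shifted))  # True: max filters are G-invariant
```

Invariance holds because shifting $x$ merely permutes the set of inner products being maximized; the separation and bilipschitz properties in the abstract are the deeper results.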
A Sheaf-Theoretic Construction of Shape Space
Shreya Arya, Justin Curry, Sayan Mukherjee
Pub Date: 2024-05-16 | DOI: 10.1007/s10208-024-09650-1
We present a sheaf-theoretic construction of shape space—the space of all shapes. We do this by describing a homotopy sheaf on the poset category of constructible sets, where each set is mapped to its Persistent Homology Transform (PHT). Recent results that build on fundamental work of Schapira have shown that this transform is injective, thus making the PHT a good summary object for each shape. Our homotopy sheaf result allows us to "glue" PHTs of different shapes together to build up the PHT of a larger shape. In the case where our shape is a polyhedron we prove a generalized nerve lemma for the PHT. Finally, by re-examining the sampling result of Smale-Niyogi-Weinberger, we show that we can reliably approximate the PHT of a manifold by a polyhedron up to arbitrary precision.
Discrete Weber Inequalities and Related Maxwell Compactness for Hybrid Spaces over Polyhedral Partitions of Domains with General Topology
Simon Lemaire, Silvano Pitassi
Pub Date: 2024-04-16 | DOI: 10.1007/s10208-024-09648-9
We prove discrete versions of the first and second Weber inequalities on $\boldsymbol{H}(\mathbf{curl}) \cap \boldsymbol{H}(\mathrm{div}_{\eta})$-like hybrid spaces spanned by polynomials attached to the faces and to the cells of a polyhedral mesh. The proven hybrid Weber inequalities are optimal in the sense that (i) they are formulated in terms of $\boldsymbol{H}(\mathbf{curl})$- and $\boldsymbol{H}(\mathrm{div}_{\eta})$-like hybrid semi-norms designed so as to embed optimally (polynomially) consistent face penalty terms, and (ii) they are valid for face polynomials in the smallest possible stability-compatible spaces. Our results are valid on domains with general, possibly non-trivial topology. In a second part we also prove, within a general topological setting, related discrete Maxwell compactness properties.
Sum-of-Squares Relaxations for Information Theory and Variational Inference
Pub Date: 2024-04-05 | DOI: 10.1007/s10208-024-09651-0
We consider extensions of the Shannon relative entropy, referred to as $f$-divergences. Three classical related computational problems are typically associated with these divergences: (a) estimation from moments, (b) computing normalizing integrals, and (c) variational inference in probabilistic models. These problems are related to one another through convex duality, and all of them have many applications throughout data science; we aim for computationally tractable approximation algorithms that preserve properties of the original problem, such as potential convexity or monotonicity. To achieve this, we derive a sequence of convex relaxations for computing these divergences from non-centered covariance matrices associated with a given feature vector: starting from the typically non-tractable optimal lower bound, we consider an additional relaxation based on "sums-of-squares", which is now computable in polynomial time as a semidefinite program. We also provide computationally more efficient relaxations based on spectral information divergences from quantum information theory. For all of the tasks above, beyond proposing new relaxations, we derive tractable convex optimization algorithms, and we present illustrations on multivariate trigonometric polynomials and functions on the Boolean hypercube.
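The semidefinite relaxations themselves require an SDP solver and are not sketched here, but the objects being approximated are plain $f$-divergences, which for discrete distributions reduce to the one-line formula $D_f(p \Vert q) = \sum_i q_i\, f(p_i/q_i)$. A minimal sketch (the distributions and choices of $f$ are arbitrary examples; it assumes $q$ has full support):

```python
import numpy as np

def f_divergence(p, q, f):
    # D_f(p || q) = sum_i q_i * f(p_i / q_i) for discrete distributions;
    # assumes q_i > 0 for all i
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

f_kl = lambda t: t * np.log(t)       # recovers the Shannon relative entropy
f_chi2 = lambda t: (t - 1.0) ** 2    # recovers the chi-squared divergence

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(f_divergence(p, q, f_kl))    # > 0 for p != q
print(f_divergence(p, p, f_kl))    # 0.0: every f-divergence vanishes at p = q
```

Convexity of $f$ with $f(1)=0$ is what makes $D_f$ nonnegative and zero exactly when $p=q$ (for strictly convex $f$), the structure the paper's relaxations aim to preserve.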