Pub Date: 2025-10-01 (Epub 2025-05-21), DOI: 10.1016/j.acha.2025.101776
An eigenfunction approach to conversion of the Laplace transform of point masses on the real line to the Fourier domain
Michael E. Mckenna, Hrushikesh N. Mhaskar, Richard G. Spencer
Applied and Computational Harmonic Analysis, vol. 79, Article 101776
Motivated by applications in magnetic resonance relaxometry, we consider the following problem: given samples of a function t ↦ ∑_{k=1}^K A_k exp(−t λ_k), where K ≥ 2 is an integer, A_k ∈ ℝ, and λ_k > 0 for k = 1, …, K, determine K, the A_k's, and the λ_k's. Unlike the case in which the λ_k's are purely imaginary, this problem is notoriously ill-posed. Our goal is to show that this problem can be transformed into an equivalent one in which the λ_k's are replaced by iλ_k. We show that this may be accomplished by approximation in terms of Hermite functions, using the fact that these functions are eigenfunctions of the Fourier transform. We present a preliminary numerical exploration of parameter extraction from this formalism, including the effect of noise. The inherent ill-posedness of the original problem persists in the new domain, as reflected in the numerical results.
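The Hermite eigenfunction fact at the heart of this transformation is easy to check numerically. The sketch below (an illustration of that fact only, not the authors' algorithm; `hermite_fn` is our own helper name) verifies that the L²-normalized Hermite function ψ_n satisfies F[ψ_n] = (−i)^n ψ_n under the unitary Fourier transform, using a plain quadrature.

```python
import numpy as np
from math import factorial, pi, sqrt
from scipy.special import eval_hermite

def hermite_fn(n, x):
    # L2-normalized Hermite function psi_n (physicists' convention)
    return eval_hermite(n, x) * np.exp(-x**2 / 2) / sqrt(2.0**n * factorial(n) * sqrt(pi))

n = 3
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = hermite_fn(n, x)
omega = np.linspace(-3, 3, 7)
# unitary Fourier transform F[f](w) = (2 pi)^(-1/2) * integral of f(x) e^{-iwx} dx,
# computed by a Riemann sum (psi decays so fast that this is essentially exact)
F = np.array([(psi * np.exp(-1j * w * x)).sum() * dx for w in omega]) / sqrt(2 * pi)
# eigenfunction relation: F[psi_n] = (-i)^n psi_n
err = np.max(np.abs(F - (-1j)**n * hermite_fn(n, omega)))
print(err)
```

Because the integrand is smooth and rapidly decaying, the Riemann sum on this grid agrees with the exact transform to near machine precision.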
Pub Date: 2025-10-01 (Epub 2025-07-16), DOI: 10.1016/j.acha.2025.101796
Multi-dimensional unlimited sampling and robust reconstruction
Dorian Florescu, Ayush Bhandari
Applied and Computational Harmonic Analysis, vol. 79, Article 101796
In this paper we introduce a new sampling and reconstruction approach for multi-dimensional analog signals. Building on the Unlimited Sensing Framework (USF), we present a new folded sampling operator, the multi-dimensional modulo-hysteresis, that is backward compatible with the existing one-dimensional modulo operator. Unlike previous approaches, the proposed model is specifically tailored to multi-dimensional signals. In particular, the model uses a certain redundancy in dimensions 2 and above, which is exploited for robust input recovery. We prove that the new operator is well defined and that its outputs have a bounded dynamic range. For the noiseless case, we derive a theoretically guaranteed input reconstruction approach. When the input is corrupted by Gaussian noise, we exploit the redundancy in higher dimensions to bound the error probability and show that it drops to 0 for sufficiently high sampling rates, leading to new theoretical guarantees for the noisy case. Our numerical examples corroborate the theoretical results and show that the proposed approach can handle significantly more noise than USF.
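For readers new to unlimited sensing, the one-dimensional modulo operator that the new scheme stays compatible with can be demonstrated in a few lines. This is a minimal sketch of the standard 1D USF idea (recovery from first-order differences), not the paper's modulo-hysteresis operator; the signal, threshold λ, and the assumption that the anchor value g[0] is known (folding loses a constant offset) are all our own choices.

```python
import numpy as np

def centered_modulo(t, lam):
    # fold values into [-lam, lam)
    return np.mod(t + lam, 2 * lam) - lam

lam = 1.0
t = np.linspace(0, 1, 400)
g = 4 * np.sin(2 * np.pi * t) + 3 * t          # signal far exceeding [-lam, lam)
y = centered_modulo(g, lam)                    # folded (modulo) samples
# If sampling is dense enough that |g[k+1] - g[k]| < lam, then
# diff(g) = centered_modulo(diff(y)) exactly, and g is recovered by summation
dg = centered_modulo(np.diff(y), lam)
g_hat = g[0] + np.concatenate(([0.0], np.cumsum(dg)))  # anchor g[0] assumed known
err = np.max(np.abs(g_hat - g))
print(err)
```

The dense-sampling condition |Δg| < λ is what the cumulative sum relies on; the paper's multi-dimensional construction adds redundancy precisely so that recovery remains robust when noise breaks this kind of exact unwrapping.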
Pub Date: 2025-10-01 (Epub 2025-07-09), DOI: 10.1016/j.acha.2025.101794
On exact systems {t^α·e^{2πint}}_{n∈ℤ∖A} in L²(0,1) which are weighted lower semi frames but not Schauder bases, and their generalizations
Elias Zikkos
Applied and Computational Harmonic Analysis, vol. 79, Article 101794
Let {e^{iλ_n t}}_{n∈ℤ} be an exponential Schauder basis for L²(0,1), where λ_n ∈ ℝ, and let {r_n(t)}_{n∈ℤ} be its dual Schauder basis. Let A be a non-empty subset of the integers containing exactly M elements. We prove that for α > 0 the weighted system {t^α · r_n(t)}_{n∈ℤ∖A} is exact in the space L²(0,1), that is, complete and minimal in L²(0,1), if and only if α ∈ [M − 1/2, M + 1/2). We also show that such a system is not a Riesz basis for L²(0,1). In particular, the weighted trigonometric system {t^α · e^{2πint}}_{n∈ℤ∖A} is exact in L²(0,1) if and only if α ∈ [M − 1/2, M + 1/2), but this system is not even a Schauder basis for L²(0,1). This extends a result of Heil and Yoon (2012), who considered the analogous problem when α is a positive integer. Combining the non-basis property of {t^α · e^{2πint}}_{n∈ℤ∖A} with a result of Heil et al. (2023), we obtain that for any α ≥ 1/2 the overcomplete system {t^α · e^{2πint}}_{n∈ℤ} admits no reproducing partner for L²(0,1). Nevertheless, this overcomplete system is a weighted lower semi frame for L²(0,1). This follows from a recent result of ours showing that any exact system in a Hilbert space H is a weighted lower semi frame for H; for completeness, we reprove that result here. The invertibility of Vandermonde matrices plays a crucial role in establishing the exactness and non-basis properties of the above systems.
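The Vandermonde ingredient mentioned in this abstract rests on the classical fact that a Vandermonde matrix with distinct nodes is invertible, since its determinant is ∏_{i<j}(x_j − x_i) ≠ 0. A quick numerical check of that formula (nodes chosen arbitrarily for illustration):

```python
import numpy as np
from itertools import combinations

x = np.array([0.5, 1.0, 2.0, 3.5])     # distinct nodes
V = np.vander(x, increasing=True)       # V[i, j] = x_i ** j
# Vandermonde determinant: product of pairwise differences x_j - x_i for i < j
det_formula = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
print(np.linalg.det(V), det_formula)
```

Since every factor x_j − x_i is nonzero for distinct nodes, the determinant is nonzero and V is invertible.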
Pub Date: 2025-10-01 (Epub 2025-08-14), DOI: 10.1016/j.acha.2025.101800
Large data limit of the MBO scheme for data clustering: Γ-convergence of the thresholding energies
Tim Laux, Jona Lelmi
Applied and Computational Harmonic Analysis, vol. 79, Article 101800
In this work we present the first rigorous analysis of the MBO scheme for data clustering in the large data limit. Each iteration of the scheme corresponds to one step of implicit gradient descent for the thresholding energy on the similarity graph of some dataset. For a subset of the nodes of the graph, the thresholding energy at time h measures the amount of heat transferred from the subset to its complement at time h, rescaled by a factor √h. It is then natural to think that outcomes of the MBO scheme are (local) minimizers of this energy. We prove that the algorithm is consistent, in the sense that these (local) minimizers converge to (local) minimizers of a suitably weighted optimal partition problem.
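The diffuse-then-threshold iteration described above can be sketched on a toy graph. This is a minimal illustration, assuming an implicit heat step (I + hL)⁻¹ as a stand-in for the exact heat semigroup e^{−hL}, a binary 1/2-threshold, and a hand-built two-cluster graph; the weights, diffusion time h, and initialization are all our own choices, not the paper's.

```python
import numpy as np

# Toy similarity graph: two triangles joined by one weak edge
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[0, 3] = W[3, 0] = 0.01                        # weak inter-cluster link
L = np.diag(W.sum(axis=1)) - W                  # unnormalized graph Laplacian
h = 2.0                                         # diffusion time
u = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])    # rough initial labeling
for _ in range(5):
    v = np.linalg.solve(np.eye(6) + h * L, u)   # implicit heat step
    u = (v >= 0.5).astype(float)                # thresholding
print(u)
```

Starting from a partial labeling of the first triangle, the iteration settles on the indicator of the whole cluster, the kind of (local) minimizer the Γ-convergence result is about.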
Pub Date: 2025-10-01 (Epub 2025-06-11), DOI: 10.1016/j.acha.2025.101786
New results on sparse representations in unions of orthonormal bases
Tao Zhang, Gennian Ge
Applied and Computational Harmonic Analysis, vol. 79, Article 101786
The problem of sparse representation has significant applications in signal processing. The spark of a dictionary plays a crucial role in the study of sparse representation. Donoho and Elad initially explored the spark, and they provided a general lower bound. When the dictionary is a union of several orthonormal bases, Gribonval and Nielsen presented an improved lower bound for the spark. In this paper, we introduce a new construction of dictionary achieving the spark bound given by Gribonval and Nielsen. More precisely, let q be a power of 2. We show that for any positive integer t there exists a dictionary in ℝ^{q^{2t}}, a union of q + 1 orthonormal bases, such that the spark of the dictionary attains Gribonval and Nielsen's bound. Our result extends the previously best known result from t = 1, 2 to arbitrary positive integers t, and our construction is technically different from previous ones: their method is more combinatorial, while ours is algebraic and more general.
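For intuition, the spark (the smallest number of linearly dependent columns) can be computed by brute force on tiny dictionaries. The sketch below uses our own small example, not the paper's construction: a union of two orthonormal bases of ℝ⁴ (the identity and a normalized Hadamard basis) with mutual coherence μ = 1/2, whose spark attains the two-ONB bound 2/μ = 4.

```python
import numpy as np
from itertools import combinations

def spark(D, tol=1e-9):
    # smallest number of linearly dependent columns, by exhaustive search
    n, m = D.shape
    for k in range(1, m + 1):
        for idx in combinations(range(m), k):
            if np.linalg.matrix_rank(D[:, idx], tol=tol) < k:
                return k
    return m + 1   # full spark: no dependent column subset

H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2.0
D = np.hstack([np.eye(4), H])   # union of two orthonormal bases of R^4
print(spark(D))
```

Here e₁ + e₃ equals the sum of the first two Hadamard columns, giving a dependent set of size 4, while every 3-column subset is independent, so the spark is exactly 4.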
Pub Date: 2025-10-01 (Epub 2025-06-18), DOI: 10.1016/j.acha.2025.101789
ANOVA-boosting for random Fourier features
Daniel Potts, Laura Weidensager
Applied and Computational Harmonic Analysis, vol. 79, Article 101789
We propose two algorithms for boosting random Fourier feature models for approximating high-dimensional functions. These methods utilize the classical and generalized analysis of variance (ANOVA) decomposition to learn low-order functions, where there are few interactions between the variables. Our algorithms are able to find an index set of important input variables and variable interactions reliably.
Furthermore, we generalize already existing random Fourier feature models to an ANOVA setting, where terms of different order can be used. Our algorithms have the advantage of being interpretable, meaning that the influence of every input variable is known in the learned model, even for dependent input variables. We provide theoretical as well as numerical results showing that our algorithms perform well for sensitivity analysis. The ANOVA-boosting step reduces the approximation error of existing methods significantly.
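The baseline being boosted here is a plain random Fourier feature regression. The sketch below is that baseline only, in the Rahimi-Recht style, without any ANOVA step; the target function (which depends on two of three inputs, i.e. has low-order ANOVA structure), feature count, bandwidth, and ridge parameter are all arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
# target depends only on x1 and x2; x3 is inactive (low-order ANOVA structure)
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2

D = 200                                        # number of random features
Wf = rng.normal(scale=2.0, size=(3, D))        # frequencies from a Gaussian spectrum
b = rng.uniform(0, 2 * np.pi, size=D)
Phi = np.sqrt(2.0 / D) * np.cos(X @ Wf + b)    # random Fourier feature map
coef = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(D), Phi.T @ y)  # ridge fit
rmse = np.sqrt(np.mean((Phi @ coef - y) ** 2))
print(rmse)
```

The boosting step of the paper would additionally identify that x3 is inactive and restrict the model to low-order terms; the plain model above spends features on all interactions indiscriminately.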
Pub Date: 2025-10-01 (Epub 2025-07-16), DOI: 10.1016/j.acha.2025.101797
On the optimal approximation of Sobolev and Besov functions using deep ReLU neural networks
Yunfei Yang
Applied and Computational Harmonic Analysis, vol. 79, Article 101797
This paper studies the problem of how efficiently functions in the Sobolev spaces W^{s,q}([0,1]^d) and Besov spaces B^s_{q,r}([0,1]^d) can be approximated by deep ReLU neural networks with width W and depth L, when the error is measured in the L^p([0,1]^d) norm. This problem has been studied by several recent works, which obtained the approximation rate O((WL)^{−2s/d}) up to logarithmic factors when p = q = ∞, and the rate O(L^{−2s/d}) for networks with fixed width when the Sobolev embedding condition 1/q − 1/p < s/d holds. We generalize these results by showing that the rate O((WL)^{−2s/d}) indeed holds under the Sobolev embedding condition. It is known that this rate is optimal up to logarithmic factors. The key tool in our proof is a novel encoding of sparse vectors by deep ReLU neural networks with varied width and depth, which may be of independent interest.
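Why depth enters such rates at all is often illustrated by the classic depth-efficiency example (not this paper's sparse-vector encoding): a width-3 ReLU "hat" layer whose L-fold composition produces a sawtooth with 2^{L−1} teeth, i.e. exponentially many linear pieces per layer of depth.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # width-3 ReLU layer computing the tent map: 2x on [0,1/2], 2(1-x) on [1/2,1]
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1)

L = 4
x = np.linspace(0, 1, 1025)    # grid contains every dyadic point k/16 exactly
y = x.copy()
for _ in range(L):
    y = hat(y)                 # depth-L composition: sawtooth with 2**(L-1) teeth
peaks = int(np.sum(np.isclose(y, 1.0)))
print(peaks)
```

A shallow ReLU network needs a number of units proportional to the number of linear pieces, whereas depth achieves it with O(L) units, which is the intuition behind rates that improve in the product WL.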
Pub Date: 2025-10-01 (Epub 2025-08-06), DOI: 10.1016/j.acha.2025.101798
Permutation-invariant representations with applications to graph deep learning
Radu Balan, Naveed Haghani, Maneesh Singh
Applied and Computational Harmonic Analysis, vol. 79, Article 101798
This paper presents primarily two Euclidean embeddings of the quotient space generated by matrices that are identified modulo arbitrary row permutations. The original application is in deep learning on graphs, where the learning task is invariant to node relabeling. Two embedding schemes are introduced, one based on sorting and the other based on algebras of multivariate polynomials. While both embeddings exhibit a computational complexity exponential in problem size, the sorting-based embedding is globally bi-Lipschitz and admits a low-dimensional target space. Additionally, an almost everywhere injective scheme can be implemented with minimal redundancy and low computational cost. In turn, this proves that almost any classifier can be implemented with an arbitrarily small loss of performance. Numerical experiments are carried out on two datasets, a chemical compound dataset (QM9) and a proteins dataset (PROTEINS_FULL).
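The basic idea of a sorting-based representation can be seen in a toy form: map each matrix to a canonical representative of its orbit under row permutations, so that relabeled inputs collide by construction. This lexicographic row sort is our own minimal illustration of the invariance, not the paper's bi-Lipschitz embedding.

```python
import numpy as np

def sort_embedding(A):
    # canonical representative of the orbit of A under row permutations:
    # rows sorted lexicographically (first column primary, then second, ...)
    order = np.lexsort(A.T[::-1])
    return A[order]

A = np.array([[3.0, 1.0], [0.0, 2.0], [3.0, 0.0]])
P = np.eye(3)[[2, 0, 1]]              # a row permutation matrix
E1 = sort_embedding(A)
E2 = sort_embedding(P @ A)
print(np.array_equal(E1, E2))
```

Any permutation-invariant learning task can then be run on the canonical representative; the harder part, which the paper addresses, is doing this with Lipschitz control in both directions.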
Pub Date: 2025-08-01 (Epub 2025-03-27), DOI: 10.1016/j.acha.2025.101765
Duality for neural networks through Reproducing Kernel Banach Spaces
Len Spek, Tjeerd Jan Heeringa, Felix Schwenninger, Christoph Brune
Applied and Computational Harmonic Analysis, vol. 78, Article 101765
Reproducing Kernel Hilbert spaces (RKHS) have been a very successful tool in various areas of machine learning. Recently, Barron spaces have been used to prove bounds on the generalisation error for neural networks. Unfortunately, Barron spaces cannot be understood in terms of RKHS due to the strong nonlinear coupling of the weights. This can be solved by using the more general Reproducing Kernel Banach spaces (RKBS). We show that these Barron spaces belong to a class of integral RKBS. This class can also be understood as an infinite union of RKHS. Furthermore, we show that the dual space of such an RKBS is again an RKBS in which the roles of the data and the parameters are interchanged, forming an adjoint pair of RKBS including a reproducing kernel. This allows us to construct a saddle point problem for neural networks, which can be used in the whole field of primal-dual optimisation.
Pub Date: 2025-08-01 | Epub Date: 2025-05-08 | DOI: 10.1016/j.acha.2025.101775
Jun Fan , Jie Sun , Ailing Yan , Shenglong Zhou
Recovering an unknown signal from quadratic measurements has gained popularity due to its wide range of applications, including phase retrieval, fusion frame phase retrieval, and positive operator-valued measures. In this paper, we employ a least squares approach to reconstruct the signal and establish its non-asymptotic statistical properties. Our analysis shows that the estimator perfectly recovers the true signal in the noiseless case, while the error between the estimator and the true signal is bounded by O(√(p log(1+2n)/n)) in the noisy case, where n is the number of measurements and p is the dimension of the signal. We then develop a two-phase algorithm, the gradient regularized Newton method (GRNM), to solve the least squares problem. It is proven that the first phase terminates within finitely many steps, and that the sequence generated in the second phase converges to a unique local minimum at a superlinear rate under certain mild conditions. Beyond these deterministic results, GRNM exactly reconstructs the true signal in the noiseless case and achieves the stated error rate with high probability in the noisy case. Numerical experiments demonstrate that GRNM offers high recovery accuracy as well as fast computational speed.
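The least squares formulation described above can be sketched in a few lines (a minimal illustration with our own assumed setup: symmetric Gaussian measurement matrices, noiseless data, a warm start near the truth, and plain gradient descent rather than the paper's two-phase GRNM, which we do not reproduce here). Each measurement is y_i = x^T A_i x; phase retrieval is the rank-one special case A_i = a_i a_i^T. Note the inherent sign ambiguity: x and −x produce identical quadratic measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

p, n = 5, 200                       # signal dimension, number of measurements
x_true = rng.standard_normal(p)

# Symmetric measurement matrices A_i and noiseless quadratic measurements.
A = rng.standard_normal((n, p, p))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('i,nij,j->n', x_true, A, x_true)

def loss_grad(x):
    """Least squares objective (1/n) sum_i (x^T A_i x - y_i)^2 and its
    gradient (4/n) sum_i r_i A_i x, using that each A_i is symmetric."""
    r = np.einsum('i,nij,j->n', x, A, x) - y
    grad = 4 * np.einsum('n,nij,j->i', r, A, x) / n
    return (r @ r) / n, grad

# Gradient descent from a warm start; GRNM would switch to a regularized
# Newton phase here for superlinear local convergence.
x = x_true + 0.3 * rng.standard_normal(p)
for _ in range(2000):
    f, g = loss_grad(x)
    x -= 0.005 * g

# Recovery error up to the global sign ambiguity of quadratic measurements.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

In the noiseless setting the objective is locally well-conditioned near the truth, so even this first-order sketch drives the error to numerical zero; the paper's contribution is the stronger global and superlinear guarantees of the full two-phase method.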
{"title":"An oracle gradient regularized Newton method for quadratic measurements regression","authors":"Jun Fan , Jie Sun , Ailing Yan , Shenglong Zhou","doi":"10.1016/j.acha.2025.101775","DOIUrl":"10.1016/j.acha.2025.101775","url":null,"abstract":"<div><div>Recovering an unknown signal from quadratic measurements has gained popularity due to its wide range of applications, including phase retrieval, fusion frame phase retrieval, and positive operator-valued measures. In this paper, we employ a least squares approach to reconstruct the signal and establish its non-asymptotic statistical properties. Our analysis shows that the estimator perfectly recovers the true signal in the noiseless case, while the error between the estimator and the true signal is bounded by <span><math><mi>O</mi><mo>(</mo><msqrt><mrow><mi>p</mi><mi>log</mi><mo></mo><mo>(</mo><mn>1</mn><mo>+</mo><mn>2</mn><mi>n</mi><mo>)</mo><mo>/</mo><mi>n</mi></mrow></msqrt><mo>)</mo></math></span> in the noisy case, where <em>n</em> is the number of measurements and <em>p</em> is the dimension of the signal. We then develop a two-phase algorithm, gradient regularized Newton method (GRNM), to solve the least squares problem. It is proven that the first phase terminates within finitely many steps, and the sequence generated in the second phase converges to a unique local minimum at a superlinear rate under certain mild conditions. Beyond these deterministic results, GRNM is capable of exactly reconstructing the true signal in the noiseless case and achieving the stated error rate with a high probability in the noisy case. 
Numerical experiments demonstrate that GRNM offers a high level of recovery capability and accuracy as well as fast computational speed.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"78 ","pages":"Article 101775"},"PeriodicalIF":2.6,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143935916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}