Gaussian random field approximation via Stein's method with applications to wide random neural networks
Pub Date: 2024-05-13 | DOI: 10.1016/j.acha.2024.101668 | Applied and Computational Harmonic Analysis, Vol. 72, Article 101668
Krishnakumar Balasubramanian , Larry Goldstein , Nathan Ross , Adil Salim
We derive upper bounds on the Wasserstein distance (W1), with respect to the sup-norm, between any continuous R^d-valued random field indexed by the n-sphere and the Gaussian, based on Stein's method. We develop a novel Gaussian smoothing technique that allows us to transfer a bound in a smoother metric to the W1 distance. The smoothing is based on covariance functions constructed using powers of Laplacian operators, designed so that the associated Gaussian process has a tractable Cameron-Martin or reproducing kernel Hilbert space. This feature enables us to move beyond the one-dimensional interval-based index sets previously considered in the literature. Specializing our general result, we obtain the first bounds on the Gaussian random field approximation of wide random neural networks of any depth with Lipschitz activation functions, at the random field level. Our bounds are explicitly expressed in terms of the widths of the network and the moments of the random weights. We also obtain tighter bounds when the activation function has three bounded derivatives.
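The Gaussian limit that these bounds quantify can be observed empirically for a one-hidden-layer ReLU network, whose limiting covariance is the classical first-order arc-cosine kernel. A minimal sketch (not the paper's Stein-method machinery; the architecture, width, sample count, and test inputs are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 200, 5000                      # hidden width, number of independent networks
x = np.array([1.0, 0.0])              # two unit inputs at a 60-degree angle
y = np.array([0.5, np.sqrt(3.0) / 2.0])

# f(u) = n^{-1/2} * sum_j v_j * relu(w_j . u), with iid standard normal weights
W = rng.standard_normal((R, n, 2))
V = rng.standard_normal((R, n))
fx = (V * np.maximum(W @ x, 0.0)).sum(axis=1) / np.sqrt(n)
fy = (V * np.maximum(W @ y, 0.0)).sum(axis=1) / np.sqrt(n)

emp = np.mean(fx * fy)                # empirical covariance across networks
theta = np.pi / 3.0                   # angle between x and y
K = (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2.0 * np.pi)  # arc-cosine kernel
```

With the width and sample sizes above, the empirical covariance matches the limiting kernel to within Monte Carlo error.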
{"title":"Gaussian random field approximation via Stein's method with applications to wide random neural networks","authors":"Krishnakumar Balasubramanian , Larry Goldstein , Nathan Ross , Adil Salim","doi":"10.1016/j.acha.2024.101668","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101668","url":null,"abstract":"<div><p>We derive upper bounds on the Wasserstein distance (<span><math><msub><mrow><mi>W</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span>), with respect to sup-norm, between any continuous <span><math><msup><mrow><mi>R</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> valued random field indexed by the <em>n</em>-sphere and the Gaussian, based on Stein's method. We develop a novel Gaussian smoothing technique that allows us to transfer a bound in a smoother metric to the <span><math><msub><mrow><mi>W</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> distance. The smoothing is based on covariance functions constructed using powers of Laplacian operators, designed so that the associated Gaussian process has a tractable Cameron-Martin or Reproducing Kernel Hilbert Space. This feature enables us to move beyond one dimensional interval-based index sets that were previously considered in the literature. Specializing our general result, we obtain the first bounds on the Gaussian random field approximation of wide random neural networks of any depth and Lipschitz activation functions at the random field level. Our bounds are explicitly expressed in terms of the widths of the network and moments of the random weights. 
We also obtain tighter bounds when the activation function has three bounded derivatives.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"72 ","pages":"Article 101668"},"PeriodicalIF":2.5,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140951223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differentially private federated learning with Laplacian smoothing
Pub Date: 2024-05-07 | DOI: 10.1016/j.acha.2024.101660 | Applied and Computational Harmonic Analysis, Vol. 72, Article 101660
Zhicong Liang , Bao Wang , Quanquan Gu , Stanley Osher , Yuan Yao
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users. However, an adversary may still be able to infer the private training data by attacking the released model. Differential privacy provides statistical protection against such attacks, at the price of significantly degrading the accuracy or utility of the trained models. In this paper, we investigate a utility enhancement scheme based on Laplacian smoothing for differentially private federated learning (DP-Fed-LS), to improve the statistical precision of parameter aggregation with injected Gaussian noise without losing privacy budget. Our key observation is that the aggregated gradients in federated learning often enjoy a type of smoothness, i.e. sparsity in a graph Fourier basis with polynomial decay of the Fourier coefficients as frequency grows, which Laplacian smoothing can exploit efficiently. Under a prescribed differential privacy budget, convergence error bounds with tight rates are provided for DP-Fed-LS with uniform subsampling of heterogeneous non-iid data, revealing possible utility improvement of Laplacian smoothing in effective dimensionality and variance reduction, among others. Experiments over MNIST, SVHN, and Shakespeare datasets show that the proposed method can improve model accuracy with a DP guarantee and membership privacy under both uniform and Poisson subsampling mechanisms.
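The Laplacian smoothing step can be sketched in one dimension: premultiply the noisy aggregated gradient by (I + σA)^{-1}, where A is the circulant discrete Laplacian, which is diagonal in the Fourier basis and hence applicable in O(d log d) via the FFT. A hedged sketch (σ, the clean signal, and the noise level are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma = 512, 1.0

# noisy "aggregated gradient": smooth signal plus injected Gaussian noise
g = np.sin(np.linspace(0.0, 4.0 * np.pi, d, endpoint=False))
g = g + 0.5 * rng.standard_normal(d)

# eigenvalues of I + sigma * A for the circulant Laplacian A = circ(2, -1, 0, ..., -1)
k = np.arange(d)
eig = 1.0 + sigma * (2.0 - 2.0 * np.cos(2.0 * np.pi * k / d))

# smoothed gradient: solve (I + sigma * A) g_s = g in the Fourier domain
g_s = np.fft.ifft(np.fft.fft(g) / eig).real
```

Because the zero-frequency eigenvalue is exactly 1, smoothing leaves the mean of the gradient unchanged while damping the high-frequency noise.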
{"title":"Differentially private federated learning with Laplacian smoothing","authors":"Zhicong Liang , Bao Wang , Quanquan Gu , Stanley Osher , Yuan Yao","doi":"10.1016/j.acha.2024.101660","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101660","url":null,"abstract":"<div><p>Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users. However, an adversary may still be able to infer the private training data by attacking the released model. Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models. In this paper, we investigate a utility enhancement scheme based on Laplacian smoothing for differentially private federated learning (DP-Fed-LS), to improve the statistical precision of parameter aggregation with injected Gaussian noise without losing privacy budget. Our key observation is that the aggregated gradients in federated learning often enjoy a type of smoothness, <em>i.e.</em> sparsity in a graph Fourier basis with polynomial decays of Fourier coefficients as frequency grows, which can be exploited by the Laplacian smoothing efficiently. Under a prescribed differential privacy budget, convergence error bounds with tight rates are provided for DP-Fed-LS with uniform subsampling of heterogeneous <strong>non-iid</strong> data, revealing possible utility improvement of Laplacian smoothing in effective dimensionality and variance reduction, among others. 
Experiments over MNIST, SVHN, and Shakespeare datasets show that the proposed method can improve model accuracy with DP-guarantee and membership privacy under both uniform and Poisson subsampling mechanisms.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"72 ","pages":"Article 101660"},"PeriodicalIF":2.5,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140906086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The mystery of Carleson frames
Pub Date: 2024-04-17 | DOI: 10.1016/j.acha.2024.101659 | Applied and Computational Harmonic Analysis, Vol. 72, Article 101659
Ole Christensen , Marzieh Hasannasab , Friedrich M. Philipp , Diana Stoeva
In 2016, Aldroubi et al. constructed the first class of frames having the form {T^k φ}_{k=0}^∞ for a bounded linear operator T on the underlying Hilbert space. In this paper we show that a subclass of these frames has a number of additional remarkable features that have not been identified for any other frames in the literature. Most importantly, the subfamily obtained by selecting every Nth element from the frame is itself a frame, regardless of the choice of N ∈ N. Furthermore, the frame property is preserved upon removal of an arbitrary finite number of elements.
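The frame property is easy to test numerically in finite dimensions: a finite family is a frame iff its frame operator S = Σ f_k f_k* has a strictly positive smallest eigenvalue. A toy finite-dimensional sketch with a harmonic frame (not the operator-orbit frames {T^k φ} of the paper; M and N are illustrative), showing that every-Nth subsampling can preserve the frame property:

```python
import numpy as np

M, N = 12, 3
t = 2.0 * np.pi * np.arange(M) / M
F = np.stack([np.cos(t), np.sin(t)], axis=1)   # M unit vectors in R^2 (harmonic frame)

S = F.T @ F                                    # frame operator of the full family
S_sub = F[::N].T @ F[::N]                      # frame operator after keeping every Nth vector

lo_full = np.linalg.eigvalsh(S)[0]             # lower frame bound of the full frame
lo_sub = np.linalg.eigvalsh(S_sub)[0]          # lower frame bound after subsampling
```

For this tight harmonic frame, S = (M/2) I, and the subsampled family of M/N equally spaced vectors is again a tight frame with frame operator (M/(2N)) I.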
{"title":"The mystery of Carleson frames","authors":"Ole Christensen , Marzieh Hasannasab , Friedrich M. Philipp , Diana Stoeva","doi":"10.1016/j.acha.2024.101659","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101659","url":null,"abstract":"<div><p>In 2016 Aldroubi et al. constructed the first class of frames having the form <span><math><msubsup><mrow><mo>{</mo><msup><mrow><mi>T</mi></mrow><mrow><mi>k</mi></mrow></msup><mi>φ</mi><mo>}</mo></mrow><mrow><mi>k</mi><mo>=</mo><mn>0</mn></mrow><mrow><mo>∞</mo></mrow></msubsup></math></span> for a bounded linear operator on the underlying Hilbert space. In this paper we show that a subclass of these frames has a number of additional remarkable features that have not been identified for any other frames in the literature. Most importantly, the subfamily obtained by selecting each <em>N</em>th element from the frame is itself a frame, regardless of the choice of <span><math><mi>N</mi><mo>∈</mo><mi>N</mi></math></span>. Furthermore, the frame property is kept upon removal of an arbitrarily finite number of elements.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"72 ","pages":"Article 101659"},"PeriodicalIF":2.5,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140632499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effectiveness of the tail-atomic norm in gridless spectrum estimation
Pub Date: 2024-04-16 | DOI: 10.1016/j.acha.2024.101658 | Applied and Computational Harmonic Analysis, Vol. 72, Article 101658
Wei Li , Shidong Li , Jun Xian
An effective tail-atomic norm methodology and algorithms for gridless spectral estimation are developed via a tail-minimization mechanism. We prove that the tail-atomic norm can equivalently be reformulated as a positive semi-definite (PSD) programming problem. Some delicate and critical weighting constraints are derived. Iterative tail-minimization algorithms based on PSD programming are also derived and implemented. Extensive simulation results demonstrate that the tail-atomic norm mechanism substantially outperforms state-of-the-art gridless spectral estimation techniques. Numerical studies also show that the tail-atomic norm approach is more robust to noisy measurements than other known atomic norm methodologies.
{"title":"Effectiveness of the tail-atomic norm in gridless spectrum estimation","authors":"Wei Li , Shidong Li , Jun Xian","doi":"10.1016/j.acha.2024.101658","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101658","url":null,"abstract":"<div><p>An effective tail-atomic norm methodology and algorithms for gridless spectral estimations are developed with a tail-minimization mechanism. We prove that the tail-atomic norm can be equivalently reformulated as a positive semi-definite programming (PSD) problem as well. Some delicate and critical weighting constraints are derived. Iterative tail-minimization algorithms based on PSD programming are also derived and implemented. Extensive simulation results demonstrate that the tail-atomic norm mechanism substantially outperforms state-of-the-art gridless spectral estimation techniques. Numerical studies also show that the tail-atomic norm approach is more robust to noisy measurements than other known related atomic norm methodologies.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"72 ","pages":"Article 101658"},"PeriodicalIF":2.5,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140632498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complex-order scale-invariant operators and self-similar processes
Pub Date: 2024-04-04 | DOI: 10.1016/j.acha.2024.101656 | Applied and Computational Harmonic Analysis, Vol. 72, Article 101656
Arash Amini , Julien Fageot , Michael Unser
In this paper, we perform a joint study of scale-invariant operators and self-similar processes of complex order. More precisely, we introduce general families of scale-invariant complex-order fractional derivation and integration operators by constructing them in the Fourier domain. We analyze these operators in detail, with special emphasis on the decay properties of their output. We further use them to introduce a family of complex-valued stable processes that are self-similar with complex-valued Hurst exponents. These random processes are expressed via their characteristic functionals over the Schwartz space of functions, and are therefore defined as generalized random processes in the sense of Gel'fand. Besides their self-similarity and stationarity, we study the Sobolev regularity of the proposed random processes. Our work illustrates the strong connection between scale-invariant operators and self-similar processes, with the construction of adequate complex-order scale-invariant integration operators being preparatory to the construction of the random processes.
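The Fourier-domain construction of fractional-order derivative operators can be sketched directly on a periodic grid: multiply the transform by (iω)^γ (principal branch, an assumption of this sketch) and invert. The sanity check below uses γ = 1, where the answer is the classical derivative; γ may be set to any fractional or complex value in the same code:

```python
import numpy as np

d = 256
x = np.linspace(0.0, 2.0 * np.pi, d, endpoint=False)
f = np.sin(x)

k = np.fft.fftfreq(d, d=1.0 / d)       # integer frequencies 0, 1, ..., -1
gamma = 1.0                            # order; could be e.g. 0.5 + 0.3j (principal branch)

mult = np.zeros(d, dtype=complex)
nz = k != 0
mult[nz] = (1j * k[nz]) ** gamma       # symbol of the order-gamma derivative
# the k = 0 mode is set to zero, a common convention for fractional orders

df = np.fft.ifft(mult * np.fft.fft(f)).real
```

For γ = 1 this reproduces d/dx sin(x) = cos(x) to machine precision.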
{"title":"Complex-order scale-invariant operators and self-similar processes","authors":"Arash Amini , Julien Fageot , Michael Unser","doi":"10.1016/j.acha.2024.101656","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101656","url":null,"abstract":"<div><p>In this paper, we perform the joint study of scale-invariant operators and self-similar processes of complex order. More precisely, we introduce general families of scale-invariant complex-order fractional-derivation and integration operators by constructing them in the Fourier domain. We analyze these operators in detail, with special emphasis on the decay properties of their output. We further use them to introduce a family of complex-valued stable processes that are self-similar with complex-valued Hurst exponents. These random processes are expressed via their characteristic functionals over the Schwartz space of functions. They are therefore defined as generalized random processes in the sense of Gel'fand. Beside their self-similarity and stationarity, we study the Sobolev regularity of the proposed random processes. Our work illustrates the strong connection between scale-invariant operators and self-similar processes, with the construction of adequate complex-order scale-invariant integration operators being preparatory to the construction of the random processes.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"72 ","pages":"Article 101656"},"PeriodicalIF":2.5,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140844213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Error bounds for kernel-based approximations of the Koopman operator
Pub Date: 2024-04-04 | DOI: 10.1016/j.acha.2024.101657 | Applied and Computational Harmonic Analysis, Vol. 71, Article 101657
Friedrich M. Philipp , Manuel Schaller , Karl Worthmann , Sebastian Peitz , Feliks Nüske
We consider the data-driven approximation of the Koopman operator for stochastic differential equations on reproducing kernel Hilbert spaces (RKHS). Our focus is on the estimation error if the data are collected from long-term ergodic simulations. We derive both an exact expression for the variance of the kernel cross-covariance operator, measured in the Hilbert-Schmidt norm, and probabilistic bounds for the finite-data estimation error. Moreover, we derive a bound on the prediction error of observables in the RKHS using a finite Mercer series expansion. Further, assuming Koopman-invariance of the RKHS, we provide bounds on the full approximation error. Numerical experiments using the Ornstein-Uhlenbeck process illustrate our results.
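The kernel-based estimator under study can be sketched as kernel EDMD: from snapshot pairs of a long ergodic simulation, build the Gram matrix G and cross-covariance matrix A and form the regularized empirical Koopman matrix (G + λI)^{-1} A. A sketch on a simulated Ornstein-Uhlenbeck path (kernel bandwidth, lag, subsample, and regularization are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama path of dX = -X dt + sqrt(2) dW
dt, steps = 0.01, 5000
x = np.zeros(steps)
for i in range(steps - 1):
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(2.0 * dt) * rng.standard_normal()

idx = np.arange(0, steps - 10, 16)[:300]
X, Y = x[idx], x[idx + 10]              # lag-10 snapshot pairs from the ergodic path

def kern(a, b):                         # Gaussian kernel, bandwidth an assumption
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / 2.0)

G, A = kern(X, X), kern(X, Y)           # Gram and kernel cross-covariance matrices
reg = 1e-3 * len(X)                     # Tikhonov regularization
Kop = np.linalg.solve(G + reg * np.eye(len(X)), A)   # empirical Koopman matrix
lam = np.linalg.eigvals(Kop)
```

For the OU process the true Koopman spectrum at lag τ is {e^{-nτ}}, so the estimated eigenvalues should concentrate inside the unit disc with the leading one near 1.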
{"title":"Error bounds for kernel-based approximations of the Koopman operator","authors":"Friedrich M. Philipp , Manuel Schaller , Karl Worthmann , Sebastian Peitz , Feliks Nüske","doi":"10.1016/j.acha.2024.101657","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101657","url":null,"abstract":"<div><p>We consider the data-driven approximation of the Koopman operator for stochastic differential equations on reproducing kernel Hilbert spaces (RKHS). Our focus is on the estimation error if the data are collected from long-term ergodic simulations. We derive both an exact expression for the variance of the kernel cross-covariance operator, measured in the Hilbert-Schmidt norm, and probabilistic bounds for the finite-data estimation error. Moreover, we derive a bound on the prediction error of observables in the RKHS using a finite Mercer series expansion. Further, assuming Koopman-invariance of the RKHS, we provide bounds on the full approximation error. Numerical experiments using the Ornstein-Uhlenbeck process illustrate our results.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101657"},"PeriodicalIF":2.5,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1063520324000344/pdfft?md5=f01b4b57c82f431fd15e3f589cf72791&pid=1-s2.0-S1063520324000344-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140542763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frame set for Gabor systems with Haar window
Pub Date: 2024-03-18 | DOI: 10.1016/j.acha.2024.101655 | Applied and Computational Harmonic Analysis, Vol. 71, Article 101655
Xin-Rong Dai , Meng Zhu
We describe the full structure of the frame set for the Gabor system G(g; α, β) := {e^{−2πimβ·} g(· − nα) : m, n ∈ Z} with the window being the Haar function g = −χ_{[−1/2, 0)} + χ_{[0, 1/2)}. This is the first compactly supported window function for which the frame set is represented explicitly.
The strategy of this paper is to introduce a piecewise linear transformation M on the unit circle, and to provide a complete characterization of the structure of its (symmetric) maximal invariant sets. This transformation is related to the famous three gap theorem of Steinhaus and may be of independent interest. Furthermore, a classical criterion on Gabor frames is improved, which allows us to establish a necessary and sufficient condition for the Gabor system G(g; α, β) to be a frame, namely that the symmetric invariant set of the transformation M is empty.
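The three gap theorem referenced above is easy to observe numerically: for irrational α, the points {kα mod 1}, k = 0, …, n−1, partition the circle into gaps of at most three distinct lengths. A small sketch (the choices α = √2 and n = 50 are arbitrary):

```python
import numpy as np

alpha, n = np.sqrt(2.0), 50
pts = np.sort((alpha * np.arange(n)) % 1.0)        # {k*alpha mod 1} sorted on the circle

# consecutive gaps, including the wrap-around gap back to the first point
gaps = np.diff(np.concatenate([pts, [pts[0] + 1.0]]))

# round away floating-point noise before counting distinct gap lengths
distinct = np.unique(np.round(gaps, 9))
```

The Steinhaus theorem guarantees `len(distinct) <= 3` for every irrational α and every n.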
{"title":"Frame set for Gabor systems with Haar window","authors":"Xin-Rong Dai , Meng Zhu","doi":"10.1016/j.acha.2024.101655","DOIUrl":"10.1016/j.acha.2024.101655","url":null,"abstract":"<div><p>We describe the full structure of the frame set for the Gabor system <span><math><mi>G</mi><mo>(</mo><mi>g</mi><mo>;</mo><mi>α</mi><mo>,</mo><mi>β</mi><mo>)</mo><mo>:</mo><mo>=</mo><mo>{</mo><msup><mrow><mi>e</mi></mrow><mrow><mo>−</mo><mn>2</mn><mi>π</mi><mi>i</mi><mi>m</mi><mi>β</mi><mo>⋅</mo></mrow></msup><mi>g</mi><mo>(</mo><mo>⋅</mo><mo>−</mo><mi>n</mi><mi>α</mi><mo>)</mo><mo>:</mo><mi>m</mi><mo>,</mo><mi>n</mi><mo>∈</mo><mi>Z</mi><mo>}</mo></math></span> with the window being the Haar function <span><math><mi>g</mi><mo>=</mo><mo>−</mo><msub><mrow><mi>χ</mi></mrow><mrow><mo>[</mo><mo>−</mo><mn>1</mn><mo>/</mo><mn>2</mn><mo>,</mo><mn>0</mn><mo>)</mo></mrow></msub><mo>+</mo><msub><mrow><mi>χ</mi></mrow><mrow><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>/</mo><mn>2</mn><mo>)</mo></mrow></msub></math></span>. This is the first compactly supported window function for which the frame set is represented explicitly.</p><p>The strategy of this paper is to introduce the piecewise linear transformation <span><math><mi>M</mi></math></span> on the unit circle, and to provide a complete characterization of structures for its (symmetric) maximal invariant sets. This transformation is related to the famous three gap theorem of Steinhaus which may be of independent interest. 
Furthermore, a classical criterion on Gabor frames is improved, which allows us to establish a necessary and sufficient condition for the Gabor system <span><math><mi>G</mi><mo>(</mo><mi>g</mi><mo>;</mo><mi>α</mi><mo>,</mo><mi>β</mi><mo>)</mo></math></span> to be a frame, i.e., the symmetric invariant set of the transformation <span><math><mi>M</mi></math></span> is empty.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101655"},"PeriodicalIF":2.5,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140182420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frame set for shifted sinc-function
Pub Date: 2024-03-16 | DOI: 10.1016/j.acha.2024.101654 | Applied and Computational Harmonic Analysis, Vol. 71, Article 101654
Yurii Belov , Andrei V. Semenov
We prove that the frame set F_g for the imaginary shift of the sinc-function

g(t) = sin(πb(t − iw)) / (t − iw),   b, w ∈ R ∖ {0},

can be described as F_g = {(α, β) : αβ ⩽ 1, β ⩽ |b|}.
In addition, we prove that F_g = {(α, β) : αβ ⩽ 1} for window functions g of the form

(1/(t − iw)) (1 − Σ_{k=1}^∞ a_k e^{2πi b_k t}),

such that Σ_{k⩾1} |a_k| e^{2π|w b_k|} < 1 and w b_k < 0.
{"title":"Frame set for shifted sinc-function","authors":"Yurii Belov , Andrei V. Semenov","doi":"10.1016/j.acha.2024.101654","DOIUrl":"10.1016/j.acha.2024.101654","url":null,"abstract":"<div><p>We prove that frame set <span><math><msub><mrow><mi>F</mi></mrow><mrow><mi>g</mi></mrow></msub></math></span> for imaginary shift of sinc-function<span><span><span><math><mi>g</mi><mo>(</mo><mi>t</mi><mo>)</mo><mo>=</mo><mfrac><mrow><mi>sin</mi><mo></mo><mi>π</mi><mi>b</mi><mo>(</mo><mi>t</mi><mo>−</mo><mi>i</mi><mi>w</mi><mo>)</mo></mrow><mrow><mi>t</mi><mo>−</mo><mi>i</mi><mi>w</mi></mrow></mfrac><mo>,</mo><mspace></mspace><mi>b</mi><mo>,</mo><mi>w</mi><mo>∈</mo><mi>R</mi><mo>∖</mo><mo>{</mo><mn>0</mn><mo>}</mo></math></span></span></span> can be described as <span><math><msub><mrow><mi>F</mi></mrow><mrow><mi>g</mi></mrow></msub><mo>=</mo><mo>{</mo><mo>(</mo><mi>α</mi><mo>,</mo><mi>β</mi><mo>)</mo><mo>:</mo><mi>α</mi><mi>β</mi><mo>⩽</mo><mn>1</mn><mo>,</mo><mi>β</mi><mo>⩽</mo><mo>|</mo><mi>b</mi><mo>|</mo><mo>}</mo></math></span>.</p><p>In addition, we prove that <span><math><msub><mrow><mi>F</mi></mrow><mrow><mi>g</mi></mrow></msub><mo>=</mo><mo>{</mo><mo>(</mo><mi>α</mi><mo>,</mo><mi>β</mi><mo>)</mo><mo>:</mo><mi>α</mi><mi>β</mi><mo>⩽</mo><mn>1</mn><mo>}</mo></math></span> for window functions <em>g</em> of the form<span><span><span><math><mfrac><mrow><mn>1</mn></mrow><mrow><mi>t</mi><mo>−</mo><mi>i</mi><mi>w</mi></mrow></mfrac><mo>(</mo><mn>1</mn><mo>−</mo><munderover><mo>∑</mo><mrow><mi>k</mi><mo>=</mo><mn>1</mn></mrow><mrow><mo>∞</mo></mrow></munderover><msub><mrow><mi>a</mi></mrow><mrow><mi>k</mi></mrow></msub><msup><mrow><mi>e</mi></mrow><mrow><mn>2</mn><mi>π</mi><mi>i</mi><msub><mrow><mi>b</mi></mrow><mrow><mi>k</mi></mrow></msub><mi>t</mi></mrow></msup><mo>)</mo><mo>,</mo></math></span></span></span> such that 
<span><math><msub><mrow><mo>∑</mo></mrow><mrow><mi>k</mi><mo>⩾</mo><mn>1</mn></mrow></msub><mo>|</mo><msub><mrow><mi>a</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>|</mo><msup><mrow><mi>e</mi></mrow><mrow><mn>2</mn><mi>π</mi><mo>|</mo><mi>w</mi><msub><mrow><mi>b</mi></mrow><mrow><mi>k</mi></mrow></msub><mo>|</mo></mrow></msup><mo><</mo><mn>1</mn></math></span>, <span><math><mi>w</mi><msub><mrow><mi>b</mi></mrow><mrow><mi>k</mi></mrow></msub><mo><</mo><mn>0</mn></math></span>.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101654"},"PeriodicalIF":2.5,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140182455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eigenmatrix for unstructured sparse recovery
Pub Date: 2024-03-14 | DOI: 10.1016/j.acha.2024.101653 | Applied and Computational Harmonic Analysis, Vol. 71, Article 101653
Lexing Ying
This note considers unstructured sparse recovery problems in a general form. Examples include rational approximation, spectral function estimation, Fourier inversion, Laplace inversion, and sparse deconvolution. The main challenges are the noise in the sample values and the unstructured nature of the sample locations. This note proposes the eigenmatrix, a data-driven construction with desired approximate eigenvalues and eigenvectors, which offers a new approach to these sparse recovery problems. Numerical results demonstrate the efficiency of the proposed method.
{"title":"Eigenmatrix for unstructured sparse recovery","authors":"Lexing Ying","doi":"10.1016/j.acha.2024.101653","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101653","url":null,"abstract":"<div><p>This note considers the unstructured sparse recovery problems in a general form. Examples include rational approximation, spectral function estimation, Fourier inversion, Laplace inversion, and sparse deconvolution. The main challenges are the noise in the sample values and the unstructured nature of the sample locations. This note proposes the eigenmatrix, a data-driven construction with desired approximate eigenvalues and eigenvectors. The eigenmatrix offers a new way for these sparse recovery problems. Numerical results are provided to demonstrate the efficiency of the proposed method.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101653"},"PeriodicalIF":2.5,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140133908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solving PDEs on unknown manifolds with machine learning
Pub Date: 2024-02-29 | DOI: 10.1016/j.acha.2024.101652 | Applied and Computational Harmonic Analysis, Vol. 71, Article 101652
Senwei Liang , Shixiao W. Jiang , John Harlim , Haizhao Yang
This paper proposes a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds, identified with point clouds, based on diffusion maps (DM) and deep learning. The PDE solver is formulated as a supervised learning task to solve a least-squares regression problem that imposes an algebraic equation approximating a PDE (and boundary conditions if applicable). This algebraic equation involves a graph-Laplacian type matrix obtained via DM asymptotic expansion, which is a consistent estimator of second-order elliptic differential operators. The resulting numerical method solves a highly non-convex empirical risk minimization problem, with the solution sought in a hypothesis space of neural networks (NNs). In a well-posed elliptic PDE setting, when the hypothesis space consists of neural networks with either infinite width or depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data. When the hypothesis space is a two-layer neural network, we show that for a sufficiently large width, gradient descent can identify a global minimizer of the empirical loss function. Supporting numerical examples demonstrate the convergence of the solutions, ranging from simple manifolds with low and high co-dimensions to rough surfaces with and without boundaries. We also show that the proposed NN solver can robustly generalize the PDE solution to new data points, with generalization errors almost identical to the training errors, superseding a Nyström-based interpolation method.
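The diffusion-maps ingredient can be sketched on a point cloud sampled from the unit circle: a normalized kernel matrix built from pairwise distances acts as a discrete Laplace-Beltrami operator, so its leading nontrivial eigenvectors recover the first circular harmonics. A toy sketch (not the paper's NN solver; the sample size N and bandwidth eps are illustrative):

```python
import numpy as np

N, eps = 200, 0.05
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
P = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # point cloud on the unit circle

# Gaussian kernel on ambient pairwise distances, symmetrically normalized
D2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / eps)
d = K.sum(axis=1)
A = K / np.sqrt(np.outer(d, d))

w, V = np.linalg.eigh(A)          # ascending eigenvalues; w[-1] = 1 is the trivial mode
v = V[:, -2]                      # first nontrivial eigenvector

# residual of projecting v onto the first harmonics span{sin(theta), cos(theta)}
s, c = np.sin(theta), np.cos(theta)
proj = (v @ s) / (s @ s) * s + (v @ c) / (c @ c) * c
resid = np.linalg.norm(v - proj) / np.linalg.norm(v)
```

On this uniform sample the kernel matrix is circulant, so the nontrivial eigenvectors coincide with the circular harmonics up to machine precision.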
{"title":"Solving PDEs on unknown manifolds with machine learning","authors":"Senwei Liang , Shixiao W. Jiang , John Harlim , Haizhao Yang","doi":"10.1016/j.acha.2024.101652","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101652","url":null,"abstract":"<div><p>This paper proposes a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds, identified with point clouds, based on diffusion maps (DM) and deep learning. The PDE solver is formulated as a supervised learning task to solve a least-squares regression problem that imposes an algebraic equation approximating a PDE (and boundary conditions if applicable). This algebraic equation involves a graph-Laplacian type matrix obtained via DM asymptotic expansion, which is a consistent estimator of second-order elliptic differential operators. The resulting numerical method is to solve a highly non-convex empirical risk minimization problem subjected to a solution from a hypothesis space of neural networks (NNs). In a well-posed elliptic PDE setting, when the hypothesis space consists of neural networks with either infinite width or depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data. When the hypothesis space is a two-layer neural network, we show that for a sufficiently large width, gradient descent can identify a global minimizer of the empirical loss function. Supporting numerical examples demonstrate the convergence of the solutions, ranging from simple manifolds with low and high co-dimensions, to rough surfaces with and without boundaries. 
We also show that the proposed NN solver can robustly generalize the PDE solution on new data points with generalization errors that are almost identical to the training errors, superseding a Nyström-based interpolation method.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101652"},"PeriodicalIF":2.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140014304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}