Pub Date : 2024-02-29 DOI: 10.1016/j.acha.2024.101650
Zai Yang , Yi-Lin Mo , Zongben Xu
Atomic norm methods have recently been proposed for spectral super-resolution with flexibility in dealing with missing data and miscellaneous noises. A notorious drawback of these convex optimization methods, however, is their lower resolution in the high signal-to-noise ratio (SNR) regime compared to conventional methods such as ESPRIT. In this paper, we devise a simple weighting scheme in existing atomic norm methods and show that in theory the resolution of the resulting convex optimization method can be made arbitrarily high in the absence of noise, achieving the so-called separation-free super-resolution. This is proved by a novel, kernel-free construction of the dual certificate whose existence guarantees exact super-resolution using the proposed method. Numerical results corroborating our analysis are provided.
{"title":"Separation-free spectral super-resolution via convex optimization","authors":"Zai Yang , Yi-Lin Mo , Zongben Xu","doi":"10.1016/j.acha.2024.101650","DOIUrl":"10.1016/j.acha.2024.101650","url":null,"abstract":"<div><p>Atomic norm methods have recently been proposed for spectral super-resolution with flexibility in dealing with missing data and miscellaneous noises. A notorious drawback of these convex optimization methods however is their lower resolution in the high signal-to-noise (SNR) regime as compared to conventional methods such as ESPRIT. In this paper, we devise a simple weighting scheme in existing atomic norm methods and show that in theory the resolution of the resulting convex optimization method can be made arbitrarily high in the absence of noise, achieving the so-called separation-free super-resolution. This is proved by a novel, kernel-free construction of the dual certificate whose existence guarantees exact super-resolution using the proposed method. Numerical results corroborating our analysis are provided.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101650"},"PeriodicalIF":2.5,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140043807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
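The abstract above uses ESPRIT as the conventional baseline whose high-SNR resolution the weighted atomic norm method aims to match. A minimal noiseless ESPRIT sketch (numpy only; this is the classical subspace method, not the paper's convex program):

```python
import numpy as np

def esprit(x, K, L):
    """Estimate K frequencies in [0, 1) from uniform samples x via ESPRIT.

    x : complex samples x[n] = sum_k c_k exp(2j*pi*f_k*n), n = 0..N-1
    K : number of sinusoids; L : subspace window length, K < L < N.
    """
    N = len(x)
    # Hankel matrix whose columns are length-L sliding windows of x.
    H = np.column_stack([x[j:j + L] for j in range(N - L + 1)])
    # Signal subspace: top-K left singular vectors.
    U = np.linalg.svd(H, full_matrices=False)[0][:, :K]
    # Shift invariance: U[:-1] @ Psi ~ U[1:]; frequencies from eig(Psi).
    Psi = np.linalg.pinv(U[:-1]) @ U[1:]
    return np.sort(np.angle(np.linalg.eigvals(Psi)) / (2 * np.pi) % 1.0)

# Two closely spaced tones, noiseless: ESPRIT resolves them exactly.
n = np.arange(64)
f_true = np.array([0.20, 0.22])
x = sum(np.exp(2j * np.pi * f * n) for f in f_true)
print(esprit(x, K=2, L=32))
```

In the noiseless case the data matrix has exact rank K, so the recovered frequencies are exact up to rounding regardless of separation, which is the resolution behavior the paper's separation-free guarantee targets for the convex approach.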
Pub Date : 2024-02-29 DOI: 10.1016/j.acha.2024.101651
Frank Filbir , Ralf Hielscher , Thomas Jahn , Tino Ullrich
The recovery of multivariate functions and the estimation of their integrals from finitely many samples is one of the central tasks in modern approximation theory. Marcinkiewicz–Zygmund inequalities provide answers to both the recovery and the quadrature aspect. In this paper, we put ourselves on the q-dimensional sphere S^q, and investigate how well continuous L_p-norms of polynomials f of maximum degree n on the sphere S^q can be discretized by positively weighted L_p-sums of finitely many samples, and discuss the distortion between the continuous and discrete quantities, the number and distribution of the (deterministic or randomly chosen) sample points ξ_1, …, ξ_N on S^q, the dimension q, and the degree n of the polynomials.
{"title":"Marcinkiewicz–Zygmund inequalities for scattered and random data on the q-sphere","authors":"Frank Filbir , Ralf Hielscher , Thomas Jahn , Tino Ullrich","doi":"10.1016/j.acha.2024.101651","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101651","url":null,"abstract":"<div><p>The recovery of multivariate functions and estimating their integrals from finitely many samples is one of the central tasks in modern approximation theory. Marcinkiewicz–Zygmund inequalities provide answers to both the recovery and the quadrature aspect. In this paper, we put ourselves on the <em>q</em>-dimensional sphere <span><math><msup><mrow><mi>S</mi></mrow><mrow><mi>q</mi></mrow></msup></math></span>, and investigate how well continuous <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span>-norms of polynomials <em>f</em> of maximum degree <em>n</em> on the sphere <span><math><msup><mrow><mi>S</mi></mrow><mrow><mi>q</mi></mrow></msup></math></span> can be discretized by positively weighted <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span>-sum of finitely many samples, and discuss the distortion between the continuous and discrete quantities, the number and distribution of the (deterministic or randomly chosen) sample points <span><math><msub><mrow><mi>ξ</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><mo>…</mo><mo>,</mo><msub><mrow><mi>ξ</mi></mrow><mrow><mi>N</mi></mrow></msub></math></span> on <span><math><msup><mrow><mi>S</mi></mrow><mrow><mi>q</mi></mrow></msup></math></span>, the dimension <em>q</em>, and the degree <em>n</em> of the polynomials.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101651"},"PeriodicalIF":2.5,"publicationDate":"2024-02-29","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1063520324000289/pdfft?md5=c98b0bf5b8b162d91ccc058130ea9e34&pid=1-s2.0-S1063520324000289-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140042516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
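The simplest instance of such a discretization is the circle (q = 1) with p = 2: for trigonometric polynomials of degree n, any N ≥ 2n + 1 equispaced nodes with equal weights 1/N turn the Marcinkiewicz–Zygmund inequality into an exact identity. A sketch of that special case (not the scattered/random-node results of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7                      # polynomial degree
N = 2 * n + 1              # equispaced nodes; N > 2n avoids aliasing
c = rng.standard_normal(2 * n + 1) + 1j * rng.standard_normal(2 * n + 1)

theta = 2 * np.pi * np.arange(N) / N
k = np.arange(-n, n + 1)
f = np.exp(1j * np.outer(theta, k)) @ c   # f(theta_j) = sum_k c_k e^{ik theta_j}

continuous = np.sum(np.abs(c) ** 2)       # (1/2pi) int |f|^2 by Parseval
discrete = np.mean(np.abs(f) ** 2)        # (1/N) sum_j |f(theta_j)|^2
print(abs(continuous - discrete))         # exact up to rounding
```

The identity holds because (1/N) Σ_j e^{imθ_j} vanishes for all 0 < |m| ≤ 2n < N, so no frequency aliases onto another.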
Pub Date : 2024-02-28 DOI: 10.1016/j.acha.2024.101639
Aleksei Kulikov
For a pair of sets T, Ω ⊂ ℝ the time-frequency localization operator is defined as S_{T,Ω} = P_T F^{-1} P_Ω F P_T, where F is the Fourier transform and P_T, P_Ω are projection operators onto T and Ω, respectively. We show that in the case when both T and Ω are intervals, the eigenvalues of S_{T,Ω} satisfy λ_n(T, Ω) ≥ 1 − δ^{|T||Ω|} if n ≤ (1 − ε)|T||Ω|, where ε > 0 is arbitrary and δ = δ(ε) < 1, provided that |T||Ω| > c_ε. This improves the result of Bonami, Jaming and Karoui, who proved it for ε ≥ 0.42. The proof is based on the properties of the Bargmann transform.
{"title":"Exponential lower bound for the eigenvalues of the time-frequency localization operator before the plunge region","authors":"Aleksei Kulikov","doi":"10.1016/j.acha.2024.101639","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101639","url":null,"abstract":"<div><p>For a pair of sets <span><math><mi>T</mi><mo>,</mo><mi>Ω</mi><mo>⊂</mo><mi>R</mi></math></span> the time-frequency localization operator is defined as <span><math><msub><mrow><mi>S</mi></mrow><mrow><mi>T</mi><mo>,</mo><mi>Ω</mi></mrow></msub><mo>=</mo><msub><mrow><mi>P</mi></mrow><mrow><mi>T</mi></mrow></msub><msup><mrow><mi>F</mi></mrow><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mrow><mi>P</mi></mrow><mrow><mi>Ω</mi></mrow></msub><mi>F</mi><msub><mrow><mi>P</mi></mrow><mrow><mi>T</mi></mrow></msub></math></span>, where <span><math><mi>F</mi></math></span> is the Fourier transform and <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>T</mi></mrow></msub><mo>,</mo><msub><mrow><mi>P</mi></mrow><mrow><mi>Ω</mi></mrow></msub></math></span> are projection operators onto <em>T</em> and Ω, respectively. 
We show that in the case when both <em>T</em> and Ω are intervals, the eigenvalues of <span><math><msub><mrow><mi>S</mi></mrow><mrow><mi>T</mi><mo>,</mo><mi>Ω</mi></mrow></msub></math></span> satisfy <span><math><msub><mrow><mi>λ</mi></mrow><mrow><mi>n</mi></mrow></msub><mo>(</mo><mi>T</mi><mo>,</mo><mi>Ω</mi><mo>)</mo><mo>≥</mo><mn>1</mn><mo>−</mo><msup><mrow><mi>δ</mi></mrow><mrow><mo>|</mo><mi>T</mi><mo>|</mo><mo>|</mo><mi>Ω</mi><mo>|</mo></mrow></msup></math></span> if <span><math><mi>n</mi><mo>≤</mo><mo>(</mo><mn>1</mn><mo>−</mo><mi>ε</mi><mo>)</mo><mo>|</mo><mi>T</mi><mo>|</mo><mo>|</mo><mi>Ω</mi><mo>|</mo></math></span>, where <span><math><mi>ε</mi><mo>></mo><mn>0</mn></math></span> is arbitrary and <span><math><mi>δ</mi><mo>=</mo><mi>δ</mi><mo>(</mo><mi>ε</mi><mo>)</mo><mo><</mo><mn>1</mn></math></span>, provided that <span><math><mo>|</mo><mi>T</mi><mo>|</mo><mo>|</mo><mi>Ω</mi><mo>|</mo><mo>></mo><msub><mrow><mi>c</mi></mrow><mrow><mi>ε</mi></mrow></msub></math></span>. This improves the result of Bonami, Jaming and Karoui, who proved it for <span><math><mi>ε</mi><mo>≥</mo><mn>0.42</mn></math></span>. The proof is based on the properties of the Bargmann transform.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101639"},"PeriodicalIF":2.5,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139999730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
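The eigenvalue profile described above (a plateau near 1 of length about |T||Ω|, then a plunge) is visible in the classical discrete analogue: the N×N sinc kernel matrix of Slepian's discrete prolate theory, whose time-bandwidth product 2WN plays the role of |T||Ω|. A numerical sketch of that discrete analogue (not the continuous operator of the paper):

```python
import numpy as np

N, W = 128, 0.2            # time length N, half-bandwidth W; 2*W*N ~ |T||Omega|
m = np.arange(N)
d = m[:, None] - m[None, :]
# Discrete time-frequency localization (prolate) matrix: sinc kernel.
with np.errstate(invalid="ignore", divide="ignore"):
    A = np.where(d == 0, 2 * W, np.sin(2 * np.pi * W * d) / (np.pi * d))
lam = np.sort(np.linalg.eigvalsh(A))[::-1]
# Roughly 2*W*N = 51 eigenvalues sit near 1 before the plunge region.
print(lam[0], np.sum(lam > 0.5))
```

The number of eigenvalues above 1/2 tracks the time-bandwidth product, while the top eigenvalues approach 1 exponentially fast, in line with the lower bound stated in the abstract.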
Pub Date : 2024-02-27 DOI: 10.1016/j.acha.2024.101640
Alex Barnett , Philip Greengard , Manas Rachh
The high efficiency of a recently proposed method for computing with Gaussian processes relies on expanding a (translationally invariant) covariance kernel into complex exponentials, with frequencies lying on a Cartesian equispaced grid. Here we provide rigorous error bounds for this approximation for two popular kernels—Matérn and squared exponential—in terms of the grid spacing and size. The kernel error bounds are uniform over a hypercube centered at the origin. Our tools include a split into aliasing and truncation errors, and bounds on sums of Gaussians or modified Bessel functions over various lattices. For the Matérn case, motivated by numerical study, we conjecture a stronger Frobenius-norm bound on the covariance matrix error for randomly-distributed data points. Lastly, we prove bounds on, and study numerically, the ill-conditioning of the linear systems arising in such regression problems.
{"title":"Uniform approximation of common Gaussian process kernels using equispaced Fourier grids","authors":"Alex Barnett , Philip Greengard , Manas Rachh","doi":"10.1016/j.acha.2024.101640","DOIUrl":"10.1016/j.acha.2024.101640","url":null,"abstract":"<div><p>The high efficiency of a recently proposed method for computing with Gaussian processes relies on expanding a (translationally invariant) covariance kernel into complex exponentials, with frequencies lying on a Cartesian equispaced grid. Here we provide rigorous error bounds for this approximation for two popular kernels—Matérn and squared exponential—in terms of the grid spacing and size. The kernel error bounds are uniform over a hypercube centered at the origin. Our tools include a split into aliasing and truncation errors, and bounds on sums of Gaussians or modified Bessel functions over various lattices. For the Matérn case, motivated by numerical study, we conjecture a stronger Frobenius-norm bound on the covariance matrix error for randomly-distributed data points. Lastly, we prove bounds on, and study numerically, the ill-conditioning of the linear systems arising in such regression problems.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101640"},"PeriodicalIF":2.5,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139994414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
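The expansion analyzed above writes a translation-invariant kernel as a weighted sum of complex exponentials on an equispaced frequency grid, with weights from the kernel's spectral density. A sketch for the squared-exponential kernel (grid spacing h and half-width m are illustrative values of mine, and the Fourier convention k(x) = ∫ S(ξ) e^{2πiξx} dξ is assumed):

```python
import numpy as np

ell = 0.5                        # length scale of k(x) = exp(-x^2 / (2 ell^2))
h, m = 0.05, 40                  # frequency grid spacing and half-width (assumed)
xi = h * np.arange(-m, m + 1)    # equispaced Cartesian frequency grid
# Spectral density of the SE kernel under the stated Fourier convention.
S = ell * np.sqrt(2 * np.pi) * np.exp(-2 * np.pi**2 * ell**2 * xi**2)

x = np.linspace(-1, 1, 201)
k_true = np.exp(-x**2 / (2 * ell**2))
k_approx = (h * S) @ np.cos(2 * np.pi * np.outer(xi, x))   # weighted cosine sum
print(np.max(np.abs(k_true - k_approx)))                   # uniform error, tiny
```

The two error sources the paper bounds are visible in the parameters: h controls aliasing (the approximation is 1/h-periodic, so h must be small relative to the target interval) and m·h controls truncation of the Gaussian tail.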
Pub Date : 2024-02-22 DOI: 10.1016/j.acha.2024.101642
Vlado Menkovski , Jacobus W. Portegies , Mahefa Ratsisetraina Ravelonanosy
We give an asymptotic expansion of the relative entropy between the heat kernel q_Z(t, z, w) of a compact Riemannian manifold Z and the normalized Riemannian volume for small values of t and for a fixed element z ∈ Z. We prove that coefficients in the expansion can be expressed as universal polynomials in the components of the curvature tensor and its covariant derivatives at z, when they are expressed in terms of normal coordinates. We describe a method to compute the coefficients, and we use the method to compute the first three coefficients. The asymptotic expansion is necessary for an unsupervised machine-learning algorithm called the Diffusion Variational Autoencoder.
{"title":"Small time asymptotics of the entropy of the heat kernel on a Riemannian manifold","authors":"Vlado Menkovski , Jacobus W. Portegies , Mahefa Ratsisetraina Ravelonanosy","doi":"10.1016/j.acha.2024.101642","DOIUrl":"10.1016/j.acha.2024.101642","url":null,"abstract":"<div><p>We give an asymptotic expansion of the relative entropy between the heat kernel <span><math><msub><mrow><mi>q</mi></mrow><mrow><mi>Z</mi></mrow></msub><mo>(</mo><mi>t</mi><mo>,</mo><mi>z</mi><mo>,</mo><mi>w</mi><mo>)</mo></math></span> of a compact Riemannian manifold <em>Z</em> and the normalized Riemannian volume for small values of <em>t</em> and for a fixed element <span><math><mi>z</mi><mo>∈</mo><mi>Z</mi></math></span>. We prove that coefficients in the expansion can be expressed as universal polynomials in the components of the curvature tensor and its covariant derivatives at <em>z</em>, when they are expressed in terms of normal coordinates. We describe a method to compute the coefficients, and we use the method to compute the first three coefficients. The asymptotic expansion is necessary for an unsupervised machine-learning algorithm called the Diffusion Variational Autoencoder.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101642"},"PeriodicalIF":2.5,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1063520324000198/pdfft?md5=9b07347114acdc753144d27860b6f702&pid=1-s2.0-S1063520324000198-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139937845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
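On the flat circle all curvature terms vanish, so the small-t behavior of this relative entropy reduces to the Gaussian prediction D(q_t ‖ unif) ≈ log(2π) − (1/2)log(4πt) − 1/2. A numerical check under that flat-space assumption (a toy sanity check, not the manifold expansion of the paper):

```python
import numpy as np

t = 0.01
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dtheta = theta[1] - theta[0]
# Heat kernel on the circle as a wrapped Gaussian; images |k| > 3 are negligible.
q = sum(np.exp(-(theta + 2 * np.pi * k) ** 2 / (4 * t)) for k in range(-3, 4))
q /= np.sqrt(4 * np.pi * t)
# Relative entropy against the normalized volume 1/(2 pi).
D_num = np.sum(q * np.log(q * 2 * np.pi)) * dtheta
D_asym = np.log(2 * np.pi) - 0.5 * np.log(4 * np.pi * t) - 0.5
print(D_num, D_asym)        # agree closely for small t
```

The divergent −(1/2)log t term reflects the heat kernel concentrating on a point as t → 0; on a curved manifold the next corrections carry the curvature information the paper computes.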
Pub Date : 2024-02-21 DOI: 10.1016/j.acha.2024.101641
Beatrice Andreolli, Karlheinz Gröchenig
We introduce a new concept of variable bandwidth that is based on the frequency truncation of Wilson expansions. For this model we derive sampling theorems, a complete reconstruction of f from its samples, and necessary density conditions for sampling. Numerical simulations support the interpretation of this model of variable bandwidth. In particular, chirps, as they arise in the description of gravitational waves, can be modeled in a space of variable bandwidth.
{"title":"Variable bandwidth via Wilson bases","authors":"Beatrice Andreolli, Karlheinz Gröchenig","doi":"10.1016/j.acha.2024.101641","DOIUrl":"10.1016/j.acha.2024.101641","url":null,"abstract":"<div><p>We introduce a new concept of variable bandwidth that is based on the frequency truncation of Wilson expansions. For this model we derive sampling theorems, a complete reconstruction of <em>f</em> from its samples, and necessary density conditions for sampling. Numerical simulations support the interpretation of this model of variable bandwidth. In particular, chirps, as they arise in the description of gravitational waves, can be modeled in a space of variable bandwidth.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101641"},"PeriodicalIF":2.5,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1063520324000186/pdfft?md5=a1bc8edd6739aca166f773e9d3ff503a&pid=1-s2.0-S1063520324000186-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139937837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
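The fixed-bandwidth setting this paper generalizes is classical Shannon sampling: a function bandlimited to [−W, W] is reconstructed from uniform samples at spacing 1/(2W) by a sinc series. A truncated-series sketch of that baseline (not the Wilson-basis variable-bandwidth model of the paper):

```python
import numpy as np

W = 2.0                                   # fixed bandwidth: f supported in [-W, W]
T = 1.0 / (2.0 * W)                       # Nyquist sampling step
n = np.arange(-500, 501)                  # truncated index range
f = lambda t: np.sin(2 * np.pi * 0.7 * t)     # test signal, bandwidth 0.7 < W
t = np.linspace(-1.0, 1.0, 41)
# Truncated Shannon series: f(t) ~ sum_n f(nT) sinc(t/T - n).
rec = np.sinc(t[:, None] / T - n[None, :]) @ f(n * T)
print(np.max(np.abs(rec - f(t))))         # small truncation error
```

A variable-bandwidth model replaces the single global W by a bandwidth that varies with location, which is what makes chirp-like signals fit naturally.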
Pub Date : 2024-02-21 DOI: 10.1016/j.acha.2024.101637
Eitan Rosen , Paulina Hoyos , Xiuyuan Cheng , Joe Kileel , Yoel Shkolnisky
Graph Laplacian based algorithms for data lying on a manifold have been proven effective for tasks such as dimensionality reduction, clustering, and denoising. In this work, we consider data sets whose data points lie on a manifold that is closed under the action of a known unitary matrix Lie group G. We propose to construct the graph Laplacian by incorporating the distances between all the pairs of points generated by the action of G on the data set. We deem the latter construction the “G-invariant Graph Laplacian” (G-GL). We show that the G-GL converges to the Laplace-Beltrami operator on the data manifold, while enjoying a significantly improved convergence rate compared to the standard graph Laplacian which only utilizes the distances between the points in the given data set. Furthermore, we show that the G-GL admits a set of eigenfunctions that have the form of certain products between the group elements and eigenvectors of certain matrices, which can be estimated from the data efficiently using FFT-type algorithms. We demonstrate our construction and its advantages on the problem of filtering data on a noisy manifold closed under the action of the special unitary group SU(2).
{"title":"The G-invariant graph Laplacian Part I: Convergence rate and eigendecomposition","authors":"Eitan Rosen , Paulina Hoyos , Xiuyuan Cheng , Joe Kileel , Yoel Shkolnisky","doi":"10.1016/j.acha.2024.101637","DOIUrl":"10.1016/j.acha.2024.101637","url":null,"abstract":"<div><p>Graph Laplacian based algorithms for data lying on a manifold have been proven effective for tasks such as dimensionality reduction, clustering, and denoising. In this work, we consider data sets whose data points lie on a manifold that is closed under the action of a known unitary matrix Lie group <em>G</em>. We propose to construct the graph Laplacian by incorporating the distances between all the pairs of points generated by the action of <em>G</em> on the data set. We deem the latter construction the “<em>G</em>-invariant Graph Laplacian” (<em>G</em>-GL). We show that the <em>G</em>-GL converges to the Laplace-Beltrami operator on the data manifold, while enjoying a significantly improved convergence rate compared to the standard graph Laplacian which only utilizes the distances between the points in the given data set. Furthermore, we show that the <em>G</em>-GL admits a set of eigenfunctions that have the form of certain products between the group elements and eigenvectors of certain matrices, which can be estimated from the data efficiently using FFT-type algorithms. 
We demonstrate our construction and its advantages on the problem of filtering data on a noisy manifold closed under the action of the special unitary group <span><math><mi>S</mi><mi>U</mi><mo>(</mo><mn>2</mn><mo>)</mo></math></span>.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101637"},"PeriodicalIF":2.5,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139937829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
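The core construction can be illustrated with a toy finite-group analogue: take cyclic coordinate shifts as a stand-in for the unitary group G, measure distances over whole orbits, and build the graph Laplacian from the resulting invariant affinity. A sketch under that simplification (the paper's setting is a continuous Lie group such as SU(2)):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 8))            # 40 toy data points in R^8

def orbit_distance(u, v):
    """Min distance between u and the orbit of v under cyclic coordinate
    shifts, a finite stand-in for the group action in the paper."""
    return min(np.linalg.norm(u - np.roll(v, s)) for s in range(len(v)))

n = len(X)
D = np.array([[orbit_distance(X[i], X[j]) for j in range(n)] for i in range(n)])
Wmat = np.exp(-D**2 / np.median(D[D > 0]) ** 2)   # invariant Gaussian affinity
L = np.diag(Wmat.sum(axis=1)) - Wmat              # unnormalized graph Laplacian
print(np.max(np.abs(L @ np.ones(n))))             # constants are in the kernel
```

Because the distance is minimized over the orbit, the affinity (and hence the Laplacian) is invariant under the group action on either argument, which is the structural property the G-GL exploits.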
Pub Date : 2024-02-09 DOI: 10.1016/j.acha.2024.101638
Suddhasattwa Das
The separate tasks of denoising, least squares expectation, and manifold learning can often be posed in a common setting of finding the conditional expectations arising from a product of two random variables. This paper focuses on this more general problem and describes an operator theoretic approach to estimating the conditional expectation. Kernel integral operators are used as a compactification tool to set up the estimation problem as a linear inverse problem in a reproducing kernel Hilbert space. This equation is shown to have solutions that allow numerical approximation, thus guaranteeing the convergence of data-driven implementations. The overall technique is easy to implement, and its successful application to some real-world problems is also shown.
{"title":"Conditional expectation using compactification operators","authors":"Suddhasattwa Das","doi":"10.1016/j.acha.2024.101638","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101638","url":null,"abstract":"<div><p>The separate tasks of denoising, least squares expectation, and manifold learning can often be posed in a common setting of finding the conditional expectations arising from a product of two random variables. This paper focuses on this more general problem and describes an operator theoretic approach to estimating the conditional expectation. Kernel integral operators are used as a compactification tool, to set up the estimation problem as a linear inverse problem in a reproducing kernel Hilbert space. This equation is shown to have solutions that allow numerical approximation, thus guaranteeing the convergence of data-driven implementations. The overall technique is easy to implement, and their successful application to some real-world problems is also shown.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"71 ","pages":"Article 101638"},"PeriodicalIF":2.5,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139732606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
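The quantity being estimated, E[Y | X = x], has a classical kernel-smoothing estimator (Nadaraya–Watson) that makes the target concrete; the paper's operator-theoretic RKHS formulation is a different route to the same object. A sketch of the classical estimate only:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 5000)
Y = X**2 + 0.1 * rng.standard_normal(5000)     # so E[Y | X = x] = x^2

def cond_exp(x, X, Y, h=0.05):
    """Nadaraya-Watson estimate of E[Y | X = x] with a Gaussian kernel."""
    w = np.exp(-((X - x) ** 2) / (2 * h**2))
    return np.dot(w, Y) / np.sum(w)

print(cond_exp(0.5, X, Y))   # close to 0.25
```

Both approaches are weighted averages of Y localized near x; the operator formulation replaces the pointwise kernel weights with a compact integral operator, which is what yields the convergence guarantees described in the abstract.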
Pub Date : 2024-02-06 DOI: 10.1016/j.acha.2024.101635
Joyce Chew, Matthew Hirn, Smita Krishnaswamy, Deanna Needell, Michael Perlmutter, Holly Steach, Siddharth Viswanath, Hau-Tieng Wu
The scattering transform is a multilayered, wavelet-based transform initially introduced as a mathematical model of convolutional neural networks (CNNs) that has played a foundational role in our understanding of these networks' stability and invariance properties. In subsequent years, there has been widespread interest in extending the success of CNNs to data sets with non-Euclidean structure, such as graphs and manifolds, leading to the emerging field of geometric deep learning. In order to improve our understanding of the architectures used in this new field, several papers have proposed generalizations of the scattering transform for non-Euclidean data structures such as undirected graphs and compact Riemannian manifolds without boundary. Analogous to the original scattering transform, these works prove that these variants of the scattering transform have desirable stability and invariance properties and aim to improve our understanding of the neural networks used in geometric deep learning.
In this paper, we introduce a general, unified model for geometric scattering on measure spaces. Our proposed framework includes previous work on compact Riemannian manifolds without boundary and undirected graphs as special cases but also applies to more general settings such as directed graphs, signed graphs, and manifolds with boundary. We propose a new criterion that identifies to which groups a useful representation should be invariant and show that this criterion is sufficient to guarantee that the scattering transform has desirable stability and invariance properties. Additionally, we consider finite measure spaces that are obtained from randomly sampling an unknown manifold. We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold. Moreover, we use a diffusion-maps based approach to prove quantitative estimates on the rate of convergence of one of these approximations as the number of sample points tends to infinity. Lastly, we showcase the utility of our method on spherical images, a directed graph stochastic block model, and on high-dimensional single-cell data.
{"title":"Geometric scattering on measure spaces","authors":"Joyce Chew, Matthew Hirn, Smita Krishnaswamy, Deanna Needell, Michael Perlmutter, Holly Steach, Siddharth Viswanath, Hau-Tieng Wu","doi":"10.1016/j.acha.2024.101635","DOIUrl":"https://doi.org/10.1016/j.acha.2024.101635","url":null,"abstract":"<div><p>The scattering transform is a multilayered, wavelet-based transform initially introduced as a mathematical model of convolutional neural networks (CNNs) that has played a foundational role in our understanding of these networks' stability and invariance properties. In subsequent years, there has been widespread interest in extending the success of CNNs to data sets with non-Euclidean structure, such as graphs and manifolds, leading to the emerging field of geometric deep learning. In order to improve our understanding of the architectures used in this new field, several papers have proposed generalizations of the scattering transform for non-Euclidean data structures such as undirected graphs and compact Riemannian manifolds without boundary. Analogous to the original scattering transform, these works prove that these variants of the scattering transform have desirable stability and invariance properties and aim to improve our understanding of the neural networks used in geometric deep learning.</p><p>In this paper, we introduce a general, unified model for geometric scattering on measure spaces. Our proposed framework includes previous work on compact Riemannian manifolds without boundary and undirected graphs as special cases but also applies to more general settings such as directed graphs, signed graphs, and manifolds with boundary. We propose a new criterion that identifies to which groups a useful representation should be invariant and show that this criterion is sufficient to guarantee that the scattering transform has desirable stability and invariance properties. 
Additionally, we consider finite measure spaces that are obtained from randomly sampling an unknown manifold. We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold. Moreover, we use a diffusion-maps based approach to prove quantitative estimates on the rate of convergence of one of these approximations as the number of sample points tends to infinity. Lastly, we showcase the utility of our method on spherical images, a directed graph stochastic block model, and on high-dimensional single-cell data.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"70 ","pages":"Article 101635"},"PeriodicalIF":2.5,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139710122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
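A minimal concrete instance of graph scattering uses diffusion wavelets Ψ_j = P^{2^{j−1}} − P^{2^j} built from a random-walk matrix P, with first-order features given by norms of the wavelet responses (in the style of earlier graph scattering work; the measure-space framework above is far more general). A sketch under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
A = (rng.uniform(size=(30, 30)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                       # random undirected graph
P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # row-normalized random walk

def scattering_features(x, P, J=3):
    """First-order scattering: s_j = || (P^{2^(j-1)} - P^{2^j}) x ||_1."""
    feats = [np.abs(x).sum()]                        # zeroth-order feature
    for j in range(1, J + 1):
        low = np.linalg.matrix_power(P, 2 ** (j - 1))
        high = np.linalg.matrix_power(P, 2 ** j)
        feats.append(np.abs((low - high) @ x).sum())
    return np.array(feats)

x = rng.standard_normal(30)
print(scattering_features(x, P))  # [||x||_1, s_1, s_2, s_3]
```

Each wavelet isolates signal variation at one diffusion scale, and the nonlinearity (here the absolute value inside the norm) is what gives the transform its stability and invariance properties.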
Pub Date : 2024-02-01 DOI: 10.1016/j.acha.2024.101636
Li Cheng
Recently we characterized the convergence of a bivariate subdivision scheme with a nonnegative mask whose support is convex by means of the so-called connectivity of a square matrix derived from the given mask. The convergence in this case can be checked in linear time with respect to the size of the square matrix. This paper focuses on the characterization of such schemes with non-convex supports.
{"title":"Convergent bivariate subdivision scheme with nonnegative mask whose support is non-convex","authors":"Li Cheng","doi":"10.1016/j.acha.2024.101636","DOIUrl":"10.1016/j.acha.2024.101636","url":null,"abstract":"<div><p>Recently we have characterized the convergence of bivariate subdivision scheme with nonnegative mask whose support is convex by means of the so-called connectivity of a square matrix, which is derived by a given mask. The convergence in this case can be checked in linear time with respected to the size of a square matrix. This paper will focus on the characterization of such schemes with non-convex supports.</p></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"70 ","pages":"Article 101636"},"PeriodicalIF":2.5,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139663000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
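For orientation, a subdivision scheme refines a control sequence by c_new[i] = Σ_j a[i − 2j] c[j] for a mask a; the basic necessary condition for convergence is that the even and odd sub-masks each sum to 1. A univariate sketch with the nonnegative linear B-spline mask (the paper's bivariate connectivity test is not reproduced here):

```python
import numpy as np

a = np.array([0.5, 1.0, 0.5])     # nonnegative mask of the linear B-spline scheme

def subdivide(c, a):
    """One subdivision step: c_new[i] = sum_j a[i - 2j] c[j]."""
    c_new = np.zeros(2 * len(c) + len(a) - 2)
    for j, cj in enumerate(c):
        c_new[2 * j : 2 * j + len(a)] += a * cj
    return c_new

c = np.array([0.0, 1.0, 0.0])     # start from a delta sequence
for _ in range(4):
    c = subdivide(c, a)           # iterates converge to the hat function
# Even and odd sub-masks each sum to 1 (necessary for convergence),
# so each step multiplies the total mass by sum(a) = 2.
print(a[::2].sum(), a[1::2].sum(), c.sum())
```

For this mask the limit of the refined delta is the piecewise-linear hat function; the cited work's contribution is an efficiently checkable criterion deciding convergence for general nonnegative bivariate masks.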