Pub Date: 2024-02-22, DOI: 10.1016/j.acha.2024.101642
Small time asymptotics of the entropy of the heat kernel on a Riemannian manifold (Applied and Computational Harmonic Analysis, Vol. 71, Article 101642)
Vlado Menkovski, Jacobus W. Portegies, Mahefa Ratsisetraina Ravelonanosy
We give an asymptotic expansion of the relative entropy between the heat kernel q_Z(t, z, w) of a compact Riemannian manifold Z and the normalized Riemannian volume for small values of t and for a fixed element z ∈ Z. We prove that coefficients in the expansion can be expressed as universal polynomials in the components of the curvature tensor and its covariant derivatives at z, when they are expressed in terms of normal coordinates. We describe a method to compute the coefficients, and we use the method to compute the first three coefficients. The asymptotic expansion is necessary for an unsupervised machine-learning algorithm called the Diffusion Variational Autoencoder.
Pub Date: 2024-02-21, DOI: 10.1016/j.acha.2024.101641
Variable bandwidth via Wilson bases (Applied and Computational Harmonic Analysis, Vol. 71, Article 101641)
Beatrice Andreolli, Karlheinz Gröchenig
We introduce a new concept of variable bandwidth that is based on the frequency truncation of Wilson expansions. For this model we derive sampling theorems, a complete reconstruction of f from its samples, and necessary density conditions for sampling. Numerical simulations support the interpretation of this model of variable bandwidth. In particular, chirps, as they arise in the description of gravitational waves, can be modeled in a space of variable bandwidth.
Pub Date: 2024-02-21, DOI: 10.1016/j.acha.2024.101637
The G-invariant graph Laplacian Part I: Convergence rate and eigendecomposition (Applied and Computational Harmonic Analysis, Vol. 71, Article 101637)
Eitan Rosen, Paulina Hoyos, Xiuyuan Cheng, Joe Kileel, Yoel Shkolnisky
Graph Laplacian based algorithms for data lying on a manifold have been proven effective for tasks such as dimensionality reduction, clustering, and denoising. In this work, we consider data sets whose data points lie on a manifold that is closed under the action of a known unitary matrix Lie group G. We propose to construct the graph Laplacian by incorporating the distances between all the pairs of points generated by the action of G on the data set. We deem the latter construction the “G-invariant Graph Laplacian” (G-GL). We show that the G-GL converges to the Laplace-Beltrami operator on the data manifold, while enjoying a significantly improved convergence rate compared to the standard graph Laplacian which only utilizes the distances between the points in the given data set. Furthermore, we show that the G-GL admits a set of eigenfunctions that have the form of certain products between the group elements and eigenvectors of certain matrices, which can be estimated from the data efficiently using FFT-type algorithms. We demonstrate our construction and its advantages on the problem of filtering data on a noisy manifold closed under the action of the special unitary group SU(2).
Pub Date: 2024-02-09, DOI: 10.1016/j.acha.2024.101638
Conditional expectation using compactification operators (Applied and Computational Harmonic Analysis, Vol. 71, Article 101638)
Suddhasattwa Das
The separate tasks of denoising, least squares expectation, and manifold learning can often be posed in a common setting of finding the conditional expectations arising from a product of two random variables. This paper focuses on this more general problem and describes an operator theoretic approach to estimating the conditional expectation. Kernel integral operators are used as a compactification tool, to set up the estimation problem as a linear inverse problem in a reproducing kernel Hilbert space. This equation is shown to have solutions that allow numerical approximation, thus guaranteeing the convergence of data-driven implementations. The overall technique is easy to implement, and its successful application to some real-world problems is also shown.
Pub Date: 2024-02-06, DOI: 10.1016/j.acha.2024.101635
Geometric scattering on measure spaces (Applied and Computational Harmonic Analysis, Vol. 70, Article 101635)
Joyce Chew, Matthew Hirn, Smita Krishnaswamy, Deanna Needell, Michael Perlmutter, Holly Steach, Siddharth Viswanath, Hau-Tieng Wu
The scattering transform is a multilayered, wavelet-based transform initially introduced as a mathematical model of convolutional neural networks (CNNs) that has played a foundational role in our understanding of these networks' stability and invariance properties. In subsequent years, there has been widespread interest in extending the success of CNNs to data sets with non-Euclidean structure, such as graphs and manifolds, leading to the emerging field of geometric deep learning. In order to improve our understanding of the architectures used in this new field, several papers have proposed generalizations of the scattering transform for non-Euclidean data structures such as undirected graphs and compact Riemannian manifolds without boundary. Analogous to the original scattering transform, these works prove that these variants of the scattering transform have desirable stability and invariance properties and aim to improve our understanding of the neural networks used in geometric deep learning.
In this paper, we introduce a general, unified model for geometric scattering on measure spaces. Our proposed framework includes previous work on compact Riemannian manifolds without boundary and undirected graphs as special cases but also applies to more general settings such as directed graphs, signed graphs, and manifolds with boundary. We propose a new criterion that identifies to which groups a useful representation should be invariant and show that this criterion is sufficient to guarantee that the scattering transform has desirable stability and invariance properties. Additionally, we consider finite measure spaces that are obtained from randomly sampling an unknown manifold. We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold. Moreover, we use a diffusion-maps based approach to prove quantitative estimates on the rate of convergence of one of these approximations as the number of sample points tends to infinity. Lastly, we showcase the utility of our method on spherical images, a directed graph stochastic block model, and on high-dimensional single-cell data.
Pub Date: 2024-02-01, DOI: 10.1016/j.acha.2024.101636
Convergent bivariate subdivision scheme with nonnegative mask whose support is non-convex (Applied and Computational Harmonic Analysis, Vol. 70, Article 101636)
Li Cheng
Recently we characterized the convergence of bivariate subdivision schemes with a nonnegative mask whose support is convex by means of the so-called connectivity of a square matrix derived from the given mask. The convergence in this case can be checked in linear time with respect to the size of that square matrix. This paper focuses on the characterization of such schemes with non-convex supports.
Pub Date: 2024-01-27, DOI: 10.1016/j.acha.2024.101632
High-probability generalization bounds for pointwise uniformly stable algorithms (Applied and Computational Harmonic Analysis, Vol. 70, Article 101632)
Jun Fan, Yunwen Lei
Algorithmic stability is a fundamental concept in statistical learning theory for understanding the generalization behavior of optimization algorithms. Existing high-probability bounds are developed for the generalization gap as measured by function values and require the algorithm to be uniformly stable. In this paper, we introduce a novel stability measure called pointwise uniform stability by considering the sensitivity of the algorithm with respect to the perturbation of each training example. We show that this weaker pointwise uniform stability guarantees almost optimal bounds and gives the first high-probability bound for the generalization gap as measured by gradients. Sharper bounds are given for strongly convex and smooth problems. We further apply our general result to derive improved generalization bounds for stochastic gradient descent. As a byproduct, we develop concentration inequalities for a summation of weakly-dependent vector-valued random variables.
Pub Date: 2024-01-26, DOI: 10.1016/j.acha.2024.101634
New theoretical insights in the decomposition and time-frequency representation of nonstationary signals: The IMFogram algorithm (Applied and Computational Harmonic Analysis, Vol. 71, Article 101634)
Antonio Cicone, Wing Suet Li, Haomin Zhou
The analysis of the time–frequency content of a signal is a classical problem in signal processing, with a broad number of applications in real life. Many different approaches have been developed over the decades, which provide alternative time–frequency representations of a signal, each with its advantages and limitations. In this work, following the success of nonlinear methods for the decomposition of signals into intrinsic mode functions (IMFs), we first provide more theoretical insights into the so-called Iterative Filtering decomposition algorithm, proving an energy conservation result for the derived decompositions. Furthermore, we present a new time–frequency representation method based on the IMF decomposition of a signal, which is called IMFogram. We prove theoretical results regarding this method, including its convergence to the spectrogram representation for a certain class of signals, and we present a few examples of applications, comparing results with some of the most well-known approaches available in the literature.
Pub Date: 2024-01-24, DOI: 10.1016/j.acha.2024.101633
On representations of the Helmholtz Green's function (Applied and Computational Harmonic Analysis, Vol. 70, Article 101633)
Gregory Beylkin
We consider the free space Helmholtz Green's function and split it into the sum of oscillatory and non-oscillatory (singular) components. The goal is to separate the impact of the singularity of the real part at the origin from the oscillatory behavior controlled by the wave number k. The oscillatory component can be chosen to have any finite number of continuous derivatives at the origin and can be applied to a function in the Fourier space in O(k^d log k) operations. The non-oscillatory component has a multiresolution representation via a linear combination of Gaussians and is applied efficiently in space.
Since the Helmholtz Green's function can be viewed as a point source, this partitioning can be interpreted as a splitting into propagating and evanescent components. We show that the non-oscillatory component is significant only in the vicinity of the source, at distances O(c1 k^-1 + c2 k^-1 log10 k) for some constants c1, c2, whereas the propagating component can be observed at large distances.
Pub Date: 2024-01-19, DOI: 10.1016/j.acha.2024.101630
Multivariate compactly supported C∞ functions by subdivision (Applied and Computational Harmonic Analysis, Vol. 70, Article 101630)
Maria Charina, Costanza Conti, Nira Dyn
This paper discusses the generation of multivariate C∞ functions with compact small supports by subdivision schemes. Following the construction of such a univariate function, called Up-function, by a non-stationary scheme based on masks of spline subdivision schemes of growing degrees, we term the multivariate functions we generate Up-like functions. We generate them by non-stationary schemes based on masks of three-directional box-splines of growing supports. To analyze the convergence and smoothness of these non-stationary schemes, we develop new tools which apply to a wider class of schemes than the class we study. With our method for achieving small compact supports, we obtain in the univariate case Up-like functions with supports [0, 1+ϵ], in comparison to the support [0, 2] of the Up-function. Examples of univariate and bivariate Up-like functions are given. As in the univariate case, the construction of Up-like functions can motivate the generation of C∞ compactly supported wavelets of small support in any dimension.