On quadrature for singular integral operators with complex symmetric quadratic forms
Jeremy Hoskins, Manas Rachh, Bowei Wu
Pub Date: 2024-11-13 | DOI: 10.1016/j.acha.2024.101721
This paper describes a trapezoidal quadrature method for the discretization of weakly singular and hypersingular boundary integral operators with complex symmetric quadratic forms. Such integral operators naturally arise when complex coordinate methods or complexified contour methods are used for the solution of time-harmonic acoustic and electromagnetic interface problems in three dimensions. The quadrature is an extension of a locally corrected punctured trapezoidal rule in parameter space, wherein the correction weights are determined by fitting moments of the error in the punctured trapezoidal rule, which is known analytically in terms of the Epstein zeta function. In this work, we analyze the analytic continuation of the Epstein zeta function and the generalized Wigner limits to complex quadratic forms; this analysis is essential to apply the fitting procedure for computing the correction weights. We illustrate the high-order convergence of this approach through several numerical examples.
{"title":"On quadrature for singular integral operators with complex symmetric quadratic forms","authors":"Jeremy Hoskins , Manas Rachh , Bowei Wu","doi":"10.1016/j.acha.2024.101721","DOIUrl":"10.1016/j.acha.2024.101721","url":null,"abstract":"<div><div>This paper describes a trapezoidal quadrature method for the discretization of weakly singular, and hypersingular boundary integral operators with complex symmetric quadratic forms. Such integral operators naturally arise when complex coordinate methods or complexified contour methods are used for the solution of time-harmonic acoustic and electromagnetic interface problems in three dimensions. The quadrature is an extension of a locally corrected punctured trapezoidal rule in parameter space wherein the correction weights are determined by fitting moments of error in the punctured trapezoidal rule, which is known analytically in terms of the Epstein zeta function. In this work, we analyze the analytic continuation of the Epstein zeta function and the generalized Wigner limits to complex quadratic forms; this analysis is essential to apply the fitting procedure for computing the correction weights. We illustrate the high-order convergence of this approach through several numerical examples.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101721"},"PeriodicalIF":2.6,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaussian approximation for the moving averaged modulus wavelet transform and its variants
Gi-Ren Liu, Yuan-Chung Sheu, Hau-Tieng Wu
Pub Date: 2024-11-13 | DOI: 10.1016/j.acha.2024.101722
The moving average of the complex modulus of the analytic wavelet transform provides a time-scale representation of signals that is robust to small time shifts and deformations. In this work, we derive the Wiener chaos expansion of this representation for stationary Gaussian processes using Malliavin calculus and combinatorial techniques. The expansion allows us to obtain a lower bound for the Wasserstein distance between the time-scale representations of two long-range dependent Gaussian processes in terms of their Hurst indices. Moreover, we apply the expansion to establish an upper bound for the smooth Wasserstein distance and the Kolmogorov distance between the distributions of a random vector derived from the time-scale representation and its normal counterpart. It is worth mentioning that the expansion consists of Wiener chaos terms of all orders, and the projection coefficients converge to zero slowly as the order of the Wiener chaos increases. We provide a rational-decay upper bound for these distribution distances, the rate of which depends on the nonlinear transformation of the amplitude of the complex wavelet coefficients.
{"title":"Gaussian approximation for the moving averaged modulus wavelet transform and its variants","authors":"Gi-Ren Liu , Yuan-Chung Sheu , Hau-Tieng Wu","doi":"10.1016/j.acha.2024.101722","DOIUrl":"10.1016/j.acha.2024.101722","url":null,"abstract":"<div><div>The moving average of the complex modulus of the analytic wavelet transform provides a robust time-scale representation for signals to small time shifts and deformation. In this work, we derive the Wiener chaos expansion of this representation for stationary Gaussian processes by the Malliavin calculus and combinatorial techniques. The expansion allows us to obtain a lower bound for the Wasserstein distance between the time-scale representations of two long-range dependent Gaussian processes in terms of Hurst indices. Moreover, we apply the expansion to establish an upper bound for the smooth Wasserstein distance and the Kolmogorov distance between the distributions of a random vector derived from the time-scale representation and its normal counterpart. It is worth mentioning that the expansion consists of infinite Wiener chaos, and the projection coefficients converge to zero slowly as the order of the Wiener chaos increases. We provide a rational-decay upper bound for these distribution distances, the rate of which depends on the nonlinear transformation of the amplitude of the complex wavelet coefficients.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101722"},"PeriodicalIF":2.6,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Naimark-spatial families of equichordal tight fusion frames
Matthew Fickus, Benjamin R. Mayo, Cody E. Watson
Pub Date: 2024-11-08 | DOI: 10.1016/j.acha.2024.101720
An equichordal tight fusion frame (ECTFF) is a finite sequence of equi-dimensional subspaces of a Euclidean space that achieves equality in Conway, Hardin and Sloane's simplex bound. Every ECTFF is a type of optimal Grassmannian code, being a way to arrange a given number of members of a Grassmannian so that the minimal chordal distance between any pair of them is as large as possible. Any nontrivial ECTFF has both a Naimark complement and a spatial complement which are themselves ECTFFs. We show that taking iterated alternating Naimark and spatial complements of any ECTFF of at least five subspaces yields an infinite family of ECTFFs with pairwise distinct parameters. Generalizing a method by King, we then construct ECTFFs from difference families for finite abelian groups, and use our Naimark-spatial theory to gauge their novelty.
{"title":"Naimark-spatial families of equichordal tight fusion frames","authors":"Matthew Fickus, Benjamin R. Mayo, Cody E. Watson","doi":"10.1016/j.acha.2024.101720","DOIUrl":"10.1016/j.acha.2024.101720","url":null,"abstract":"<div><div>An equichordal tight fusion frame (<figure><img></figure>) is a finite sequence of equi-dimensional subspaces of a Euclidean space that achieves equality in Conway, Hardin and Sloane's simplex bound. Every <figure><img></figure> is a type of optimal Grassmannian code, being a way to arrange a given number of members of a Grassmannian so that the minimal chordal distance between any pair of them is as large as possible. Any nontrivial <figure><img></figure> has both a Naimark complement and spatial complement which themselves are <figure><img></figure>s. We show that taking iterated alternating Naimark and spatial complements of any <figure><img></figure> of at least five subspaces yields an infinite family of <figure><img></figure>s with pairwise distinct parameters. Generalizing a method by King, we then construct <figure><img></figure>s from difference families for finite abelian groups, and use our Naimark-spatial theory to gauge their novelty.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101720"},"PeriodicalIF":2.6,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generalization error guaranteed auto-encoder-based nonlinear model reduction for operator learning
Hao Liu, Biraj Dahal, Rongjie Lai, Wenjing Liao
Pub Date: 2024-10-30 | DOI: 10.1016/j.acha.2024.101717
Many physical processes in science and engineering are naturally represented by operators between infinite-dimensional function spaces. The problem of operator learning, in this context, seeks to extract these physical processes from empirical data, which is challenging due to the infinite or high dimensionality of the data. An integral component in addressing this challenge is model reduction, which reduces both the data dimensionality and the problem size. In this paper, we utilize low-dimensional nonlinear structures in model reduction by investigating an Auto-Encoder-based Neural Network (AENet). AENet first learns the latent variables of the input data and then learns the transformation from these latent variables to the corresponding output data. Our numerical experiments validate the ability of AENet to accurately learn the solution operator of nonlinear partial differential equations. Furthermore, we establish a mathematical and statistical estimation theory that analyzes the generalization error of AENet. Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process, while also demonstrating the robustness of AENet to noise.
{"title":"Generalization error guaranteed auto-encoder-based nonlinear model reduction for operator learning","authors":"Hao Liu , Biraj Dahal , Rongjie Lai , Wenjing Liao","doi":"10.1016/j.acha.2024.101717","DOIUrl":"10.1016/j.acha.2024.101717","url":null,"abstract":"<div><div>Many physical processes in science and engineering are naturally represented by operators between infinite-dimensional function spaces. The problem of operator learning, in this context, seeks to extract these physical processes from empirical data, which is challenging due to the infinite or high dimensionality of data. An integral component in addressing this challenge is model reduction, which reduces both the data dimensionality and problem size. In this paper, we utilize low-dimensional nonlinear structures in model reduction by investigating Auto-Encoder-based Neural Network (AENet). AENet first learns the latent variables of the input data and then learns the transformation from these latent variables to corresponding output data. Our numerical experiments validate the ability of AENet to accurately learn the solution operator of nonlinear partial differential equations. Furthermore, we establish a mathematical and statistical estimation theory that analyzes the generalization error of AENet. Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process, while also demonstrating the robustness of AENet to noise.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101717"},"PeriodicalIF":2.6,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unlimited sampling beyond modulo
Eyar Azar, Satish Mulleti, Yonina C. Eldar
Pub Date: 2024-10-24 | DOI: 10.1016/j.acha.2024.101715
Analog-to-digital converters (ADCs) act as a bridge between the analog and digital domains. Two important attributes of any ADC are its sampling rate and its dynamic range. For bandlimited signals, the sampling rate should be above the Nyquist rate. It is also desirable that the signal's dynamic range lie within that of the ADC; otherwise, the signal will be clipped. Nonlinear operators such as modulo or companding can be applied prior to sampling to avoid clipping. To recover the true signal from the samples of the nonlinear operator, either high sampling rates are required or strict constraints on the nonlinear operations are imposed, both of which are undesirable in practice. In this paper, we propose a generalized, flexible nonlinear operator that is sampling efficient. Moreover, by carefully choosing its parameters, clipping, modulo, and companding can be seen as special cases of it. We show that bandlimited signals are uniquely identified from the nonlinear samples of the proposed operator when sampled above the Nyquist rate. Furthermore, we propose a robust algorithm to recover the true signal from the nonlinear samples. Compared to existing methods, our approach has a lower mean-squared error for a given sampling rate, noise level, and dynamic range. Our results lead to less constrained hardware designs that address dynamic range issues while operating at the lowest possible rate.
{"title":"Unlimited sampling beyond modulo","authors":"Eyar Azar , Satish Mulleti , Yonina C. Eldar","doi":"10.1016/j.acha.2024.101715","DOIUrl":"10.1016/j.acha.2024.101715","url":null,"abstract":"<div><div>Analog-to-digital converters (ADCs) act as a bridge between the analog and digital domains. Two important attributes of any ADC are sampling rate and its dynamic range. For bandlimited signals, the sampling should be above the Nyquist rate. It is also desired that the signals' dynamic range should be within that of the ADC's; otherwise, the signal will be clipped. Nonlinear operators such as modulo or companding can be used prior to sampling to avoid clipping. To recover the true signal from the samples of the nonlinear operator, either high sampling rates are required, or strict constraints on the nonlinear operations are imposed, both of which are not desirable in practice. In this paper, we propose a generalized flexible nonlinear operator which is sampling efficient. Moreover, by carefully choosing its parameters, clipping, modulo, and companding can be seen as special cases of it. We show that bandlimited signals are uniquely identified from the nonlinear samples of the proposed operator when sampled above the Nyquist rate. Furthermore, we propose a robust algorithm to recover the true signal from the nonlinear samples. Compared to the existing methods, our approach has a lower mean-squared error for a given sampling rate, noise level, and dynamic range. Our results lead to less constrained hardware design to address the dynamic range issues while operating at the lowest rate possible.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101715"},"PeriodicalIF":2.6,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142554887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inverse problems are solvable on real number signal processing hardware
Holger Boche, Adalbert Fono, Gitta Kutyniok
Pub Date: 2024-10-24 | DOI: 10.1016/j.acha.2024.101719
Despite the success of Deep Learning (DL), serious reliability issues such as non-robustness persist. An interesting question is whether these problems arise from insufficient tools or from fundamental limitations of DL. We study this question from the computability perspective by characterizing the limits that the employed hardware imposes. To this end, we focus on the class of inverse problems, which, in particular, encompasses any task of reconstructing data from measurements. On digital hardware, a conceptual barrier on the capabilities of DL for solving finite-dimensional inverse problems has in fact already been derived. This paper investigates the general computation framework of Blum-Shub-Smale (BSS) machines, which describes the processing and storage of arbitrary real values. Although a corresponding real-world computing device does not exist, research and development towards real number computing hardware, usually referred to as "neuromorphic computing", has increased in recent years. In this work, we show that the framework of BSS machines does enable the algorithmic solvability of finite-dimensional inverse problems. Our results emphasize the influence of the considered computing model on questions of accuracy and reliability.
{"title":"Inverse problems are solvable on real number signal processing hardware","authors":"Holger Boche , Adalbert Fono , Gitta Kutyniok","doi":"10.1016/j.acha.2024.101719","DOIUrl":"10.1016/j.acha.2024.101719","url":null,"abstract":"<div><div>Despite the success of Deep Learning (DL) serious reliability issues such as non-robustness persist. An interesting aspect is, whether these problems arise due to insufficient tools or fundamental limitations of DL. We study this question from the computability perspective by characterizing the limits the applied hardware imposes. For this, we focus on the class of inverse problems, which, in particular, encompasses any task to reconstruct data from measurements. On digital hardware, a conceptual barrier on the capabilities of DL for solving finite-dimensional inverse problems has in fact already been derived. This paper investigates the general computation framework of Blum-Shub-Smale (BSS) machines, describing the processing and storage of arbitrary real values. Although a corresponding real-world computing device does not exist, research and development towards real number computing hardware, usually referred to by “neuromorphic computing”, has increased in recent years. In this work, we show that the framework of BSS machines does enable the algorithmic solvability of finite dimensional inverse problems. Our results emphasize the influence of the considered computing model in questions of accuracy and reliability.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101719"},"PeriodicalIF":2.6,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142561034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Higher Cheeger ratios of features in Laplace-Beltrami eigenfunctions
Gary Froyland, Christopher P. Rock
Pub Date: 2024-10-21 | DOI: 10.1016/j.acha.2024.101710
This paper investigates links between the eigenvalues and eigenfunctions of the Laplace-Beltrami operator, and the higher Cheeger constants of smooth Riemannian manifolds, possibly weighted and/or with boundary. The higher Cheeger constants give a loose description of the major geometric features of a manifold. We give a constructive upper bound on the higher Cheeger constants, in terms of the eigenvalue of any eigenfunction with the corresponding number of nodal domains. Specifically, we show that for each such eigenfunction, a positive-measure collection of its superlevel sets have their Cheeger ratios bounded above in terms of the corresponding eigenvalue.
Some manifolds have their major features entwined across several eigenfunctions, and no single eigenfunction contains all the major features. In this case, there may exist carefully chosen linear combinations of the eigenfunctions, each with large values on a single feature, and small values elsewhere. We can then apply a soft-thresholding operator to these linear combinations to obtain new functions, each supported on a single feature. We show that the Cheeger ratios of the level sets of these functions also give an upper bound on the Laplace-Beltrami eigenvalues. We extend these level set results to nonautonomous dynamical systems, and show that the dynamic Laplacian eigenfunctions reveal sets with small dynamic Cheeger ratios.
{"title":"Higher Cheeger ratios of features in Laplace-Beltrami eigenfunctions","authors":"Gary Froyland, Christopher P. Rock","doi":"10.1016/j.acha.2024.101710","DOIUrl":"10.1016/j.acha.2024.101710","url":null,"abstract":"<div><div>This paper investigates links between the eigenvalues and eigenfunctions of the Laplace-Beltrami operator, and the higher Cheeger constants of smooth Riemannian manifolds, possibly weighted and/or with boundary. The higher Cheeger constants give a loose description of the major geometric features of a manifold. We give a constructive upper bound on the higher Cheeger constants, in terms of the eigenvalue of any eigenfunction with the corresponding number of nodal domains. Specifically, we show that for each such eigenfunction, a positive-measure collection of its superlevel sets have their Cheeger ratios bounded above in terms of the corresponding eigenvalue.</div><div>Some manifolds have their major features entwined across several eigenfunctions, and no single eigenfunction contains all the major features. In this case, there may exist carefully chosen linear combinations of the eigenfunctions, each with large values on a single feature, and small values elsewhere. We can then apply a soft-thresholding operator to these linear combinations to obtain new functions, each supported on a single feature. We show that the Cheeger ratios of the level sets of these functions also give an upper bound on the Laplace-Beltrami eigenvalues. We extend these level set results to nonautonomous dynamical systems, and show that the dynamic Laplacian eigenfunctions reveal sets with small dynamic Cheeger ratios.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101710"},"PeriodicalIF":2.6,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A perturbative analysis for noisy spectral estimation
Lexing Ying
Pub Date: 2024-10-18 | DOI: 10.1016/j.acha.2024.101716
Spectral estimation is a fundamental task in signal processing. Recent algorithms in quantum phase estimation are concerned with the large-noise, large-frequency regime of the spectral estimation problem. The recent work of Ding-Epperly-Lin-Zhang proves that the ESPRIT algorithm exhibits superconvergence behavior for the spike locations in terms of the maximum frequency. This note provides a perturbative analysis to understand this behavior and extends it to the case where the noise grows with the sampling frequency. However, this does not imply or explain the rigorous error bound obtained by Ding-Epperly-Lin-Zhang.
{"title":"A perturbative analysis for noisy spectral estimation","authors":"Lexing Ying","doi":"10.1016/j.acha.2024.101716","DOIUrl":"10.1016/j.acha.2024.101716","url":null,"abstract":"<div><div>Spectral estimation is a fundamental task in signal processing. Recent algorithms in quantum phase estimation are concerned with the large noise, large frequency regime of the spectral estimation problem. The recent work in Ding-Epperly-Lin-Zhang proves that the ESPRIT algorithm exhibits superconvergence behavior for the spike locations in terms of the maximum frequency. This note provides a perturbative analysis to understand this behavior and extends to the case where the noise grows with the sampling frequency. However, this does not imply or explain the rigorous error bound obtained by Ding-Epperly-Lin-Zhang.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101716"},"PeriodicalIF":2.6,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solving PDEs on spheres with physics-informed convolutional neural networks
Guanhang Lei, Zhen Lei, Lei Shi, Chenyu Zeng, Ding-Xuan Zhou
Pub Date: 2024-10-15 | DOI: 10.1016/j.acha.2024.101714
Physics-informed neural networks (PINNs) have been demonstrated to be efficient in solving partial differential equations (PDEs) from a variety of experimental perspectives. Some recent studies have also proposed PINN algorithms for PDEs on surfaces, including spheres. However, theoretical understanding of the numerical performance of PINNs, especially PINNs on surfaces or manifolds, is still lacking. In this paper, we establish a rigorous analysis of the physics-informed convolutional neural network (PICNN) for solving PDEs on the sphere. By using and improving the latest approximation results for deep convolutional neural networks and spherical harmonic analysis, we prove an upper bound for the approximation error with respect to the Sobolev norm. Subsequently, we integrate this with an innovative localization complexity analysis to establish fast convergence rates for PICNN. Our theoretical results are also confirmed and supplemented by our experiments. In light of these findings, we explore potential strategies for circumventing the curse of dimensionality that arises when solving high-dimensional PDEs.
{"title":"Solving PDEs on spheres with physics-informed convolutional neural networks","authors":"Guanhang Lei , Zhen Lei , Lei Shi , Chenyu Zeng , Ding-Xuan Zhou","doi":"10.1016/j.acha.2024.101714","DOIUrl":"10.1016/j.acha.2024.101714","url":null,"abstract":"<div><div>Physics-informed neural networks (PINNs) have been demonstrated to be efficient in solving partial differential equations (PDEs) from a variety of experimental perspectives. Some recent studies have also proposed PINN algorithms for PDEs on surfaces, including spheres. However, theoretical understanding of the numerical performance of PINNs, especially PINNs on surfaces or manifolds, is still lacking. In this paper, we establish rigorous analysis of the physics-informed convolutional neural network (PICNN) for solving PDEs on the sphere. By using and improving the latest approximation results of deep convolutional neural networks and spherical harmonic analysis, we prove an upper bound for the approximation error with respect to the Sobolev norm. Subsequently, we integrate this with innovative localization complexity analysis to establish fast convergence rates for PICNN. Our theoretical results are also confirmed and supplemented by our experiments. In light of these findings, we explore potential strategies for circumventing the curse of dimensionality that arises when solving high-dimensional PDEs.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101714"},"PeriodicalIF":2.6,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142442786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linearized Wasserstein dimensionality reduction with approximation guarantees
Alexander Cloninger, Keaton Hamm, Varun Khurana, Caroline Moosmüller
Pub Date: 2024-10-15 | DOI: 10.1016/j.acha.2024.101718
We introduce LOT Wassmap, a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space. The algorithm is motivated by the observation that many datasets are naturally interpreted as probability measures rather than points in ℝ^n, and that finding low-dimensional descriptions of such datasets requires manifold learning algorithms in the Wasserstein space. Most available algorithms are based on computing the pairwise Wasserstein distance matrix, which can be computationally challenging for large datasets in high dimensions. Our algorithm leverages approximation schemes such as Sinkhorn distances and linearized optimal transport to speed up computations, and in particular, avoids computing a pairwise distance matrix. We provide guarantees on the embedding quality under such approximations, including when explicit descriptions of the probability measures are not available and one must deal with finite samples instead. Experiments demonstrate that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size. We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
{"title":"Linearized Wasserstein dimensionality reduction with approximation guarantees","authors":"Alexander Cloninger , Keaton Hamm , Varun Khurana , Caroline Moosmüller","doi":"10.1016/j.acha.2024.101718","DOIUrl":"10.1016/j.acha.2024.101718","url":null,"abstract":"<div><div>We introduce LOT Wassmap, a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space. The algorithm is motivated by the observation that many datasets are naturally interpreted as probability measures rather than points in <span><math><msup><mrow><mi>R</mi></mrow><mrow><mi>n</mi></mrow></msup></math></span>, and that finding low-dimensional descriptions of such datasets requires manifold learning algorithms in the Wasserstein space. Most available algorithms are based on computing the pairwise Wasserstein distance matrix, which can be computationally challenging for large datasets in high dimensions. Our algorithm leverages approximation schemes such as Sinkhorn distances and linearized optimal transport to speed-up computations, and in particular, avoids computing a pairwise distance matrix. We provide guarantees on the embedding quality under such approximations, including when explicit descriptions of the probability measures are not available and one must deal with finite samples instead. Experiments demonstrate that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size. We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"74 ","pages":"Article 101718"},"PeriodicalIF":2.6,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142535998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}