Pub Date: 2026-01-01 | Epub Date: 2025-09-14 | DOI: 10.1016/j.acha.2025.101813
Liane Xu, Amit Singer
Laplacian-based methods are popular for the dimensionality reduction of data lying in ℝ^N. Several theoretical results for these algorithms depend on the fact that the Euclidean distance locally approximates the geodesic distance on the underlying submanifold on which the data are assumed to lie. However, for some applications, other metrics, such as the Wasserstein distance, may provide a more appropriate notion of distance than the Euclidean distance. We provide a framework that generalizes the problem of manifold learning to metric spaces and study when a metric satisfies sufficient conditions for the pointwise convergence of the graph Laplacian.
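The pointwise convergence in question concerns the standard graph-Laplacian construction; as a point of reference, here is a minimal numpy sketch of that construction in the Euclidean setting (the Gaussian kernel, bandwidth, normalization, and the circle example are illustrative choices, not taken from the paper):

```python
import numpy as np

def graph_laplacian(points, eps):
    """Random-walk normalized graph Laplacian with a Gaussian kernel.

    points: (n, N) array of samples; eps: kernel bandwidth.
    Rows of the returned matrix sum to zero; applied to a function
    sampled at the points, this is the operator whose pointwise
    convergence (to a Laplace-Beltrami-type operator) is at issue.
    """
    sq = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / eps)               # Gaussian affinities from distances
    D = W.sum(axis=1)                   # degrees
    return (np.eye(len(points)) - W / D[:, None]) / eps

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 400)
# samples on the unit circle, a 1-D submanifold of R^2
pts = np.column_stack([np.cos(theta), np.sin(theta)])
L = graph_laplacian(pts, eps=0.05)
```

Replacing the squared Euclidean distances `sq` by squared distances in another metric space (e.g. Wasserstein) is exactly the generalization the paper studies.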
Title: Manifold learning in metric spaces (Applied and Computational Harmonic Analysis, vol. 80, Article 101813)
Pub Date: 2026-01-01 | Epub Date: 2025-10-02 | DOI: 10.1016/j.acha.2025.101816
François G. Meyer
The notion of barycentre graph is of crucial importance for machine learning algorithms that process graph-valued data. The barycentre graph is a “summary graph” that captures the mean topology and connectivity structure of a training dataset of graphs. The construction of a barycentre requires the definition of a metric to quantify distances between pairs of graphs. In this work, we use a multiscale spectral distance that is defined using the eigenvalues of the normalized graph Laplacian. The eigenvalues – but not the eigenvectors – of the normalized Laplacian of the barycentre graph can be determined from the optimization problem that defines the barycentre. We therefore propose a structural constraint on the eigenvectors of the normalized graph Laplacian of the barycentre graph that guarantees that the barycentre inherits the topological structure of the graphs in the sample dataset. The eigenvectors can be computed using an algorithm that explores the large library of Soules bases. When the graphs are random realizations of a balanced stochastic block model, our algorithm returns a barycentre that converges asymptotically (in the limit of large graph size) almost surely to the population mean of the graphs. We perform Monte Carlo simulations to validate the theoretical properties of the estimator; we conduct experiments on real-life graphs that suggest that our approach works beyond the controlled environment of stochastic block models.
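The abstract does not spell out the multiscale spectral distance itself; as a stand-in, the sketch below compares the sorted eigenvalues of the normalized graph Laplacians of two same-size graphs with a plain ℓ2 distance (a hypothetical simplification of the paper's multiscale metric):

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Sorted eigenvalues of L = I - D^{-1/2} A D^{-1/2} for adjacency A."""
    d = A.sum(axis=1)
    inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-300)), 0.0)
    L = np.eye(len(A)) - inv_sqrt[:, None] * A * inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))

def spectral_distance(A, B):
    """l2 distance between normalized-Laplacian spectra (same-size graphs)."""
    return np.linalg.norm(
        normalized_laplacian_spectrum(A) - normalized_laplacian_spectrum(B)
    )

# adjacency matrices of a path and a cycle on 4 nodes
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
cycle = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
d = spectral_distance(path, cycle)
```

Because such a distance depends only on eigenvalues, any barycentre defined through it leaves the eigenvectors free, which is the degree of freedom the paper's structural constraint pins down.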
Title: The spectral barycentre of a set of graphs with community structure (Applied and Computational Harmonic Analysis, vol. 80, Article 101816)
Pub Date: 2026-01-01 | Epub Date: 2025-09-18 | DOI: 10.1016/j.acha.2025.101812
Ethan N. Epperly, Gil Goldshlager, Robert J. Webber
The randomized Kaczmarz (RK) method is a well-known approach for solving linear least-squares problems with a large number of rows. RK accesses and processes just one row at a time, leading to exponentially fast convergence for consistent linear systems. However, RK fails to converge to the least-squares solution for inconsistent systems. This work presents a simple fix: average the RK iterates produced in the tail part of the algorithm. The proposed tail-averaged randomized Kaczmarz (TARK) converges for both consistent and inconsistent least-squares problems at a polynomial rate, which is known to be optimal for any row-access method. An extension of TARK also leads to efficient solutions for ridge-regularized least-squares problems.
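The tail-averaging fix is simple enough to sketch directly. Below is a minimal numpy implementation of RK with tail averaging, sampling rows with probability proportional to their squared norms as in standard RK; the iteration counts and the test problem are illustrative, not taken from the paper:

```python
import numpy as np

def tark(A, b, n_iters=20000, burn_in=10000, seed=0):
    """Tail-averaged randomized Kaczmarz: run plain RK, then average
    the iterates produced after `burn_in` steps."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A * A, axis=1)
    idx = rng.choice(m, size=n_iters, p=row_norms / row_norms.sum())
    x = np.zeros(n)
    avg = np.zeros(n)
    for t, i in enumerate(idx):
        x = x + (b[i] - A[i] @ x) / row_norms[i] * A[i]   # project onto row i
        if t >= burn_in:
            avg += x
    return avg / (n_iters - burn_in)

# inconsistent system: plain RK iterates hover around the least-squares
# solution without converging to it; the tail average smooths this out
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
x_tark = tark(A, b)
```

On this example `x_tark` lands close to the least-squares solution `x_ls`, which plain RK alone would only circle.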
Title: Randomized Kaczmarz with tail averaging (Applied and Computational Harmonic Analysis, vol. 80, Article 101812)
Pub Date: 2026-01-01 | Epub Date: 2025-10-26 | DOI: 10.1016/j.acha.2025.101818
Yu Xia, Zhiqiang Xu
Compressed sensing has demonstrated that a general signal x ∈ F^n (F ∈ {ℝ, ℂ}) can be estimated from few linear measurements with an error proportional to the best k-term approximation error, a property known as instance optimality. In this paper, we investigate instance optimality in the context of phaseless measurements using the ℓ_p-minimization decoder, where p ∈ (0,1], for both real and complex cases. More specifically, we prove that (2,1)- and (1,1)-instance optimality of order k can be achieved with m = O(k log(n/k)) phaseless measurements, paralleling results from linear measurements. These results imply that one can stably recover approximately k-sparse signals from m = O(k log(n/k)) phaseless measurements. Our approach leverages the phaseless bi-Lipschitz condition. Additionally, we present a non-uniform version of the (2,2)-instance optimality result in probability, applicable to any fixed vector x ∈ F^n. These findings reveal striking parallels between compressive phase retrieval and classical compressed sensing, enhancing our understanding of both phase retrieval and instance optimality.
Title: Instance optimality in phase retrieval (Applied and Computational Harmonic Analysis, vol. 80, Article 101818)
Pub Date: 2026-01-01 | Epub Date: 2025-09-12 | DOI: 10.1016/j.acha.2025.101811
Ilya Krishtal, Brendan Miller
We study spanning properties of Carleson systems and prove a recent conjecture on frame subsequences of Carleson frames. In particular, we show that if {T^k φ}_{k=0}^∞ is a Carleson frame, then every subsequence of the form {T^{Nk+j_k} φ}_{k=0}^∞, where N ∈ ℕ and 0 ≤ j_k < N, is also a frame.
Title: Demystifying Carleson frames (Applied and Computational Harmonic Analysis, vol. 80, Article 101811)
Pub Date: 2026-01-01 | Epub Date: 2025-09-08 | DOI: 10.1016/j.acha.2025.101810
Małgorzata Bogdan, Xavier Dupuis, Piotr Graczyk, Bartosz Kołodziejek, Tomasz Skalski, Patrick Tardivel, Maciej Wilczyński
SLOPE is a popular method for dimensionality reduction in high-dimensional regression. Its estimated coefficients can be zero, yielding sparsity, or equal in absolute value, yielding clustering. As a result, SLOPE can eliminate irrelevant predictors and identify groups of predictors that have the same influence on the response. The concept of the SLOPE pattern allows us to formalize and study its sparsity and clustering properties. In particular, the SLOPE pattern of a coefficient vector captures the signs of its components (positive, negative, or zero), the clusters (groups of coefficients with the same absolute value), and the ranking of those clusters. This is the first paper to thoroughly investigate the consistency of the SLOPE pattern. We establish necessary and sufficient conditions for SLOPE pattern recovery, which in turn enable the derivation of an irrepresentability condition for SLOPE given a fixed design matrix X. These results lay the groundwork for a comprehensive asymptotic analysis of SLOPE pattern consistency.
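The pattern of a given coefficient vector is easy to compute directly. The sketch below uses one common integer encoding, sign times cluster rank with the largest cluster ranked highest and zeros mapped to 0; treat it as a hypothetical simplification of the paper's formal definition:

```python
import numpy as np

def slope_pattern(beta, tol=1e-10):
    """SLOPE pattern of a coefficient vector: signs, clusters of equal
    absolute value, and the ranking of those clusters, encoded as
    sign(beta_i) * rank(|beta_i|), with zeros encoded as 0."""
    beta = np.asarray(beta, float)
    mags = np.abs(beta)
    uniq = []                        # distinct nonzero magnitudes, descending
    for v in np.sort(mags)[::-1]:
        if v > tol and not any(abs(v - u) <= tol for u in uniq):
            uniq.append(v)
    pattern = np.zeros(len(beta), dtype=int)
    for rank, v in enumerate(uniq):
        members = np.abs(mags - v) <= tol
        pattern[members] = (len(uniq) - rank) * np.sign(beta[members]).astype(int)
    return pattern

# two clusters (|3| and |1.5|) plus a zero coefficient
print(slope_pattern([3.0, -3.0, 1.5, 0.0, -1.5]).tolist())
```

Pattern recovery then means that the pattern of the SLOPE estimate equals the pattern of the true coefficient vector, a strictly stronger requirement than recovering the support alone.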
Title: Pattern recovery by SLOPE (Applied and Computational Harmonic Analysis, vol. 80, Article 101810)
Pub Date: 2026-01-01 | DOI: 10.1016/j.acha.2025.101814
Hermine Biermé, Philippe Carré, Céline Lacaux, Claire Launay
In this paper, we focus on lighthouse anisotropic fractional Brownian fields (AFBFs), whose self-similarity depends solely on the so-called Hurst parameter, while anisotropy is revealed through the opening angle of an oriented spectral cone. This fractional field generalizes fractional Brownian motion and models rough natural phenomena. Consequently, estimating the model parameters is a crucial issue for modeling and analyzing real data. This work introduces the representation of AFBFs using the monogenic transform. Combined with a multiscale analysis, the monogenic signal is built from the Riesz transform to extract local orientation and structural information from an image at different scales. We then exploit the monogenic signal to define new estimators of AFBF parameters in the particular case of lighthouse fields. We prove that the estimators of anisotropy and self-similarity index (called the Hurst index) are strongly consistent. We demonstrate that these estimators satisfy asymptotic normality with explicit variance. We also introduce an estimator of the texture orientation. We propose a numerical scheme for calculating the monogenic representation and strategies for computing the estimators. Numerical results illustrate the performance of these estimators. Regarding Hurst index estimation, estimators based on the monogenic representation of random fields appear to be more robust than those using only the Riesz transform. We show that both estimation methods outperform standard estimation procedures in the isotropic case and provide excellent results for all degrees of anisotropy.
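The monogenic construction starts from the Riesz transform, which acts as a pointwise multiplier in the Fourier domain. The sketch below implements it with the FFT and recovers the orientation of a synthetic oriented texture; the structure-tensor-style orientation average at the end is an illustrative choice, not the paper's estimator:

```python
import numpy as np

def riesz_transform(img):
    """First-order Riesz transform of a 2-D image via the FFT,
    using the frequency responses R_j(xi) = -i xi_j / |xi|."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    norm = np.hypot(fx, fy)
    norm[0, 0] = 1.0                      # avoid 0/0 at the DC bin
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * (-1j) * fx / norm))
    r2 = np.real(np.fft.ifft2(F * (-1j) * fy / norm))
    return r1, r2

# oriented sinusoid with 6 cycles along x and 3 along y (exact DFT bins)
h = w = 64
y, x = np.mgrid[0:h, 0:w]
img = np.cos(2 * np.pi * (6 * x + 3 * y) / 64)
r1, r2 = riesz_transform(img)
# dominant orientation from the Riesz pair (double-angle average)
theta = 0.5 * np.arctan2(2 * np.mean(r1 * r2), np.mean(r1**2 - r2**2))
```

For this single-orientation texture, `theta` recovers the direction arctan(3/6) of the sinusoid; the monogenic signal (img, r1, r2), combined across scales, is what the paper's estimators are built from.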
Title: Gaussian random fields and monogenic images (Applied and Computational Harmonic Analysis, vol. 80, Article 101814)
Pub Date: 2026-01-01 | Epub Date: 2025-10-22 | DOI: 10.1016/j.acha.2025.101819
Gao Huang, Song Li, Hang Xu
We consider the problem of recovering an unknown signal x_0 ∈ ℝ^n from phaseless measurements. In this paper, we study the convex phase retrieval problem via PhaseLift from linear Gaussian measurements perturbed by ℓ_1-bounded noise and sparse outliers that can change an adversarially chosen s-fraction of the measurement vector. We show that the Robust-PhaseLift model can successfully reconstruct the ground truth up to global phase for any s < s* ≈ 0.1185 with O(n) measurements, even in the case where the sparse outliers may depend on the measurement and the observation. The recovery guarantees are based on the robust outlier bound condition, along with an analysis of the product of two Gaussian variables and the minimum balance function. Moreover, we construct adaptive counterexamples to show that the Robust-PhaseLift model fails when s > s* with high probability. Finally, we also provide some preliminary discussions on the adversarially robust recovery of complex signals.
Title: Robust outlier bound condition to phase retrieval with adversarial sparse outliers (Applied and Computational Harmonic Analysis, vol. 80, Article 101819)
Pub Date: 2026-01-01 | Epub Date: 2025-08-28 | DOI: 10.1016/j.acha.2025.101801
Daniel Freeman, Daniel Haider
The injectivity of ReLU layers in neural networks, the recovery of vectors from clipped or saturated measurements, and (real) phase retrieval in ℝ^n allow for a similar problem formulation and characterization using frame theory. In this paper, we revisit all three problems with a unified perspective and derive lower Lipschitz bounds for ReLU layers and clipping which are analogous to the previously known result for phase retrieval and are optimal up to a constant factor.
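The common formulation can be made concrete: all three maps apply a scalar nonlinearity to the frame coefficients Ax and differ only in what each coordinate discards. A small numpy illustration (the dimensions and redundancy level are arbitrary):

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)      # ReLU layer (zero bias)
sat = lambda t: np.clip(t, -1.0, 1.0)    # clipping / saturation
mag = lambda t: np.abs(t)                # (real) phase retrieval

rng = np.random.default_rng(0)
n, m = 3, 12                             # redundant frame: m > n rows
A = rng.standard_normal((m, n))

def ratio(f, u, v):
    """One-pair lower-Lipschitz quotient ||f(Au) - f(Av)|| / ||u - v||."""
    return np.linalg.norm(f(A @ u) - f(A @ v)) / np.linalg.norm(u - v)

x = rng.standard_normal(n)
# |Ax| = |A(-x)|, so phase retrieval is injective only up to global sign,
# while ReLU separates x from -x whenever some row has a nonzero inner product
r_mag = ratio(mag, x, -x)     # exactly 0
r_relu = ratio(relu, x, -x)   # positive for generic x
```

A lower Lipschitz bound is the infimum of such quotients over all admissible pairs (for phase retrieval, over ± equivalence classes), which is what the paper bounds from below.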
Title: Optimal lower Lipschitz bounds for ReLU layers, saturation, and phase retrieval (Applied and Computational Harmonic Analysis, vol. 80, Article 101801)
Pub Date: 2025-10-01 | Epub Date: 2025-07-05 | DOI: 10.1016/j.acha.2025.101792
P. Michael Kielstra, Michael Lindsey
We introduce a fast algorithm for Gaussian process regression in low dimensions, applicable to a widely used family of non-stationary kernels. The non-stationarity of these kernels is induced by arbitrary spatially varying vertical and horizontal scales. In particular, any stationary kernel can be accommodated as a special case, and we focus especially on the generalization of the standard Matérn kernel. Our subroutine for kernel matrix-vector multiplications scales almost optimally as O(N log N), where N is the number of regression points. Like the recently developed equispaced Fourier Gaussian process (EFGP) methodology, which is applicable only to stationary kernels, our approach exploits non-uniform fast Fourier transforms (NUFFTs). We offer a complete analysis controlling the approximation error of our method, and we validate the method's practical performance with numerical experiments. In particular, we demonstrate improved scalability compared to state-of-the-art rank-structured approaches in spatial dimension d > 1.
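For orientation, the computation being accelerated is ordinary kernel regression. A dense baseline with a stationary Matérn-3/2 kernel looks as follows (hyperparameters and the test function are illustrative; the paper's NUFFT-based fast matrix-vector products are what replace the dense linear algebra at scale):

```python
import numpy as np

def matern32(X, Y, ell=0.5):
    """Matern-3/2 kernel matrix: k(r) = (1 + sqrt(3) r/ell) exp(-sqrt(3) r/ell)."""
    r = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    s = np.sqrt(3.0) * r / ell
    return (1.0 + s) * np.exp(-s)

def gp_posterior_mean(X, y, X_test, noise_var=1e-2):
    """Dense GP regression: solve (K + noise_var * I) alpha = y, then
    predict with k_* @ alpha.  Forming and factoring K costs O(N^2)
    memory and O(N^3) time, which is the bottleneck a fast
    kernel matrix-vector product inside an iterative solver removes."""
    K = matern32(X, X) + noise_var * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return matern32(X_test, X) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)
X_test = np.linspace(-1, 1, 50)[:, None]
pred = gp_posterior_mean(X, y, X_test)
```

Making `ell` (and an amplitude factor) functions of position is the kind of spatially varying non-stationarity the paper's family of kernels allows.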
Title: Gaussian process regression with log-linear scaling for common non-stationary kernels (Applied and Computational Harmonic Analysis, vol. 79, Article 101792)