Pub Date: 2026-02-01 | Epub Date: 2025-11-13 | DOI: 10.1016/j.acha.2025.101824
Victor Bailey, Deguang Han, Keri Kornelson, David Larson, Rui Liu
The theory of dynamical frames evolved from practical problems in dynamical sampling, where the initial state of a vector needs to be recovered from space-time samples of its evolution. This leads to the investigation of structured frames obtained from the orbits of evolution operators. One of the basic problems in dynamical frame theory is to determine the semigroup representations, which we will call central frame representations, whose frame generators are unique (up to equivalence). Recently, Christensen, Hasannasab, and Philipp proved that all frame representations of the semigroup Z_+ have this property. Their proof relies on Beurling's characterization of the structure of shift-invariant subspaces of H^2(D). In this paper we settle the general uniqueness problem by presenting a characterization of central frame representations for any semigroup in terms of the co-hyperinvariant subspaces of the left regular representation of the semigroup. This result is not only consistent with the known result of Han-Larson in 2000 for group representation frames, but also proves that all the frame generators of a semigroup generated by any k-tuple (A_1, …, A_k) of commuting bounded linear operators on a separable Hilbert space H are equivalent, a case where the structure of shift-invariant subspaces, or submodules, of the Hardy space H^2(D^k) on the polydisk is still not completely characterized.
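A toy finite-dimensional illustration of the dynamical-frame setup (our construction, not the paper's): take the orbit {A^n φ : n = 0, …, N−1} of a diagonal operator A acting on a generator φ, and test numerically whether it is a frame by checking that the frame operator S = Σ_n f_n f_n* is positive definite. The dimensions, operator, and generator below are arbitrary choices for the sketch.

```python
import numpy as np

# Orbit {A^n phi} of a diagonal strict contraction A on C^d.
d, N = 4, 40
rng = np.random.default_rng(0)
A = np.diag(np.array([0.9, 0.8, 0.7, 0.6]))   # distinct eigenvalues
phi = rng.standard_normal(d)                   # candidate frame generator

orbit = [np.linalg.matrix_power(A, n) @ phi for n in range(N)]
F = np.stack(orbit)                            # rows are the frame vectors f_n
S = F.T @ F                                    # frame operator S = sum_n f_n f_n^T
eigs = np.linalg.eigvalsh(S)
lower, upper = eigs[0], eigs[-1]
print(lower > 0)  # the orbit is a frame for C^d iff the lower frame bound is positive
```

With distinct eigenvalues and a generator whose coordinates are all nonzero, the orbit spans C^d, so the lower frame bound is strictly positive.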
"Dynamical frames and hyperinvariant subspaces," Applied and Computational Harmonic Analysis 81 (2026), Article 101824.
Pub Date: 2026-02-01 | Epub Date: 2025-11-29 | DOI: 10.1016/j.acha.2025.101833
Geonho Hwang, Myungjoo Kang
The Convolutional Neural Network (CNN) is one of the most prominent neural network architectures in deep learning. Despite its widespread adoption, our understanding of its universal approximation property (UAP) has been limited by its intricate structure. CNNs inherently function as tensor-to-tensor mappings, preserving the spatial structure of input data. However, limited research has explored the universal approximation properties of fully convolutional neural networks as arbitrary continuous tensor-to-tensor functions. In this study, we demonstrate that CNNs with zero padding can approximate arbitrary continuous functions whenever the input and output have the same spatial shape. Additionally, we determine the minimum depth of the neural network required for approximation. We also verify that deep, narrow CNNs possess the UAP as tensor-to-tensor functions. The results encompass a wide range of activation functions, and our research covers CNNs of all dimensions.
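A minimal sketch of the structural point above (our illustration; single channel, stride 1, deep-learning cross-correlation convention): a zero-padded "same" convolution maps a tensor to a tensor of identical spatial shape, which is exactly the tensor-to-tensor setting of the universality result.

```python
import numpy as np

def conv2d_same(x, k):
    """Zero-padded 'same' 2D convolution: output has the spatial shape of x."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))       # zero padding
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(25.0).reshape(5, 5)
k = np.ones((3, 3)) / 9.0                      # averaging kernel
y = conv2d_same(x, k)
print(y.shape)  # (5, 5): spatial shape is preserved
```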
"Universal approximation property of fully convolutional neural networks with zero padding," Applied and Computational Harmonic Analysis 82 (2026), Article 101833.
Pub Date: 2026-02-01 | Epub Date: 2025-11-04 | DOI: 10.1016/j.acha.2025.101823
M.H.A. Biswas, P. Massopust, R. Ramakrishnan
In the first part of this paper, we define a deep convolutional neural network connected to the fractional Fourier transform (FrFT) using the Θ-translation operator, the translation operator associated with the FrFT. Subsequently, we study the Θ-translation invariance properties of this network. It is well known that the network introduced by Mallat is translation invariant. In general, our network need not be Θ-translation invariant. However, it can be made asymptotically Θ-translation invariant by choosing suitable pooling factors.
In the second part, we study data approximation problems using the FrFT. More precisely, given a data set F = {f_1, …, f_m} ⊂ L^2(R^n), we obtain Φ = {φ_1, …, φ_ℓ} such that V_Θ(Φ) = arg min Σ_{j=1}^{m} ‖f_j − P_V f_j‖^2, where the minimum is taken over all Θ-shift invariant spaces V generated by at most ℓ elements. Moreover, we prove the existence of a space of band-limited functions in the FrFT domain which is “closest” to F in the above sense.
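The Θ-shift-invariant case is the paper's contribution; its classical finite-dimensional analog, however, has a well-known closed-form solution worth recalling: the subspace V of dimension at most ℓ minimizing Σ_j ‖f_j − P_V f_j‖² is spanned by the top-ℓ left singular vectors of the data matrix (Eckart-Young-Mirsky). A sketch, with arbitrary toy dimensions:

```python
import numpy as np

# Columns of F are the data vectors f_j; we seek the best l-dimensional
# subspace V minimizing sum_j ||f_j - P_V f_j||^2.
rng = np.random.default_rng(1)
n, m, l = 20, 8, 3
F = rng.standard_normal((n, m))

U, s, Vt = np.linalg.svd(F, full_matrices=False)
Phi = U[:, :l]                     # orthonormal generators of the optimal V
P = Phi @ Phi.T                    # orthogonal projection onto V
err = np.sum((F - P @ F) ** 2)
# The optimal error equals the sum of squares of the discarded singular values.
print(np.isclose(err, np.sum(s[l:] ** 2)))
```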
"The theory of deep convolutional neural networks and a data approximation problem based on the fractional Fourier transform," Applied and Computational Harmonic Analysis 81 (2026), Article 101823.
Pub Date: 2026-02-01 | Epub Date: 2025-11-15 | DOI: 10.1016/j.acha.2025.101825
Simon Halvdansson
For time-frequency localization operators, related to the short-time Fourier transform, with symbol RΩ, we work out the exact large-R eigenvalue behavior for rotationally invariant Ω and conjecture that the same relation holds for all scaled symbols RΩ as long as the window is the standard Gaussian. Specifically, we conjecture that the kth eigenvalue of the localization operator with symbol RΩ converges to (1/2) erfc(√(2π) (k − R²|Ω|)/(R|∂Ω|)) as R → ∞. To support the conjecture, we compute the eigenvalues of discrete frame multipliers with various symbols using LTFAT and find that they agree with the conjectured behavior to a large degree.
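The conjectured plunge profile is easy to evaluate directly. A sketch (the choice of Ω as the unit disc, with |Ω| = π and |∂Ω| = 2π, is ours for illustration): eigenvalues are near 1 well inside the symbol, exactly 1/2 at k = R²|Ω|, and near 0 far outside.

```python
import numpy as np
from scipy.special import erfc

def conjectured_eigenvalue(k, R, area, perimeter):
    """Conjectured k-th eigenvalue of the localization operator with symbol R*Omega:
    (1/2) * erfc( sqrt(2*pi) * (k - R^2 |Omega|) / (R |dOmega|) )."""
    return 0.5 * erfc(np.sqrt(2 * np.pi) * (k - R**2 * area) / (R * perimeter))

R = 10.0
area, perim = np.pi, 2 * np.pi                 # unit disc
ks = np.array([0.0, R**2 * area, 10000.0])
vals = conjectured_eigenvalue(ks, R, area, perim)
print(vals)  # ~1 deep inside, 1/2 at the plunge center, ~0 far outside
```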
"Empirical plunge profiles of time-frequency localization operators," Applied and Computational Harmonic Analysis 81 (2026), Article 101825.
Pub Date: 2026-02-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.acha.2025.101851
Shao-Bo Lin
This paper focuses on scattered data fitting problems on spheres. We study the approximation performance of a class of weighted spectral filter algorithms (WSFA), including Tikhonov regularization, Landweber iteration, spectral cut-off, and iterated Tikhonov, in fitting noisy data with possibly unbounded random noise. For the theoretical analysis, we borrow the integral operator approach from statistical learning theory as an extension of the widely used sampling inequality approach and norming set method from the scattered data fitting community. After establishing an equivalence between operator differences and quadrature rules, we derive tight bounds for operator differences, explicit operator representations for WSFA, and consequently optimal error estimates. Our error estimates do not suffer from the saturation phenomenon of Tikhonov regularization or the native-space barrier of existing error analyses, and they adapt to different embedding spaces.
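A schematic sketch of the filter families named above (our toy setup, not the paper's spherical construction): a spectral filter algorithm replaces the unstable inverse 1/λ of an operator eigenvalue λ by a regularized filter g_α(λ).

```python
import numpy as np

def tikhonov(lam, alpha):
    """Tikhonov filter: g(lam) = 1 / (lam + alpha)."""
    return 1.0 / (lam + alpha)

def spectral_cutoff(lam, alpha):
    """Spectral cut-off: invert eigenvalues above alpha, discard the rest."""
    return np.where(lam >= alpha, 1.0 / lam, 0.0)

def landweber(lam, t, eta=0.5):
    """Landweber filter after t gradient steps: g(lam) = (1 - (1 - eta*lam)^t) / lam."""
    return (1.0 - (1.0 - eta * lam) ** t) / lam

lam = np.array([1.0, 0.5, 0.01])
print(tikhonov(lam, 0.1))          # damped inverse
print(spectral_cutoff(lam, 0.1))   # hard truncation of small eigenvalues
print(landweber(lam, 50))          # approaches 1/lam as t grows
```

All three approximate 1/λ for large eigenvalues while controlling the small ones, which is what drives the error estimates discussed above.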
"Integral operator approaches for scattered data fitting on spheres," Applied and Computational Harmonic Analysis 82 (2026), Article 101851.
Pub Date: 2026-02-01 | Epub Date: 2025-12-11 | DOI: 10.1016/j.acha.2025.101847
Hendrik Bernd Zarucha, Peter Jung
It is known that sparse recovery is possible when the number of measurements is on the order of the sparsity, but the corresponding decoders either lack polynomial decoding time or robustness to noise. Decoders that rely on a null space property are commonly used: these achieve polynomial-time decoding and are robust to additive noise, but pay the price of requiring more measurements. The non-negative least residual has been established as such a decoder for non-negative recovery. We introduce a new equivalent condition, not based on null space properties, for uniform, robust recovery of non-negative sparse vectors with the non-negative least residual. It is shown that the number of measurements for this equivalent condition only needs to be on the order of the sparsity. Further, it is explained why the robustness to additive noise is similar, but not equal, to the robustness of decoders based on null space properties.
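A sketch of the non-negative least residual decoder on a toy noiseless compressed-sensing instance (dimensions and the Gaussian matrix model are our choices for illustration): it solves min ‖Ax − y‖₂ subject to x ≥ 0, which `scipy.optimize.nnls` implements directly.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n, m, s = 200, 100, 5                          # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.uniform(1.0, 2.0, s)  # non-negative, sparse
y = A @ x0                                     # noiseless measurements

x_hat, residual = nnls(A, y)                   # min ||A x - y||_2  s.t.  x >= 0
print(np.linalg.norm(x_hat - x0))              # small: x0 is recovered
```

Note that no sparsity-promoting penalty is needed: in this regime the non-negativity constraint alone pins down the sparse solution.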
"Non-negative sparse recovery at minimal sampling rate," Applied and Computational Harmonic Analysis 82 (2026), Article 101847.
Pub Date: 2026-02-01 | Epub Date: 2025-12-15 | DOI: 10.1016/j.acha.2025.101850
Hongkang Ni, Lexing Ying
Various wave packet transforms are widely used to extract multiscale structures in signal processing. This paper introduces the quantum circuit implementation of a broad class of wave packets, including Gabor atoms and wavelets, with compact frequency support. Our approach operates in the frequency space, involving reallocation and reshuffling of signals tailored for manipulation on quantum computers. The resulting implementation differs from existing quantum algorithms for spatially compactly supported wavelets and can be readily extended to quantum transforms of other wave packets with compact frequency support.
"Quantum wave packet transforms with compact frequency support: Implementations for wavelets and Gabor atoms," Applied and Computational Harmonic Analysis 82 (2026), Article 101850.
Pub Date: 2026-02-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.acha.2025.101849
Zhongjie Shi, Zhiying Fang, Yuan Cao
Although the Transformer model has emerged as the preferred choice in numerous application domains, its theoretical underpinnings remain sparse. Specifically, when compared to traditional fully-connected neural networks (FNNs), there is currently no theoretical result that explains the advantages of Transformers. In this paper, we delve into the analysis of approximation and generalization errors for the Vision Transformer (ViT) model. Despite the presence of the softmax function in the self-attention mechanism, we have successfully constructed a product gate within the ViT architecture. Our analysis shows that, for target functions of the hierarchical compositional form with suitable smoothness constraints, ViTs can avoid the curse of dimensionality in the sense that the input dimension only affects the exponent of the logarithmic terms and the constant terms. Notably, our findings underscore the efficiency of ViTs in terms of parameter usage compared to FNNs. Furthermore, when the regression function is of the hierarchical compositional form with the same suitable smoothness constraints, estimators generated by the empirical risk minimization algorithm with a ViT structure can achieve near-optimal convergence rates in a regression framework. These theoretical contributions not only demonstrate the inherent strengths of the ViT model but also address a significant gap in its theoretical exploration.
"Approximation and estimation capability of vision transformers for hierarchical compositional models," Applied and Computational Harmonic Analysis 82 (2026), Article 101849.
Pub Date: 2026-02-01 | Epub Date: 2025-11-28 | DOI: 10.1016/j.acha.2025.101832
Morten Nielsen
We construct smooth localized orthonormal bases compatible with anisotropic Triebel-Lizorkin and Besov type spaces on R^d. The construction is based on tensor products of univariate brushlet functions, which are built from local trigonometric bases in the frequency domain, and it is painless in the sense that all parameters of the construction are explicitly specified. It is shown that the associated decomposition system forms an unconditional basis for the full family of Triebel-Lizorkin and Besov type spaces, including the so-called α-modulation and α-Triebel-Lizorkin spaces. In the second part of the paper we study nonlinear m-term approximation with the constructed bases, deriving direct Jackson and inverse Bernstein inequalities for m-term approximation with the tensor brushlet system in α-modulation and α-Triebel-Lizorkin spaces. The inverse Bernstein estimates rely heavily on the fact that the constructed system is non-redundant.
"Painless construction of unconditional bases for anisotropic modulation and Triebel-Lizorkin type spaces," Applied and Computational Harmonic Analysis 82 (2026), Article 101832.
Pub Date: 2026-02-01 | Epub Date: 2025-12-02 | DOI: 10.1016/j.acha.2025.101837
Shihao Zhang, Rayan Saab
Deep neural networks have achieved state-of-the-art performance across numerous applications, but their high memory and computational demands present significant challenges, particularly in resource-constrained environments. Model compression techniques, such as low-rank approximation, offer a promising solution by reducing the size and complexity of these networks while only minimally sacrificing accuracy. In this paper, we develop an analytical framework for data-driven post-training low-rank compression. We prove three recovery theorems under progressively weaker assumptions about the approximate low-rank structure of activations, modeling deviations via noise. Our results represent a step toward explaining why data-driven low-rank compression methods outperform data-agnostic approaches, and toward theoretically grounded compression algorithms that reduce inference costs while maintaining performance.
"Theoretical guarantees for low-rank compression of deep neural networks," Applied and Computational Harmonic Analysis 82 (2026), Article 101837.