
Latest publications in Applied and Computational Harmonic Analysis

Dynamical frames and hyperinvariant subspaces
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-11-13 | DOI: 10.1016/j.acha.2025.101824
Victor Bailey , Deguang Han , Keri Kornelson , David Larson , Rui Liu
The theory of dynamical frames evolved from practical problems in dynamical sampling, where the initial state of a vector needs to be recovered from space-time samples of the evolutions of the vector. This leads to the investigation of structured frames obtained from the orbits of evolution operators. One of the basic problems in dynamical frame theory is to determine the semigroup representations, which we will call central frame representations, whose frame generators are unique (up to equivalence). Recently, Christensen, Hasannasab, and Philipp proved that all frame representations of the semigroup Z_+ have this property. Their proof relies on Beurling's characterization of the structure of shift-invariant subspaces in H^2(D). In this paper we settle the general uniqueness problem by presenting a characterization of central frame representations for any semigroup in terms of the co-hyperinvariant subspaces of the left regular representation of the semigroup. This result is not only consistent with the known result of Han-Larson in 2000 for group representation frames, but also proves that all the frame generators of a semigroup generated by any k-tuple (A_1, …, A_k) of commuting bounded linear operators on a separable Hilbert space H are equivalent, a case where the structure of shift-invariant subspaces, or submodules, of the Hardy space on the polydisk H^2(D^k) is still not completely characterized.
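The orbit construction behind dynamical frames can be illustrated numerically in finite dimensions. The sketch below is a toy analogue (my own choices of d, N, A, and g, not the paper's setting), assuming NumPy: it tests whether a finite orbit {g, Ag, ..., A^(N-1)g} is a frame by computing the extreme singular values of its synthesis matrix.

```python
import numpy as np

# Toy finite-dimensional analogue of dynamical sampling: a frame candidate
# obtained from the orbit of an evolution operator A applied to a generator g.
rng = np.random.default_rng(0)
d, N = 4, 12
A = 0.9 * rng.standard_normal((d, d)) / np.sqrt(d)  # generic evolution operator
g = rng.standard_normal(d)                          # candidate frame generator

# Synthesis matrix whose columns are the orbit vectors g, Ag, A^2 g, ...
orbit = np.empty((d, N))
v = g.copy()
for n in range(N):
    orbit[:, n] = v
    v = A @ v

# The finite orbit is a frame for R^d iff the synthesis matrix has full row
# rank; the optimal frame bounds are the extreme squared singular values.
s = np.linalg.svd(orbit, compute_uv=False)
lower, upper = s[-1] ** 2, s[0] ** 2
```

For a generic A and g the orbit spans the whole space, so the lower frame bound is strictly positive; degenerate choices (e.g. g an eigenvector of A) collapse the orbit to a proper subspace.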
Citations: 0
Universal approximation property of fully convolutional neural networks with zero padding
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-11-29 | DOI: 10.1016/j.acha.2025.101833
Geonho Hwang , Myungjoo Kang
The Convolutional Neural Network (CNN) is one of the most prominent neural network architectures in deep learning. Despite its widespread adoption, our understanding of its universal approximation property (UAP) has been limited due to its intricate nature. CNNs inherently function as tensor-to-tensor mappings, preserving the spatial structure of input data. However, limited research has explored the universal approximation properties of fully convolutional neural networks as arbitrary continuous tensor-to-tensor functions. In this study, we demonstrate that CNNs, when utilizing zero padding, can approximate arbitrary continuous functions in cases where the input and output values exhibit the same spatial shape. Additionally, we determine the minimum depth of the neural network required for approximation. We also verify that deep, narrow CNNs possess the UAP as tensor-to-tensor functions. The results encompass a wide range of activation functions, and our research covers CNNs of all dimensions.
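A minimal sketch (mine, not the paper's construction) of the zero-padding point: with "same" zero padding, a convolutional layer is a tensor-to-tensor map whose output has exactly the input's spatial shape, shown here in 1-D with NumPy.

```python
import numpy as np

def conv1d_zero_pad(x, w):
    """Filter x with kernel w, zero-padded so output length equals input length."""
    k = len(w)
    pad = k // 2
    # Pad with zeros on both sides ("same" padding for a length-k kernel).
    xp = np.concatenate([np.zeros(pad), x, np.zeros(k - 1 - pad)])
    return np.array([xp[i:i + k] @ w for i in range(len(x))])

x = np.arange(8.0)              # input "tensor" of spatial length 8
w = np.array([1.0, -2.0, 1.0])  # discrete Laplacian kernel
y = conv1d_zero_pad(x, w)       # same spatial shape as x
```

On the linear ramp the interior outputs vanish (second difference of a linear function is zero); only the boundary entries, which see the zero padding, are nonzero.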
Citations: 0
The theory of deep convolutional neural networks and a data approximation problem based on the fractional Fourier transform
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-11-04 | DOI: 10.1016/j.acha.2025.101823
M.H.A. Biswas , P. Massopust , R. Ramakrishnan
In the first part of this paper, we define a deep convolutional neural network connected to the fractional Fourier transform (FrFT) using the Θ-translation operator, the translation operator associated with the FrFT. Subsequently, we study Θ-translation invariant properties of this network. It is well known that the network introduced by Mallat is translation invariant. In general, our network need not be Θ-translation invariant. However, the network can be made asymptotically Θ-translation invariant by choosing suitable pooling factors.
In the second part, we study data approximation problems using the FrFT. More precisely, given a data set F = {f_1, …, f_m} ⊂ L^2(R^n), we obtain Φ = {ϕ_1, …, ϕ_ℓ} such that

V_Θ(Φ) = argmin Σ_{j=1}^{m} ‖f_j − P_V f_j‖^2,

where the minimum is taken over all Θ-shift invariant spaces V generated by at most ℓ elements. Moreover, we prove the existence of a space of band-limited functions in the FrFT domain which is “closest” to F in the above sense.
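For intuition on this minimization, here is its plain-subspace analogue (ordinary ℓ-dimensional subspaces rather than Θ-shift invariant spaces, a simplification of mine): by the Eckart-Young theorem, the subspace minimizing the sum of squared residuals is spanned by the top ℓ left singular vectors of the data matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, ell = 20, 10, 3
F = rng.standard_normal((n, m))      # columns are the data f_1, ..., f_m

# Best ell-dimensional subspace V minimizing sum_j ||f_j - P_V f_j||^2:
# the span of the top ell left singular vectors of F (Eckart-Young).
U, s, Vt = np.linalg.svd(F, full_matrices=False)
U_ell = U[:, :ell]
P = U_ell @ U_ell.T                  # orthogonal projection onto V
err_opt = np.sum((F - P @ F) ** 2)   # sum of the discarded squared singular values
```

The optimal error equals the tail energy of the singular values, which makes the "closest space" in the abstract's sense easy to certify in this simplified setting.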
Citations: 0
Empirical plunge profiles of time-frequency localization operators
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-11-15 | DOI: 10.1016/j.acha.2025.101825
Simon Halvdansson
For time-frequency localization operators, related to the short-time Fourier transform, with symbol RΩ, we work out the exact large-R eigenvalue behavior for rotationally invariant Ω and conjecture that the same relation holds for all scaled symbols RΩ as long as the window is the standard Gaussian. Specifically, we conjecture that the k-th eigenvalue of the localization operator with symbol RΩ converges to (1/2) erfc(√(2π) (k − R^2|Ω|)/(R|∂Ω|)) as R → ∞. To support the conjecture, we compute the eigenvalues of discrete frame multipliers with various symbols using LTFAT and find that they agree with the conjectured behavior to a large degree.
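The conjectured profile is explicit and cheap to evaluate. The following sketch (assuming SciPy, and taking Ω to be the unit disc so that |Ω| = π and |∂Ω| = 2π, both choices mine) traces the plunge region where the eigenvalues drop from near 1 to near 0 around k ≈ R^2|Ω|.

```python
import numpy as np
from scipy.special import erfc

def plunge_profile(k, R, area, perimeter):
    """Conjectured k-th eigenvalue of the localization operator with symbol R*Omega."""
    return 0.5 * erfc(np.sqrt(2 * np.pi) * (k - R**2 * area) / (R * perimeter))

R = 10.0
area, perimeter = np.pi, 2 * np.pi        # unit disc: |Omega|, |dOmega|
k = np.arange(0, 2 * int(R**2 * area))    # eigenvalue indices around the plunge
lam = plunge_profile(k, R, area, perimeter)
```

The profile is monotone decreasing, sits at 1/2 near k = R^2|Ω| (here about 314), and the plunge width scales like R|∂Ω|, matching the boundary term in the formula.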
Citations: 0
Integral operator approaches for scattered data fitting on spheres
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.acha.2025.101851
Shao-Bo Lin
This paper focuses on scattered data fitting problems on spheres. We study the approximation performance of a class of weighted spectral filter algorithms (WSFA), including Tikhonov regularization, Landweber iteration, spectral cut-off, and iterated Tikhonov, in fitting noisy data with possibly unbounded random noise. For the theoretical analysis, we borrow the integral operator approach from statistical learning theory as an extension of the sampling inequality approach and norming set method widely used in the scattered data fitting community. After establishing an equivalence between operator differences and quadrature rules, we derive tight bounds for operator differences, explicit operator representations for WSFA, and consequently optimal error estimates. Our error estimates do not suffer from the saturation phenomenon of Tikhonov regularization or the native-space barrier of existing error analyses, and they adapt to different embedding spaces. Based on the operator representations, we develop a Lepskii-type principle to determine the filter parameter of WSFA and a divide-and-conquer scheme to reduce the computational burden, and we provide optimal approximation rates for the corresponding algorithms.
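The filter functions named in the abstract all act on the eigenvalues of an integral (kernel) operator. A hedged sketch of three classical filters follows; these are the standard textbook formulas from spectral regularization theory, not expressions taken from the paper.

```python
import numpy as np

def tikhonov(sigma, alpha):
    """Tikhonov filter: approximates 1/sigma, capped at 1/alpha for small sigma."""
    return 1.0 / (sigma + alpha)

def spectral_cutoff(sigma, alpha):
    """Spectral cut-off: inverts eigenvalues above the threshold, zeroes the rest."""
    return np.where(sigma >= alpha, 1.0 / sigma, 0.0)

def landweber(sigma, t, eta=1.0):
    """Landweber filter after t iterations (needs eta * max(sigma) <= 1)."""
    return (1.0 - (1.0 - eta * sigma) ** t) / sigma

sigma = np.array([1.0, 0.5, 0.1, 1e-3])   # sample operator eigenvalues
alpha = 0.01
filters = {
    "tikhonov": tikhonov(sigma, alpha),
    "cutoff": spectral_cutoff(sigma, alpha),
    "landweber": landweber(sigma, t=50),
}
```

All three approximate 1/sigma on well-conditioned eigenvalues while staying bounded on tiny ones, which is exactly the mechanism that controls noise amplification in the fitted solution.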
Citations: 0
Non-negative sparse recovery at minimal sampling rate
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-12-11 | DOI: 10.1016/j.acha.2025.101847
Hendrik Bernd Zarucha , Peter Jung
It is known that sparse recovery is possible if the number of measurements is on the order of the sparsity, but the corresponding decoders either lack polynomial decoding time or robustness to noise. Commonly, decoders that rely on a null space property are used. These achieve polynomial-time decoding and are robust to additive noise, but pay the price of requiring more measurements. The non-negative least residual has been established as such a decoder for non-negative recovery. We introduce a new equivalent condition, not based on null space properties, for uniform, robust recovery of non-negative sparse vectors with the non-negative least residual. It is shown that the number of measurements for this equivalent condition only needs to be on the order of the sparsity. Further, we explain why the robustness to additive noise is similar, but not equal, to the robustness of decoders based on null space properties.
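The decoder in question solves a non-negative least-squares problem, min ‖Ax − y‖_2 subject to x ≥ 0. A small sketch (a toy instance of mine, using scipy.optimize.nnls) shows that for noiseless measurements of a non-negative sparse vector the residual is driven to zero, since the true vector is itself feasible.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, m, s = 40, 25, 3                      # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix

# A non-negative s-sparse ground truth and its noiseless measurements.
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.uniform(1.0, 2.0, size=s)
y = A @ x

# Non-negative least residual / non-negative least squares decoder.
x_hat, res = nnls(A, y)
```

The solver guarantees x_hat ≥ 0 entrywise; whether x_hat actually equals x depends on conditions like those studied in the paper, so the sketch asserts only feasibility and a vanishing residual.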
Citations: 0
Quantum wave packet transforms with compact frequency support: Implementations for wavelets and Gabor atoms
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-12-15 | DOI: 10.1016/j.acha.2025.101850
Hongkang Ni, Lexing Ying
Various wave packet transforms are widely used to extract multiscale structures in signal processing. This paper introduces the quantum circuit implementation of a broad class of wave packets, including Gabor atoms and wavelets, with compact frequency support. Our approach operates in the frequency space, involving reallocation and reshuffling of signals tailored for manipulation on quantum computers. The resulting implementation differs from existing quantum algorithms for spatially compactly supported wavelets and can be readily extended to quantum transforms of other wave packets with compact frequency support.
Citations: 0
Approximation and estimation capability of vision transformers for hierarchical compositional models
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.acha.2025.101849
Zhongjie Shi , Zhiying Fang , Yuan Cao
Although the Transformer model has emerged as the preferred choice in numerous application domains, its theoretical underpinnings remain sparse. Specifically, when compared to traditional fully-connected neural networks (FNNs), there is currently no theoretical result that explains the advantages of Transformers. In this paper, we delve into the analysis of approximation and generalization errors for the Vision Transformer (ViT) model. Despite the presence of the softmax function in the self-attention mechanism, we have successfully constructed a product gate within the ViT architecture. Our analysis shows that, for target functions of the hierarchical compositional form with suitable smoothness constraints, ViTs can avoid the curse of dimensionality in the sense that the input dimension only affects the exponent of the logarithmic terms and the constant terms. Notably, our findings underscore the efficiency of ViTs in terms of parameter usage compared to FNNs. Furthermore, when the regression function is of the hierarchical compositional form with the same suitable smoothness constraints, estimators generated by the empirical risk minimization algorithm with a ViT structure can achieve near-optimal convergence rates in a regression framework. These theoretical contributions not only demonstrate the inherent strengths of the ViT model but also address a significant gap in its theoretical exploration.
Citations: 0
Painless construction of unconditional bases for anisotropic modulation and Triebel-Lizorkin type spaces
IF 3.2 | CAS Tier 2, Mathematics | Q1 MATHEMATICS, APPLIED | Pub Date: 2026-02-01 | Epub Date: 2025-11-28 | DOI: 10.1016/j.acha.2025.101832
Morten Nielsen
We construct smooth, localized orthonormal bases compatible with anisotropic Triebel-Lizorkin and Besov type spaces on R^d. The construction uses tensor products of univariate brushlet functions, which are built from local trigonometric bases in the frequency domain, and it is painless in the sense that all parameters of the construction are explicitly specified. It is shown that the associated decomposition system forms unconditional bases for the full family of Triebel-Lizorkin and Besov type spaces, including the so-called α-modulation and α-Triebel-Lizorkin spaces. In the second part of the paper we study nonlinear m-term approximation with the constructed bases, where direct Jackson and Bernstein inequalities for m-term approximation with the tensor brushlet system in α-modulation and α-Triebel-Lizorkin spaces are derived. The inverse Bernstein estimates rely heavily on the fact that the constructed system is non-redundant.
Citations: 0
Theoretical guarantees for low-rank compression of deep neural networks
IF 3.2 CAS Zone 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date : 2026-02-01 Epub Date : 2025-12-02 DOI: 10.1016/j.acha.2025.101837
Shihao Zhang , Rayan Saab
Deep neural networks have achieved state-of-the-art performance across numerous applications, but their high memory and computational demands present significant challenges, particularly in resource-constrained environments. Model compression techniques, such as low-rank approximation, offer a promising solution by reducing the size and complexity of these networks while only minimally sacrificing accuracy. In this paper, we develop an analytical framework for data-driven post-training low-rank compression. We prove three recovery theorems under progressively weaker assumptions about the approximate low-rank structure of activations, modeling deviations via noise. Our results represent a step toward explaining why data-driven low-rank compression methods outperform data-agnostic approaches and towards theoretically grounded compression algorithms that reduce inference costs while maintaining performance.
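The contrast between data-driven and data-agnostic low-rank compression can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' algorithm: the layer shapes, target rank, and synthetic calibration data are all made up for the example. The data-driven variant minimizes the error on the layer's actual inputs X, i.e. ||XW - XŴ||_F over rank-r Ŵ, whereas the data-agnostic variant simply truncates the SVD of W.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_svd(M, r):
    """Best rank-r Frobenius-norm approximation of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def data_agnostic(W, r):
    """Truncate the SVD of the weights, ignoring the data."""
    return truncated_svd(W, r)

def data_driven(W, X, r):
    """Minimize ||X W - X W_hat||_F over rank-r W_hat: take the best
    rank-r approximation of the outputs X W and pull it back through
    the pseudoinverse of X."""
    Z_r = truncated_svd(X @ W, r)
    return np.linalg.pinv(X) @ Z_r

# Toy layer: anisotropic calibration inputs X, weights W, target rank r.
n, d_in, d_out, r = 512, 64, 32, 8
X = rng.normal(size=(n, d_in)) * np.linspace(1.0, 0.05, d_in)
W = rng.normal(size=(d_in, d_out))

W_da = data_agnostic(W, r)
W_dd = data_driven(W, X, r)
err_da = np.linalg.norm(X @ W - X @ W_da)
err_dd = np.linalg.norm(X @ W - X @ W_dd)
# The data-driven factorization is optimal for this objective,
# so err_dd <= err_da for any X and W.
```

When the input distribution is anisotropic, as it typically is for intermediate activations, the gap between the two errors can be large, which is the empirical observation the recovery theorems above aim to explain.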
{"title":"Theoretical guarantees for low-rank compression of deep neural networks","authors":"Shihao Zhang ,&nbsp;Rayan Saab","doi":"10.1016/j.acha.2025.101837","DOIUrl":"10.1016/j.acha.2025.101837","url":null,"abstract":"<div><div>Deep neural networks have achieved state-of-the-art performance across numerous applications, but their high memory and computational demands present significant challenges, particularly in resource-constrained environments. Model compression techniques, such as low-rank approximation, offer a promising solution by reducing the size and complexity of these networks while only minimally sacrificing accuracy. In this paper, we develop an analytical framework for data-driven post-training low-rank compression. We prove three recovery theorems under progressively weaker assumptions about the approximate low-rank structure of activations, modeling deviations via noise. Our results represent a step toward explaining why data-driven low-rank compression methods outperform data-agnostic approaches and towards theoretically grounded compression algorithms that reduce inference costs while maintaining performance.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"82 ","pages":"Article 101837"},"PeriodicalIF":3.2,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145658199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal
Applied and Computational Harmonic Analysis