
Applied and Computational Harmonic Analysis: Latest Articles

Generalization analysis of an unfolding network for analysis-based compressed sensing
IF 2.6 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-06-06 DOI: 10.1016/j.acha.2025.101787
Vicky Kouni, Yannis Panagakis
Unfolding networks have shown promising results in the Compressed Sensing (CS) field. Yet, the investigation of their generalization ability is still in its infancy. In this paper, we perform a generalization analysis of a state-of-the-art ADMM-based unfolding network, which jointly learns a decoder for CS and a sparsifying redundant analysis operator. To this end, we first impose a structural constraint on the learnable sparsifier, which parametrizes the network's hypothesis class. For the latter, we estimate its Rademacher complexity. With this estimate in hand, we deliver generalization error bounds for the examined network, which scale like the square root of the number of layers. Finally, the validity of our theory is assessed and numerical comparisons to a state-of-the-art unfolding network are made on synthetic and real-world datasets. Our experimental results demonstrate that our proposed framework complies with our theoretical findings and consistently outperforms the baseline across all datasets.
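As context for what "unfolding" means here, the following is a hypothetical minimal sketch (not the authors' ADMM architecture; all sizes, step sizes, and the operator Phi are invented) of an unrolled decoder with a redundant analysis operator, where each layer applies a data-fidelity gradient step followed by an analysis-sparsity correction via soft thresholding:

```python
import numpy as np

def soft_threshold(z, tau):
    # Elementwise proximal operator of tau * |.|_1
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def unrolled_decoder(y, A, Phi, n_layers=10, step=0.1, tau=0.01):
    # Toy unrolled decoder for y = A x with analysis operator Phi.
    # In a trained unfolding network, step/tau/Phi would be learned per layer.
    x = A.T @ y  # simple initialization
    for _ in range(n_layers):
        x = x - step * (A.T @ (A @ x - y))                      # data-fidelity step
        z = Phi @ x
        x = x - step * (Phi.T @ (z - soft_threshold(z, tau)))   # analysis-sparsity step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)     # measurement matrix
Phi = rng.standard_normal((60, 50)) / np.sqrt(60)   # redundant analysis operator
x_true = np.zeros(50)
x_true[:3] = 1.0
x_hat = unrolled_decoder(A @ x_true, A, Phi)
```

The number of layers `n_layers` is the quantity the paper's generalization bounds scale with (as its square root).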
Citations: 0
The Wigner distribution of Gaussian tempered generalized stochastic processes
IF 3.2 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-08-13 DOI: 10.1016/j.acha.2025.101799
Patrik Wahlberg
We define the Wigner distribution of a tempered generalized stochastic process that is complex-valued symmetric Gaussian. This gives a time-frequency generalized stochastic process defined on the phase space. We study its covariance and our main result is a formula for the Weyl symbol of the covariance operator, expressed in terms of the Weyl symbol of the covariance operator of the original generalized stochastic process.
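For reference, for a deterministic signal f in L²(Rᵈ) the Wigner distribution in question is the standard time-frequency representation (sign and normalization conventions vary across the literature; this is the common 2π-in-the-exponent form):

```latex
W f(x,\xi) \;=\; \int_{\mathbb{R}^d}
  f\!\left(x + \tfrac{t}{2}\right)\,
  \overline{f\!\left(x - \tfrac{t}{2}\right)}\,
  e^{-2\pi i\, t \cdot \xi}\, \mathrm{d}t ,
\qquad (x,\xi) \in \mathbb{R}^{2d}.
```

The paper's contribution is extending this sesquilinear map from deterministic signals to tempered generalized stochastic processes.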
Citations: 0
Energy propagation in scattering convolution networks can be arbitrarily slow
IF 2.6 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-06-20 DOI: 10.1016/j.acha.2025.101790
Hartmut Führ, Max Getter
We analyze energy decay for deep convolutional neural networks employed as feature extractors, including Mallat's wavelet scattering transform. For time-frequency scattering transforms based on Gabor filters, previous work has established that energy decay is exponential for arbitrary square-integrable input signals. In contrast, our main results allow proving that this is false for wavelet scattering in any dimension. Specifically, we show that the energy decay of wavelet and wavelet-like scattering transforms acting on generic square-integrable signals can be arbitrarily slow. Importantly, this slow decay behavior holds for dense subsets of L2(Rd), indicating that rapid energy decay is generally an unstable property of signals. We complement these findings with positive results that allow us to infer fast (up to exponential) energy decay for generalized Sobolev spaces tailored to the frequency localization of the underlying filter bank. Both negative and positive results highlight that energy decay in scattering networks critically depends on the interplay between the respective frequency localizations of both the signal and the filters used.
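To make "energy propagation across layers" concrete, here is a toy modulus cascade (a Haar-style low/band split, not the wavelet or Gabor filter banks analyzed in the paper) that tracks the output energy emitted at each layer; the split is energy-conserving, so whatever is not emitted propagates deeper:

```python
import numpy as np

def haar_pair(u):
    """One level of a Haar-like split: low-pass average and band-pass detail.
    Circular shifts keep the split exactly energy-conserving."""
    lo = 0.5 * (u + np.roll(u, 1))
    hi = 0.5 * (u - np.roll(u, 1))
    return lo, hi

def scattering_energy(x, depth=3):
    """Per-layer output energy of a toy modulus cascade (illustration only).
    At each layer, the low-pass part is emitted as output and the modulus
    of the band-pass part propagates to the next layer."""
    energies = []
    layer = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        next_layer, output_energy = [], 0.0
        for u in layer:
            lo, hi = haar_pair(u)
            output_energy += float(np.sum(lo ** 2))  # emitted at this layer
            next_layer.append(np.abs(hi))            # modulus propagates deeper
        energies.append(output_energy)
        layer = next_layer
    return energies
```

How fast the per-layer output energies exhaust the total input energy is exactly the decay behavior the paper shows can be arbitrarily slow for wavelet-type filters.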
Citations: 0
Unified stochastic framework for neural network quantization and pruning
IF 2.6 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-06-02 DOI: 10.1016/j.acha.2025.101778
Haoyu Zhang, Rayan Saab
Quantization and pruning are two essential techniques for compressing neural networks, yet they are often treated independently, with limited theoretical analysis connecting them. This paper introduces a unified framework for post-training quantization and pruning using stochastic path-following algorithms. Our approach builds on the Stochastic Path Following Quantization (SPFQ) method, extending its applicability to pruning and low-bit quantization, including challenging 1-bit regimes. By incorporating a scaling parameter and generalizing the stochastic operator, the proposed method achieves robust error correction and yields rigorous theoretical error bounds for both quantization and pruning as well as their combination.
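The stochastic operators generalized here build on unbiased stochastic rounding. As a minimal illustration of that primitive only (the grid and RNG choices below are invented; this is not SPFQ's path-following error correction):

```python
import numpy as np

def stochastic_round(w, alphabet):
    """Round each weight to one of its two neighboring grid points, with
    probabilities chosen so the rounding is unbiased in expectation.
    `alphabet` is a sorted 1-D quantization grid (hypothetical stand-in)."""
    w = np.clip(w, alphabet[0], alphabet[-1])
    idx = np.searchsorted(alphabet, w, side='right') - 1
    idx = np.clip(idx, 0, len(alphabet) - 2)
    lo, hi = alphabet[idx], alphabet[idx + 1]
    p_up = (w - lo) / (hi - lo)   # probability of rounding up => E[q] = w
    up = np.random.default_rng(0).random(w.shape) < p_up
    return np.where(up, hi, lo)
```

Pruning fits the same mold by including 0 in the alphabet; the paper's framework treats both (and their combination) through one generalized stochastic operator.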
Citations: 0
Nonharmonic multivariate Fourier transforms and matrices: Condition numbers and hyperplane geometry
IF 2.6 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-07-09 DOI: 10.1016/j.acha.2025.101791
Weilin Li
Consider an operator that takes the Fourier transform of a discrete measure supported in X ⊆ [−1/2, 1/2)^d and restricts it to a compact Ω ⊆ R^d. We provide lower bounds for its smallest singular value when Ω is either a closed ball of radius m or a closed cube of side length 2m, and under different types of geometric assumptions on X. We first show that if distances between points in X are lower bounded by a δ that is allowed to be arbitrarily small, then the smallest singular value is at least C m^{d/2} (mδ)^{λ−1}, where λ is the maximum number of elements in X contained within any ball or cube of an explicitly given radius. This estimate communicates a localization effect of the Fourier transform. While it is sharp, the smallest singular value behaves better than expected for many X, including when we dilate a generic set by parameter δ. We next show that if there is an η such that, for each x ∈ X, the set X ∖ {x} locally consists of at most r hyperplanes whose distances to x are at least η, then the smallest singular value is at least C m^{d/2} (mη)^r. For dilations of a generic set by δ, the lower bound becomes C m^{d/2} (mδ)^{⌈(λ−1)/d⌉}. The appearance of a 1/d factor in the exponent indicates that, compared to worst-case scenarios, the condition number of nonharmonic Fourier transforms is better than expected for typical sets and improves with higher dimensionality.
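The matrices in question are easy to form numerically. A small d = 1 sketch (sizes and point sets invented for illustration) showing how clustering at a small separation δ collapses the smallest singular value, while a well-separated X stays well conditioned:

```python
import numpy as np

def nonharmonic_fourier_matrix(points, m):
    """Rows indexed by integer frequencies k in [-m, m] (d = 1 for brevity),
    columns by the point set X in [-1/2, 1/2)."""
    freqs = np.arange(-m, m + 1)
    return np.exp(-2j * np.pi * np.outer(freqs, points))

m = 10
delta = 1e-3
clustered = np.array([0.0, delta, 2 * delta])   # three points at separation delta
separated = np.array([-0.3, 0.0, 0.3])          # separation well above 1/m

s_clustered = np.linalg.svd(nonharmonic_fourier_matrix(clustered, m),
                            compute_uv=False)[-1]
s_separated = np.linalg.svd(nonharmonic_fourier_matrix(separated, m),
                            compute_uv=False)[-1]
```

Consistent with the abstract's bound, with λ = 3 points in one small cluster the smallest singular value behaves like (mδ)^{λ−1} = (mδ)², i.e. tiny here, whereas the separated configuration has smallest singular value of order m^{1/2}.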
Citations: 0
On the limits of neural network explainability via descrambling
IF 2.6 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-07-07 DOI: 10.1016/j.acha.2025.101793
Shashank Sule, Richard G. Spencer, Wojciech Czaja
We characterize the exact solutions to neural network descrambling, a mathematical model for explaining the fully connected layers of trained neural networks (NNs). By reformulating the problem as the minimization of the Brockett function arising in graph matching and complexity theory, we show that the principal components of the hidden-layer preactivations can be characterized as the optimal "explainers" or descramblers for the layer weights, leading to descrambled weight matrices. We show that in typical deep learning contexts these descramblers take diverse and interesting forms, including (1) matching the largest principal components with the lowest-frequency modes of the Fourier basis for isotropic hidden data, (2) discovering the semantic development in two-layer linear NNs for signal recovery problems, and (3) explaining CNNs by optimally permuting the neurons. Our numerical experiments indicate that the eigendecompositions of the hidden-layer data, now understood as the descramblers, can also reveal the layer's underlying transformation. These results illustrate that the SVD is more directly related to the explainability of NNs than previously thought, and they offer a promising avenue for discovering interpretable motifs for the hidden action of NNs, especially in contexts of operator learning or physics-informed NNs, where the input/output data has limited human readability.
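A hypothetical minimal sketch of the construction described above: take the SVD of the centered hidden-layer preactivations and use the left singular vectors as an orthogonal "descrambler" for the layer weights (all names and sizes here are illustrative, not the paper's setup):

```python
import numpy as np

def descrambler(preactivations):
    """Orthogonal matrix of principal components of the hidden-layer
    preactivations: the left singular vectors of the centered data matrix
    (columns of `preactivations` are samples)."""
    Z = preactivations - preactivations.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    return U

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 5))    # hypothetical fully connected layer weights
X = rng.standard_normal((5, 200))  # inputs, one sample per column
Z = W @ X                          # hidden-layer preactivations
P = descrambler(Z)
W_descrambled = P.T @ W            # layer weights rotated into the PC basis
```

The point of the paper is that this P is not merely a convenient rotation: it is the optimizer of a Brockett-type objective, i.e. the best orthogonal "explainer" of the layer in a precise sense.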
Citations: 0
Sparse free deconvolution under unknown noise level via eigenmatrix
IF 3.2 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-08-14 DOI: 10.1016/j.acha.2025.101802
Lexing Ying
This note considers the spectral estimation problems of sparse spectral measures under unknown noise levels. The main technical tool is the eigenmatrix method for solving unstructured sparse recovery problems. When the noise level is determined, the free deconvolution reduces the problem to an unstructured sparse recovery problem to which the eigenmatrix method can be applied. To determine the unknown noise level, we propose an optimization problem based on the singular values of an intermediate matrix of the eigenmatrix method. Numerical results are provided for both the additive and multiplicative free deconvolutions.
Citations: 0
Computing the proximal operator of the q-th power of the ℓ1,q-norm for group sparsity
IF 2.6 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-06-06 DOI: 10.1016/j.acha.2025.101788
Rongrong Lin, Shihai Chen, Han Feng, Yulan Liu
In this note, we comprehensively characterize the proximal operator of the q-th power of the ℓ_{1,q}-norm (denoted by ℓ_{1,q}^q) with 0 < q < 1 by exploiting the well-known proximal operator of |·|^q on the real line. In particular, much more explicit characterizations can be obtained whenever q = 1/2 or q = 2/3, due to the existence of closed-form expressions for the proximal operators of |·|^{1/2} and |·|^{2/3}. Numerical experiments demonstrate potential advantages of the ℓ_{1,q}^q regularization in inter-group and intra-group sparse vector recovery.
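Since the multivariate characterization reduces to the scalar proximal operator of |·|^q, here is a brute-force numerical sketch of that scalar operator (illustrative only; the paper's point is that q = 1/2 and q = 2/3 admit closed forms, which this grid search does not use):

```python
import numpy as np

def prox_power_q(v, lam, q, grid_size=200001):
    """Numerical proximal operator of x -> lam * |x|^q (0 < q < 1) on the
    real line: argmin_x 0.5*(x - v)^2 + lam*|x|^q, by direct minimization
    over a dense symmetric grid containing both 0 and (approximately) v."""
    x = np.linspace(-abs(v) - 1.0, abs(v) + 1.0, grid_size)
    obj = 0.5 * (x - v) ** 2 + lam * np.abs(x) ** q
    return float(x[np.argmin(obj)])
```

For 0 < q < 1 this operator exhibits hard-thresholding-like behavior: small inputs are sent to exactly 0, while large inputs are only mildly shrunk, which is what makes ℓ_{1,q}^q regularization attractive for intra-group sparsity.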
Citations: 0
Sharp error estimates for target measure diffusion maps with applications to the committor problem
IF 3.2 CAS Region 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date: 2025-10-01 Epub Date: 2025-08-14 DOI: 10.1016/j.acha.2025.101803
Shashank Sule, Luke Evans, Maria Cameron
We obtain asymptotically sharp error estimates for the consistency error of the Target Measure Diffusion map (TMDmap) (Banisch et al. 2020), a variant of diffusion maps featuring importance sampling and hence allowing input data drawn from an arbitrary density. The derived error estimates include the bias error and the variance error. The resulting convergence rates are consistent with the approximation theory of graph Laplacians. The key novelty of our results lies in the explicit quantification of all the prefactors on leading-order terms. We also prove an error estimate for solutions of Dirichlet BVPs obtained using TMDmap, showing that the solution error is controlled by consistency error. We use these results to study an important application of TMDmap in the analysis of rare events in systems governed by overdamped Langevin dynamics using the framework of transition path theory (TPT). The cornerstone ingredient of TPT is the solution of the committor problem, a boundary value problem for the backward Kolmogorov PDE. Remarkably, we find that the TMDmap algorithm is particularly suited as a meshless solver to the committor problem due to the cancellation of several error terms in the prefactor formula. Furthermore, significant improvements in bias and variance errors occur when using a quasi-uniform sampling density. Our numerical experiments show that these improvements in accuracy are realizable in practice when using δ-nets as spatially uniform inputs to the TMDmap algorithm.
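For orientation, in a discretized (graph) setting the committor problem reduces to a linear system: q = 0 on the reactant set A, q = 1 on the product set B, and (Lq)_i = 0 at all other nodes, where L is a generator/graph-Laplacian-type matrix. A generic sketch of that solve (unrelated to the TMDmap discretization itself; the 1-D random walk below is a made-up example):

```python
import numpy as np

def committor(L, A, B):
    """Solve L q = 0 on the complement of A ∪ B with q = 0 on A, q = 1 on B.
    L is a row-sum-zero generator matrix on n nodes."""
    n = L.shape[0]
    q = np.zeros(n)
    q[list(B)] = 1.0
    interior = [i for i in range(n) if i not in set(A) | set(B)]
    # (L q)_I = 0  =>  L_II q_I = -L_IB * 1   (the A-columns contribute 0)
    L_II = L[np.ix_(interior, interior)]
    rhs = -L[np.ix_(interior, list(B))].sum(axis=1)
    q[interior] = np.linalg.solve(L_II, rhs)
    return q

# Symmetric nearest-neighbor random walk on a path of 5 nodes:
# the committor from A = {0} to B = {4} is linear in the node index.
n = 5
L = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            L[i, j] = 1.0
    L[i, i] = -L[i].sum()
q = committor(L, A=[0], B=[n - 1])
```

TMDmap's role in the paper is to build (a consistent approximation of) such a generator L directly from sample points, which is why sharp consistency estimates translate into solution-error control for the committor.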
Citations: 0
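The TMDmap construction analyzed in the entry above can be sketched in a few lines: build a Gaussian kernel matrix on the sample points, right-normalize by the square root of the target density divided by a kernel density estimate of the sampling density, then row-normalize to obtain an approximate generator. Below is a minimal numpy sketch following the construction of Banisch et al. (2020); the point set, bandwidth, and target density are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tmdmap_generator(X, target_density, eps):
    """TMDmap sketch on 1-D points X: Gaussian kernel, right-normalization
    by sqrt(pi)/kde, row-normalization, then generator L = (P - I)/eps."""
    d2 = (X[:, None] - X[None, :]) ** 2          # pairwise squared distances
    K = np.exp(-d2 / eps)                        # Gaussian kernel matrix
    kde = K.sum(axis=1)                          # kernel estimate of sampling density
    w = np.sqrt(target_density(X)) / kde         # right-normalization weights
    Kt = K * w[None, :]                          # importance-sampling correction
    P = Kt / Kt.sum(axis=1, keepdims=True)       # row-stochastic Markov matrix
    return (P - np.eye(len(X))) / eps            # graph-Laplacian-type generator

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-2.0, 2.0, 400))         # samples from a (non-target) density
pi = lambda x: np.exp(-x**2)                     # unnormalized target density (assumed)
L = tmdmap_generator(X, pi, eps=0.05)
# rows of L sum to zero: the generator annihilates constants, as it should
print(np.abs(L.sum(axis=1)).max())
```

Any row-stochastic normalization guarantees the zero-row-sum property; the choice of right-normalization weights is what targets the measure π rather than the sampling density.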
Minibatch and local SGD: Algorithmic stability and linear speedup in generalization
IF 2.6 CAS Tier 2 (Mathematics) Q1 MATHEMATICS, APPLIED Pub Date : 2025-10-01 Epub Date : 2025-07-16 DOI: 10.1016/j.acha.2025.101795 (Applied and Computational Harmonic Analysis, Volume 79, Article 101795)
Yunwen Lei, Tao Sun, Mingrui Liu
The increasing scale of data has made parallelism a popular way to speed up optimization. Minibatch stochastic gradient descent (minibatch SGD) and local SGD are two popular methods for parallel optimization. Existing theoretical studies show a linear speedup of these methods with respect to the number of machines; however, this speedup is measured by optimization errors in a multi-pass setting. By comparison, the stability and generalization of these methods are much less studied. In this paper, we study the stability and generalization of minibatch and local SGD to understand their learnability by introducing an expectation-variance decomposition. We incorporate training errors into the stability analysis, which shows how small training errors help generalization for overparameterized models. We show that minibatch and local SGD achieve a linear speedup to attain the optimal risk bounds.
{"title":"Minibatch and local SGD: Algorithmic stability and linear speedup in generalization","authors":"Yunwen Lei ,&nbsp;Tao Sun ,&nbsp;Mingrui Liu","doi":"10.1016/j.acha.2025.101795","DOIUrl":"10.1016/j.acha.2025.101795","url":null,"abstract":"<div><div>The increasing scale of data propels the popularity of leveraging parallelism to speed up the optimization. Minibatch stochastic gradient descent (minibatch SGD) and local SGD are two popular methods for parallel optimization. The existing theoretical studies show a linear speedup of these methods with respect to the number of machines, which, however, is measured by optimization errors in a multi-pass setting. As a comparison, the stability and generalization of these methods are much less studied. In this paper, we study the stability and generalization analysis of minibatch and local SGD to understand their learnability by introducing an expectation-variance decomposition. We incorporate training errors into the stability analysis, which shows how small training errors help generalization for overparameterized models. We show minibatch and local SGD achieve a linear speedup to attain the optimal risk bounds.</div></div>","PeriodicalId":55504,"journal":{"name":"Applied and Computational Harmonic Analysis","volume":"79 ","pages":"Article 101795"},"PeriodicalIF":2.6,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144653251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
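The two methods compared in the entry above differ in where the averaging happens: minibatch SGD averages the workers' gradients at every step, while local SGD lets each worker take several local steps before the parameters are averaged. Below is a toy numpy sketch contrasting the two update rules on a noiseless least-squares problem; all sizes, step sizes, and the shard layout are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, M = 5, 2000, 4                          # dimension, samples, workers
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star                                # noiseless labels
shards = np.array_split(np.arange(n), M)      # each worker holds one shard

def grad(w, idx, b):
    j = rng.choice(idx, size=b)               # minibatch drawn from a worker's shard
    return X[j].T @ (X[j] @ w - y[j]) / b

def minibatch_sgd(rounds=200, lr=0.1, b=8):
    w = np.zeros(d)
    for _ in range(rounds):                   # one global step per round:
        g = np.mean([grad(w, s, b) for s in shards], axis=0)
        w -= lr * g                           # workers' gradients are averaged
    return w

def local_sgd(rounds=40, K=5, lr=0.1, b=8):
    w = np.zeros(d)
    for _ in range(rounds):                   # K local steps, then parameter averaging
        ws = []
        for s in shards:
            v = w.copy()
            for _ in range(K):
                v -= lr * grad(v, s, b)
            ws.append(v)
        w = np.mean(ws, axis=0)
    return w

err = lambda w: np.linalg.norm(w - w_star)
# both recover w_star closely on this noiseless (interpolating) problem
print(err(minibatch_sgd()), err(local_sgd()))
```

With equal total gradient evaluations (200 per worker here), local SGD communicates only once every K steps, which is the practical motivation for studying whether its stability and generalization keep pace with minibatch SGD.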