
Applied and Computational Harmonic Analysis: Latest Publications

Minibatch and local SGD: Algorithmic stability and linear speedup in generalization
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-07-16 | DOI: 10.1016/j.acha.2025.101795
Yunwen Lei, Tao Sun, Mingrui Liu
The increasing scale of data propels the popularity of leveraging parallelism to speed up optimization. Minibatch stochastic gradient descent (minibatch SGD) and local SGD are two popular methods for parallel optimization. Existing theoretical studies show a linear speedup of these methods with respect to the number of machines, which, however, is measured by optimization errors in a multi-pass setting. By comparison, the stability and generalization of these methods are much less studied. In this paper, we study the stability and generalization of minibatch and local SGD to understand their learnability by introducing an expectation-variance decomposition. We incorporate training errors into the stability analysis, which shows how small training errors help generalization for overparameterized models. We show that minibatch and local SGD achieve a linear speedup to attain the optimal risk bounds.
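For orientation, the two parallel schemes compared in the paper can be sketched in a few lines of NumPy. The snippet below is only a toy illustration on a synthetic least-squares problem; the loss, step sizes, batch size, number of machines, and communication schedule are arbitrary choices and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 1024, 20, 4                    # samples, dimension, number of machines (illustrative)
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def grad(w, idx):
    """Minibatch gradient of the least-squares loss (1/(2|idx|)) * ||X[idx] w - y[idx]||^2."""
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

def minibatch_sgd(T=300, b=8, lr=0.05):
    """One global iterate; each step averages the minibatch gradients of K machines."""
    w = np.zeros(d)
    for _ in range(T):
        g = np.mean([grad(w, rng.choice(n, b)) for _ in range(K)], axis=0)
        w -= lr * g
    return w

def local_sgd(R=30, H=10, b=8, lr=0.05):
    """K machines run H local SGD steps independently, then their iterates are averaged."""
    w = np.zeros(d)
    for _ in range(R):                   # R communication rounds
        local_iterates = []
        for _ in range(K):
            wk = w.copy()
            for _ in range(H):
                wk -= lr * grad(wk, rng.choice(n, b))
            local_iterates.append(wk)
        w = np.mean(local_iterates, axis=0)
    return w

for name, w in [("minibatch SGD", minibatch_sgd()), ("local SGD", local_sgd())]:
    print(f"{name:13s} parameter error: {np.linalg.norm(w - w_true):.3f}")
```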
Citations: 0
Multi-dimensional unlimited sampling and robust reconstruction
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-07-16 | DOI: 10.1016/j.acha.2025.101796
Dorian Florescu, Ayush Bhandari
In this paper, we introduce a new sampling and reconstruction approach for multi-dimensional analog signals. Building on the Unlimited Sensing Framework (USF), we present a new folded sampling operator called the multi-dimensional modulo-hysteresis, which is also backward compatible with the existing one-dimensional modulo operator. Unlike previous approaches, the proposed model is specifically tailored to multi-dimensional signals. In particular, the model uses certain redundancy in dimensions 2 and above, which is exploited for robust input recovery. We prove that the new operator is well-defined and its outputs have a bounded dynamic range. For the noiseless case, we derive a theoretically guaranteed input reconstruction approach. When the input is corrupted by Gaussian noise, we exploit redundancy in higher dimensions to provide a bound on the error probability and show that this drops to 0 for high enough sampling rates, leading to new theoretical guarantees for the noisy case. Our numerical examples corroborate the theoretical results and show that the proposed approach can handle a significantly larger amount of noise compared to USF.
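The classical one-dimensional USF modulo operator, with which the new multi-dimensional modulo-hysteresis operator remains backward compatible, can be illustrated as follows. The sketch only shows the centered folding nonlinearity on a toy signal (the signal, the threshold λ, and the grid are arbitrary choices); the multi-dimensional modulo-hysteresis operator and the reconstruction algorithms of the paper are not implemented here. Recovery methods in the USF literature exploit the fact that the residue g − M_λ(g) takes values on the lattice 2λZ.

```python
import numpy as np

def centered_modulo(x, lam):
    """Centered modulo (folding) nonlinearity of the Unlimited Sensing Framework:
    maps x into [-lam, lam) via M(x) = ((x + lam) mod 2*lam) - lam."""
    return np.mod(x + lam, 2.0 * lam) - lam

# toy bandlimited signal and folding threshold (illustrative values only)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
g = 3.0 * np.sin(2 * np.pi * 3 * t) + 1.5 * np.cos(2 * np.pi * 7 * t)
lam = 1.0                                  # threshold much smaller than the signal range

folded = centered_modulo(g, lam)           # what a modulo ADC would record
residue = g - folded                       # lies on the lattice 2*lam*Z, the key to recovery
k = residue / (2 * lam)                    # integer "wrap counts"

print(f"signal range : [{g.min():.2f}, {g.max():.2f}]")
print(f"sample range : [{folded.min():.2f}, {folded.max():.2f}]")
print("residue / (2*lam) is (numerically) integer:", np.allclose(k, np.round(k)))
```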
Citations: 0
On the optimal approximation of Sobolev and Besov functions using deep ReLU neural networks
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-07-16 | DOI: 10.1016/j.acha.2025.101797
Yunfei Yang
This paper studies the problem of how efficiently functions in the Sobolev spaces W^{s,q}([0,1]^d) and Besov spaces B^s_{q,r}([0,1]^d) can be approximated by deep ReLU neural networks with width W and depth L, when the error is measured in the L^p([0,1]^d) norm. This problem has been studied by several recent works, which obtained the approximation rate O((WL)^{-2s/d}) up to logarithmic factors when p = q = ∞, and the rate O(L^{-2s/d}) for networks with fixed width when the Sobolev embedding condition 1/q − 1/p < s/d holds. We generalize these results by showing that the rate O((WL)^{-2s/d}) indeed holds under the Sobolev embedding condition. It is known that this rate is optimal up to logarithmic factors. The key tool in our proof is a novel encoding of sparse vectors by using deep ReLU neural networks with varied width and depth, which may be of independent interest.
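The theorem is an approximation-theoretic existence result rather than a training procedure, but a quick empirical sanity check of how a width-W, depth-L ReLU network approximates a smooth function can be run with PyTorch. Everything below (target function, optimizer, iteration counts, network sizes) is a hypothetical setup for illustration only and does not reproduce the constructions in the paper.

```python
import math
import torch

torch.manual_seed(0)
W, L = 32, 4                                 # width and depth of the ReLU network (illustrative)
layers = [torch.nn.Linear(1, W), torch.nn.ReLU()]
for _ in range(L - 1):
    layers += [torch.nn.Linear(W, W), torch.nn.ReLU()]
layers.append(torch.nn.Linear(W, 1))
net = torch.nn.Sequential(*layers)

def f(x):
    # smooth target on [0, 1]
    return torch.sin(2 * math.pi * x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):                        # plain least-squares training
    x = torch.rand(256, 1)
    loss = torch.mean((net(x) - f(x)) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

xs = torch.linspace(0, 1, 2001).unsqueeze(1) # dense grid to estimate the sup-norm error
with torch.no_grad():
    err = (net(xs) - f(xs)).abs().max().item()
print(f"width {W}, depth {L}: empirical sup-norm error = {err:.2e}")
```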
Citations: 0
Nonharmonic multivariate Fourier transforms and matrices: Condition numbers and hyperplane geometry
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-07-09 | DOI: 10.1016/j.acha.2025.101791
Weilin Li
Consider an operator that takes the Fourier transform of a discrete measure supported in X ⊆ [−1/2, 1/2)^d and restricts it to a compact Ω ⊆ R^d. We provide lower bounds for its smallest singular value when Ω is either a closed ball of radius m or a closed cube of side length 2m, and under different types of geometric assumptions on X. We first show that if distances between points in X are lower bounded by a δ that is allowed to be arbitrarily small, then the smallest singular value is at least C m^{d/2} (mδ)^{λ−1}, where λ is the maximum number of elements in X contained within any ball or cube of an explicitly given radius. This estimate communicates a localization effect of the Fourier transform. While it is sharp, the smallest singular value behaves better than expected for many X, including when we dilate a generic set by parameter δ. We next show that if there is an η such that, for each x ∈ X, the set X ∖ {x} locally consists of at most r hyperplanes whose distances to x are at least η, then the smallest singular value is at least C m^{d/2} (mη)^r. For dilations of a generic set by δ, the lower bound becomes C m^{d/2} (mδ)^{⌈(λ−1)/d⌉}. The appearance of a 1/d factor in the exponent indicates that, compared to worst case scenarios, the condition number of nonharmonic Fourier transforms is better than expected for typical sets and improves with higher dimensionality.
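The operator in question can be probed numerically: build the matrix with entries e^{−2πi⟨ω,x⟩} for nodes x ∈ X and frequencies ω on a grid discretizing the cube, and compute its smallest singular value. The sketch below uses an arbitrary node count, separation δ, and a crude quadrature normalization, so it only gives a rough feel for the quantity that the bounds control.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n_nodes, delta = 2, 8, 30, 0.02        # dimension, cube parameter, #nodes, min separation (illustrative)

# draw nodes X in [-1/2, 1/2)^d with pairwise separation at least delta (rejection sampling)
nodes = []
while len(nodes) < n_nodes:
    p = rng.uniform(-0.5, 0.5, size=d)
    if all(np.linalg.norm(p - q, ord=np.inf) >= delta for q in nodes):
        nodes.append(p)
X = np.array(nodes)

# discretize Omega = [-m, m]^d by the integer grid (a crude stand-in for the continuum)
grid = np.arange(-m, m + 1)
Wfreq = np.array(np.meshgrid(*([grid] * d))).reshape(d, -1).T

# nonharmonic Fourier matrix F[w, x] = exp(-2*pi*i*<w, x>), roughly normalized per frequency sample
F = np.exp(-2j * np.pi * (Wfreq @ X.T)) / np.sqrt(Wfreq.shape[0])
sigma_min = np.linalg.svd(F, compute_uv=False)[-1]
print(f"{X.shape[0]} nodes, separation >= {delta}: smallest singular value = {sigma_min:.3e}")
```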
Citations: 0
On exact systems {t^α ⋅ e^{2πint}}_{n∈Z∖A} in L^2(0,1) which are weighted lower semi frames but not Schauder bases, and their generalizations
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-07-09 | DOI: 10.1016/j.acha.2025.101794
Elias Zikkos
Let {e^{iλ_n t}}_{n∈Z} be an exponential Schauder basis for L^2(0,1), where λ_n ∈ R, and let {r_n(t)}_{n∈Z} be its dual Schauder basis. Let A be a non-empty subset of the integers containing exactly M elements. We prove that for α > 0 the weighted system {t^α ⋅ r_n(t)}_{n∈Z∖A} is exact in the space L^2(0,1), that is, it is complete and minimal in L^2(0,1), if and only if α ∈ [M − 1/2, M + 1/2). We also show that such a system is not a Riesz basis for L^2(0,1).
In particular, the weighted trigonometric system {t^α ⋅ e^{2πint}}_{n∈Z∖A} is exact in L^2(0,1) if and only if α ∈ [M − 1/2, M + 1/2), but this system is not even a Schauder basis for L^2(0,1). This extends a result of Heil and Yoon (2012), who considered the analogous problem when α is a positive integer. Combining the non-basis property of {t^α ⋅ e^{2πint}}_{n∈Z∖A} with a result of Heil et al. (2023), it follows that for any α ≥ 1/2 the overcomplete system {t^α ⋅ e^{2πint}}_{n∈Z} admits no reproducing partner in L^2(0,1). Nevertheless, this overcomplete system is a weighted lower semi frame for L^2(0,1). This follows from a recent result of ours showing that any exact system in a Hilbert space H is a weighted lower semi frame for H; for completeness, that result is reproved here. The invertibility of Vandermonde matrices plays a crucial role in establishing the exactness and non-basis properties of the above systems.
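A crude finite-dimensional caricature of these statements is to discretize the weighted trigonometric system on a grid, drop the frequencies in A, and inspect the extreme singular values of the resulting matrix for several values of α. The truncation range, grid size, and choice A = {0} (so M = 1) below are arbitrary, and a finite section cannot prove exactness or the failure of the Riesz basis property; it is meant only to give some numerical intuition.

```python
import numpy as np

def extreme_singular_values(alpha, N=40, A=(0,), K=2000):
    """Discretize {t^alpha * exp(2*pi*i*n*t) : |n| <= N, n not in A} on a midpoint grid of
    [0, 1] and return the largest/smallest singular values of the quadrature-weighted matrix."""
    t = (np.arange(K) + 0.5) / K
    ns = np.array([n for n in range(-N, N + 1) if n not in A])
    G = (t[:, None] ** alpha) * np.exp(2j * np.pi * t[:, None] * ns[None, :])
    s = np.linalg.svd(G / np.sqrt(K), compute_uv=False)
    return s[0], s[-1]

# A = {0}, so M = 1 and the exactness window from the abstract is alpha in [1/2, 3/2)
for alpha in (0.25, 0.50, 0.90):
    smax, smin = extreme_singular_values(alpha)
    print(f"alpha = {alpha:4.2f}:  sigma_max = {smax:.3f},  sigma_min = {smin:.3e}")
```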
Citations: 0
On the limits of neural network explainability via descrambling
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-07-07 | DOI: 10.1016/j.acha.2025.101793
Shashank Sule , Richard G. Spencer , Wojciech Czaja
We characterize the exact solutions to neural network descrambling, a mathematical model for explaining the fully connected layers of trained neural networks (NNs). By reformulating the problem as the minimization of the Brockett function arising in graph matching and complexity theory, we show that the principal components of the hidden layer preactivations can be characterized as the optimal "explainers" or descramblers for the layer weights, leading to descrambled weight matrices. We show that in typical deep learning contexts these descramblers take diverse and interesting forms, including (1) matching largest principal components with the lowest frequency modes of the Fourier basis for isotropic hidden data, (2) discovering the semantic development in two-layer linear NNs for signal recovery problems, and (3) explaining CNNs by optimally permuting the neurons. Our numerical experiments indicate that the eigendecompositions of the hidden layer data (now understood as the descramblers) can also reveal the layer's underlying transformation. These results illustrate that the SVD is more directly related to the explainability of NNs than previously thought and offers a promising avenue for discovering interpretable motifs for the hidden action of NNs, especially in contexts of operator learning or physics-informed NNs, where the input/output data has limited human readability.
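One plausible reading of "principal components of the hidden layer preactivations as descramblers" can be sketched as follows: compute the PCA of a layer's preactivations and express the layer weights in that principal basis. The toy network, synthetic data, and the exact form of the descrambled matrix below are illustrative assumptions; the paper's precise descrambling objective (the Brockett-function minimization) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 500, 10, 16                        # samples, input dimension, hidden width (illustrative)
X = rng.normal(size=(n, d))                  # synthetic inputs
W1 = rng.normal(size=(h, d)) / np.sqrt(d)    # first-layer weights of a toy network
b1 = rng.normal(size=h)

Z = X @ W1.T + b1                            # hidden-layer preactivations, shape (n, h)

# principal components of the (centered) preactivations via SVD
Zc = Z - Z.mean(axis=0)
_, s, Vt = np.linalg.svd(Zc, full_matrices=False)
P = Vt.T                                     # columns are the principal directions in R^h

W1_descrambled = P.T @ W1                    # the layer weights expressed in the principal basis
explained = (s[:3] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by the top 3 components: {explained:.2f}")
print("descrambled weight matrix shape:", W1_descrambled.shape)
```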
Citations: 0
Gaussian process regression with log-linear scaling for common non-stationary kernels
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-07-05 | DOI: 10.1016/j.acha.2025.101792
P. Michael Kielstra , Michael Lindsey
We introduce a fast algorithm for Gaussian process regression in low dimensions, applicable to a widely-used family of non-stationary kernels. The non-stationarity of these kernels is induced by arbitrary spatially-varying vertical and horizontal scales. In particular, any stationary kernel can be accommodated as a special case, and we focus especially on the generalization of the standard Matérn kernel. Our subroutine for kernel matrix-vector multiplications scales almost optimally as O(N log N), where N is the number of regression points. Like the recently developed equispaced Fourier Gaussian process (EFGP) methodology, which is applicable only to stationary kernels, our approach exploits non-uniform fast Fourier transforms (NUFFTs). We offer a complete analysis controlling the approximation error of our method, and we validate the method's practical performance with numerical experiments. In particular we demonstrate improved scalability compared to state-of-the-art rank-structured approaches in spatial dimension d > 1.
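For context, a standard dense Matérn GP regression baseline, which costs O(N^2) memory and O(N^3) time, is sketched below in NumPy (the kernel choice, length scale, and noise level are arbitrary). The paper's algorithm replaces the dense kernel solves with NUFFT-accelerated matrix-vector products inside an iterative solver, which is not shown here.

```python
import numpy as np

def matern32(x, y, ell=0.2, amp=1.0):
    """Stationary Matérn-3/2 kernel on 1-D inputs; the paper treats non-stationary
    generalizations with spatially varying vertical/horizontal scales."""
    r = np.abs(x[:, None] - y[None, :]) / ell
    return amp**2 * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

rng = np.random.default_rng(0)
N = 400
x = np.sort(rng.uniform(0.0, 1.0, N))
y = np.sin(6 * np.pi * x) + 0.1 * rng.normal(size=N)     # noisy synthetic observations
noise_var = 0.1**2

K = matern32(x, x) + noise_var * np.eye(N)               # dense kernel matrix: O(N^2) memory
alpha = np.linalg.solve(K, y)                            # O(N^3) direct solve the fast method avoids

x_test = np.linspace(0.0, 1.0, 5)
posterior_mean = matern32(x_test, x) @ alpha
print("posterior mean at test points:", np.round(posterior_mean, 3))
```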
Citations: 0
Energy propagation in scattering convolution networks can be arbitrarily slow
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-06-20 | DOI: 10.1016/j.acha.2025.101790
Hartmut Führ, Max Getter
We analyze energy decay for deep convolutional neural networks employed as feature extractors, including Mallat's wavelet scattering transform. For time-frequency scattering transforms based on Gabor filters, previous work has established that energy decay is exponential for arbitrary square-integrable input signals. In contrast, our main results allow proving that this is false for wavelet scattering in any dimension. Specifically, we show that the energy decay of wavelet and wavelet-like scattering transforms acting on generic square-integrable signals can be arbitrarily slow. Importantly, this slow decay behavior holds for dense subsets of L^2(R^d), indicating that rapid energy decay is generally an unstable property of signals. We complement these findings with positive results that allow us to infer fast (up to exponential) energy decay for generalized Sobolev spaces tailored to the frequency localization of the underlying filter bank. Both negative and positive results highlight that energy decay in scattering networks critically depends on the interplay between the respective frequency localizations of both the signal and the filters used.
Citations: 0
ANOVA-boosting for random Fourier features
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-06-18 | DOI: 10.1016/j.acha.2025.101789
Daniel Potts, Laura Weidensager
We propose two algorithms for boosting random Fourier feature models for approximating high-dimensional functions. These methods utilize the classical and generalized analysis of variance (ANOVA) decomposition to learn low-order functions, where there are few interactions between the variables. Our algorithms are able to find an index set of important input variables and variable interactions reliably.
Furthermore, we generalize existing random Fourier feature models to an ANOVA setting, where terms of different order can be used. Our algorithms have the advantage of being interpretable, meaning that the influence of every input variable is known in the learned model, even for dependent input variables. We provide theoretical as well as numerical results showing that our algorithms perform well for sensitivity analysis. The ANOVA-boosting step reduces the approximation error of existing methods significantly.
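The base model that the boosting procedures operate on is random Fourier feature regression; a minimal ridge-regression sketch for an (approximate) Gaussian kernel is given below, with an arbitrary bandwidth, regularization parameter, and synthetic target. The ANOVA decomposition, the detection of important variables and interactions, and the boosting step itself are the paper's contributions and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 6, 300                       # samples, input dimension, number of random features
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(np.pi * X[:, 0]) * X[:, 1] + 0.3 * X[:, 2] + 0.05 * rng.normal(size=n)

# random Fourier features approximating a Gaussian kernel of bandwidth sigma (illustrative values)
sigma, lam = 0.7, 1e-3
Omega = rng.normal(scale=1.0 / sigma, size=(d, m))       # frequencies omega_j ~ N(0, sigma^-2 I)
b = rng.uniform(0.0, 2.0 * np.pi, size=m)                # random phases
Phi = np.sqrt(2.0 / m) * np.cos(X @ Omega + b)           # feature map, shape (n, m)

# ridge regression on the features
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(f"training RMSE of the plain RFF model: {rmse:.3f}")
```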
Citations: 0
New results on sparse representations in unions of orthonormal bases
IF 2.6 | CAS Tier 2 (Mathematics) | Q1 MATHEMATICS, APPLIED | Pub Date: 2025-06-11 | DOI: 10.1016/j.acha.2025.101786
Tao Zhang , Gennian Ge
The problem of sparse representation has significant applications in signal processing. The spark of a dictionary plays a crucial role in the study of sparse representation. Donoho and Elad initially explored the spark, and they provided a general lower bound. When the dictionary is a union of several orthonormal bases, Gribonval and Nielsen presented an improved lower bound for the spark. In this paper, we introduce a new construction of a dictionary achieving the spark bound given by Gribonval and Nielsen. More precisely, let q be a power of 2; we show that for any positive integer t, there exists a dictionary in R^{q^{2t}}, which is a union of q+1 orthonormal bases, such that the spark of the dictionary attains Gribonval-Nielsen's bound. Our result extends the previously best known result from t = 1, 2 to arbitrary positive integer t, and our construction is technically different from previous ones. Their method is more combinatorial, while ours is algebraic, which is more general.
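In tiny dimensions the spark can be computed by brute force, which makes the objects in this abstract easy to experiment with. The sketch below uses the classical union of the identity and the DFT basis in C^4 (not the construction of the paper) and reports its mutual coherence, the Donoho-Elad coherence bound 1 + 1/μ, and the exact spark.

```python
import numpy as np
from itertools import combinations

n = 4
I = np.eye(n)
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
D = np.hstack([I, F])                        # dictionary: union of two orthonormal bases in C^4

# mutual coherence: largest inner product between distinct (unit-norm) columns
mu = max(abs(np.vdot(D[:, i], D[:, j])) for i, j in combinations(range(D.shape[1]), 2))

def spark(D, tol=1e-10):
    """Smallest number of columns of D that are linearly dependent (brute force over subsets)."""
    m = D.shape[1]
    for k in range(2, m + 1):
        for cols in combinations(range(m), k):
            if np.linalg.matrix_rank(D[:, list(cols)], tol=tol) < k:
                return k
    return np.inf

print(f"coherence mu = {mu:.3f}, Donoho-Elad bound 1 + 1/mu = {1 + 1/mu:.1f}, spark = {spark(D)}")
```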
Citations: 0