Title: Proximal Subgradient Norm Minimization of ISTA and FISTA
Authors: Bowen Li, Bin Shi, Ya-Xiang Yuan
Pub Date: 2025-12-11; DOI: 10.1016/j.acha.2025.101848
Title: The stability of generalized phase retrieval problem over compact groups
Authors: Tamir Amir, Tamir Bendory, Nadav Dym, Dan Edidin
Pub Date: 2025-12-11; DOI: 10.1016/j.acha.2025.101838
Title: Assembly and iteration: transition to linearity of wide neural networks
Authors: Chaoyue Liu, Libin Zhu, Mikhail Belkin
Pub Date: 2025-12-08; DOI: 10.1016/j.acha.2025.101834
Abstract: The recently discovered remarkable property that very wide neural networks in certain regimes are linear functions of their weights has become one of the key insights into understanding the mathematical foundations of deep learning. In this work, we show that this transition to linearity of wide neural networks can be viewed as an outcome of an iterated assembly procedure employed in the construction of neural networks. From the perspective of assembly, the output of a wide network can be viewed as an assembly of a large number of similar sub-models, which will transition to linearity as their number increases. This process can be iterated multiple times to show the transition to linearity of deep networks, including general feedforward neural networks with Directed Acyclic Graph (DAG) architecture.
Title: Theoretical guarantees for low-rank compression of deep neural networks
Authors: Shihao Zhang, Rayan Saab
Pub Date: 2025-12-02; DOI: 10.1016/j.acha.2025.101837
Applied and Computational Harmonic Analysis, Volume 82, Article 101837
Abstract: Deep neural networks have achieved state-of-the-art performance across numerous applications, but their high memory and computational demands present significant challenges, particularly in resource-constrained environments. Model compression techniques, such as low-rank approximation, offer a promising solution by reducing the size and complexity of these networks while only minimally sacrificing accuracy. In this paper, we develop an analytical framework for data-driven post-training low-rank compression. We prove three recovery theorems under progressively weaker assumptions about the approximate low-rank structure of activations, modeling deviations via noise. Our results represent a step toward explaining why data-driven low-rank compression methods outperform data-agnostic approaches and toward theoretically grounded compression algorithms that reduce inference costs while maintaining performance.
Title: Recovering a group from few orbits
Authors: Dustin G. Mixon, Brantley Vose
Pub Date: 2025-11-29; DOI: 10.1016/j.acha.2025.101836
Title: Painless construction of unconditional bases for anisotropic modulation and Triebel-Lizorkin type spaces
Authors: Morten Nielsen
Pub Date: 2025-11-28; DOI: 10.1016/j.acha.2025.101832
Applied and Computational Harmonic Analysis, Volume 82, Article 101832
Abstract: We construct smooth localized orthonormal bases compatible with anisotropic Triebel-Lizorkin and Besov type spaces on ℝ^d. The construction is based on tensor products of so-called univariate brushlet functions, which are built from local trigonometric bases in the frequency domain, and it is painless in the sense that all parameters of the construction are explicitly specified. It is shown that the associated decomposition system forms unconditional bases for the full family of Triebel-Lizorkin and Besov type spaces, including the so-called α-modulation and α-Triebel-Lizorkin spaces. In the second part of the paper we study nonlinear m-term approximation with the constructed bases, where direct Jackson and Bernstein inequalities for m-term approximation with the tensor brushlet system in α-modulation and α-Triebel-Lizorkin spaces are derived. The inverse Bernstein estimates rely heavily on the fact that the constructed system is non-redundant.