Pub Date: 2025-12-11 | DOI: 10.1016/j.acha.2025.101847
Hendrik Bernd Zarucha , Peter Jung
It is known that sparse recovery is possible if the number of measurements is on the order of the sparsity, but the corresponding decoders either lack polynomial decoding time or robustness to noise. Commonly, decoders that rely on a null space property are used. These achieve polynomial-time decoding and are robust to additive noise, but pay the price of requiring more measurements. The non-negative least residual has been established as such a decoder for non-negative recovery. A new equivalent condition for uniform, robust recovery of non-negative sparse vectors with the non-negative least residual that is not based on null space properties is introduced. It is shown that the number of measurements required by this condition is only on the order of the sparsity. Further, it is explained why the robustness to additive noise is similar, but not equal, to that of decoders based on null space properties.
Title: "Non-negative sparse recovery at minimal sampling rate" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101847.
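The non-negative least residual decoder described above solves min_{x ≥ 0} ‖Ax − y‖₂. As a minimal sketch of that program (illustrative only, not the paper's decoder analysis; the matrix sizes and the projected-gradient solver are my assumptions):

```python
import numpy as np

def nnlr_decode(A, y, iters=5000):
    """Projected-gradient sketch of the non-negative least residual
    program  min_{x >= 0} ||A x - y||_2.  Illustrative only."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)      # gradient step on 0.5*||Ax - y||^2
        x = np.maximum(x, 0.0)                # project onto the non-negative orthant
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))             # toy measurement matrix
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.0, 2.0, 0.5]          # 3-sparse, non-negative signal
y = A @ x_true                                # noiseless measurements
x_hat = nnlr_decode(A, y)
```

This toy instance is overdetermined and noiseless, so the minimizer coincides with the true signal; the paper's contribution concerns the underdetermined regime where the number of measurements scales with the sparsity.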
Pub Date: 2025-12-11 | DOI: 10.1016/j.acha.2025.101838
Tal Amir , Tamir Bendory , Nadav Dym , Dan Edidin
The generalized phase retrieval problem over compact groups aims to recover a set of matrices, representing an unknown signal, from their associated Gram matrices. This framework generalizes the classical phase retrieval problem, which reconstructs a signal from the magnitudes of its Fourier transform, to a richer setting involving non-abelian compact groups. In this broader context, the unknown phases in Fourier space are replaced by unknown orthogonal matrices that arise from the action of a compact group on a finite-dimensional vector space. This problem is primarily motivated by advances in electron microscopy for determining the 3D structure of biological macromolecules from highly noisy observations. To capture realistic assumptions from machine learning and signal processing, we model the signal as belonging to one of several broad structural families: a generic linear subspace, a sparse representation in a generic basis, the output of a generic ReLU neural network, or a generic low-dimensional manifold. Our main result shows that, for a prior of sufficiently low dimension, the generalized phase retrieval problem not only admits a unique solution (up to inherent group symmetries), but also satisfies a bi-Lipschitz property. This implies robustness to both noise and model mismatch, an essential requirement for practical use, especially when measurements are severely corrupted by noise. These findings provide theoretical support for a wide class of scientific problems under modern structural assumptions, and they offer strong foundations for developing robust algorithms in high-noise regimes.
Title: "The stability of generalized phase retrieval problem over compact groups" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101838.
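As a toy illustration of the recovery-up-to-symmetry described above (my assumed example, not taken from the paper): any matrix factor of a Gram matrix agrees with the original matrix up to an orthogonal transformation, which is exactly the inherent ambiguity a stability result must tolerate.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))        # unknown "signal" matrix (invertible w.h.p.)
G = X.T @ X                            # observed Gram matrix

# Recover some factor Y with Y^T Y = G via Cholesky: G = L L^T, take Y = L^T.
L = np.linalg.cholesky(G)
Y = L.T

# Y agrees with X only up to an orthogonal matrix Q, since Y = Q X with
# Q = Y X^{-1} satisfying Q^T Q = X^{-T} G X^{-1} = I.
Q = Y @ np.linalg.inv(X)
```

Any other factorization of G produces the same Y up to left-multiplication by an orthogonal matrix, so the "phase" information lost in forming G is precisely this Q.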
Pub Date: 2025-12-08 | DOI: 10.1016/j.acha.2025.101834
Chaoyue Liu , Libin Zhu , Mikhail Belkin
The recently discovered remarkable property that very wide neural networks in certain regimes are linear functions of their weights has become one of the key insights into understanding the mathematical foundations of deep learning. In this work, we show that this transition to linearity of wide neural networks can be viewed as an outcome of an iterated assembly procedure employed in the construction of neural networks. From the perspective of assembly, the output of a wide network can be viewed as an assembly of a large number of similar sub-models, which will transition to linearity as their number increases. This process can be iterated multiple times to show the transition to linearity of deep networks, including general feedforward neural networks with Directed Acyclic Graph (DAG) architecture.
Title: "Assembly and iteration: Transition to linearity of wide neural networks" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101834.
Pub Date: 2025-12-04 | DOI: 10.1016/j.acha.2025.101835
Na Zhang , Xinrui Liu , Qia Li
In this paper, we consider a new ℓ1/ℓ2 (the ratio of the ℓ1 and ℓ2 norms) based sparse signal recovery model, which incorporates a ball constraint to ensure the existence of optimal solutions. The presence of two constraints in this model causes algorithmic difficulties for its numerical treatment. To overcome these difficulties, we propose a penalty formulation for the model and establish the relationships between the optimal solutions and stationary points of the two problems. Inspired by the parametric approach for fractional programs, we further propose a parameterized proximal-gradient algorithm (PPGA) and its line-search counterpart (PPGA_L) for solving a general structured fractional program that has the penalty problem as a special case. In particular, we derive a closed-form solution to the proximal operator of a certain nonconvex function, which must be computed in each iteration when specializing the proposed algorithms to the penalty problem. Moreover, we prove the global convergence of the entire sequences generated by PPGA and PPGA_L with monotone line search for the penalty problem. Numerical experiments demonstrate the efficiency of the proposed algorithms for noise-free and noisy signal recovery.
Title: "Parameterized proximal-gradient algorithms for L1/L2 sparse signal recovery" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101835.
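The ℓ1/ℓ2 ratio underlying this model is scale-invariant and small for sparse vectors: for a k-sparse vector with equal magnitudes it equals √k. A quick check of that property (an illustration of the objective only, not the paper's algorithm):

```python
import numpy as np

def l1_over_l2(x):
    """Scale-invariant sparsity measure ||x||_1 / ||x||_2."""
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

sparse = np.zeros(100)
sparse[:4] = 3.0                  # 4-sparse, equal magnitudes -> ratio sqrt(4) = 2
dense = np.ones(100)              # fully dense -> ratio sqrt(100) = 10

r_sparse = l1_over_l2(sparse)
r_dense = l1_over_l2(dense)
```

Because the ratio is invariant under scaling, minimizing it alone admits no finite minimizer in general, which is why the model adds a ball constraint to guarantee existence of optimal solutions.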
Pub Date: 2025-12-02 | DOI: 10.1016/j.acha.2025.101837
Shihao Zhang , Rayan Saab
Deep neural networks have achieved state-of-the-art performance across numerous applications, but their high memory and computational demands present significant challenges, particularly in resource-constrained environments. Model compression techniques, such as low-rank approximation, offer a promising solution by reducing the size and complexity of these networks while only minimally sacrificing accuracy. In this paper, we develop an analytical framework for data-driven post-training low-rank compression. We prove three recovery theorems under progressively weaker assumptions about the approximate low-rank structure of activations, modeling deviations via noise. Our results represent a step toward explaining why data-driven low-rank compression methods outperform data-agnostic approaches and toward theoretically grounded compression algorithms that reduce inference costs while maintaining performance.
Title: "Theoretical guarantees for low-rank compression of deep neural networks" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101837.
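The data-agnostic baseline that data-driven methods are compared against is truncated SVD of a layer's weight matrix; by the Eckart-Young theorem its Frobenius error equals the root-sum-of-squares of the discarded singular values. A minimal sketch of that baseline (my assumed sizes; the paper's data-driven methods go beyond this):

```python
import numpy as np

def low_rank_compress(W, r):
    """Replace weight matrix W by its best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]       # broadcast s over columns of U

rng = np.random.default_rng(2)
W = rng.standard_normal((64, 32))            # toy dense-layer weights
W_r = low_rank_compress(W, r=8)

# Eckart-Young: the Frobenius error of the best rank-r approximation
# equals sqrt(sum of squared discarded singular values).
s = np.linalg.svd(W, compute_uv=False)
err = np.linalg.norm(W - W_r, "fro")
```

Data-driven variants instead minimize the error on the layer's *activations* rather than on W itself, which is the gap the paper's recovery theorems address.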
Pub Date: 2025-12-01 | DOI: 10.1016/j.acha.2025.101839
Yi-Ju Yen , De-Yan Lu , Sing-Yuan Yeh , Jian-Jiun Ding , Chun-Yen Shen
This study focuses on the analysis of signals containing multiple components with crossover instantaneous frequencies (IFs). This problem was initially addressed with the chirplet transform (CT), which can be sharpened by adding a synchrosqueezing step, yielding the synchrosqueezed chirplet transform (SCT). However, we found that the SCT breaks down for signals with strong chirp modulation because of inaccurate IF estimation. In this paper, we present an improved post-processing of the CT. The main goal of this paper is to amend the estimation introduced in the SCT and develop a high-order synchrosqueezed chirplet transform. The proposed method reduces estimation error for a wider variety of strongly chirp-modulated multicomponent signals. A theoretical analysis of the new reassignment ingredient is provided, and numerical experiments on synthetic signals verify the effectiveness of the proposed high-order SCT.
Title: "High-order synchrosqueezed chirplet transforms for multicomponent signal analysis" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101839.
Pub Date: 2025-11-29 | DOI: 10.1016/j.acha.2025.101833
Geonho Hwang , Myungjoo Kang
The Convolutional Neural Network (CNN) is one of the most prominent neural network architectures in deep learning. Despite its widespread adoption, our understanding of its universal approximation properties has been limited due to its intricate nature. CNNs inherently function as tensor-to-tensor mappings, preserving the spatial structure of input data. However, limited research has explored the universal approximation properties of fully convolutional neural networks as arbitrary continuous tensor-to-tensor functions. In this study, we demonstrate that CNNs, when utilizing zero padding, can approximate arbitrary continuous functions in cases where the input and output have the same spatial shape. Additionally, we determine the minimum depth of the neural network required for approximation. We also verify that deep, narrow CNNs possess the universal approximation property as tensor-to-tensor functions. The results encompass a wide range of activation functions, and our analysis covers CNNs of all dimensions.
Title: "Universal approximation property of fully convolutional neural networks with zero padding" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101833.
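The tensor-to-tensor view rests on zero padding keeping the spatial shape fixed: a kernel of length 2p+1 applied after padding p zeros on each side returns an output of the input's length. A one-dimensional NumPy sketch of this "same"-shape convolution (cross-correlation, as in CNN layers; my toy example, not the paper's construction):

```python
import numpy as np

def conv1d_same(x, k):
    """1-D cross-correlation with zero padding so that output length == input length.
    Assumes an odd kernel length 2p + 1."""
    p = len(k) // 2
    xp = np.pad(x, p)                     # p zeros on each side
    return np.array([xp[i:i + len(k)] @ k for i in range(len(x))])

x = np.arange(6, dtype=float)             # spatial length 6
k = np.array([1.0, 0.0, -1.0])            # simple difference kernel
y = conv1d_same(x, k)                     # spatial length preserved: 6
```

Without the padding, each layer would shrink the spatial extent, and stacking layers could not represent arbitrary shape-preserving tensor-to-tensor maps.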
Pub Date: 2025-11-29 | DOI: 10.1016/j.acha.2025.101836
Dustin G. Mixon , Brantley Vose
For an unknown finite group G of automorphisms of a finite-dimensional Hilbert space, we find sharp bounds on the number of generic G-orbits needed to recover G up to group isomorphism, as well as the number needed to recover G as a concrete set of automorphisms.
Title: "Recovering a group from few orbits" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101836.
Pub Date: 2025-11-28 | DOI: 10.1016/j.acha.2025.101832
Morten Nielsen
We construct smooth localized orthonormal bases compatible with anisotropic Triebel-Lizorkin and Besov type spaces on ℝ^d. The construction uses tensor products of so-called univariate brushlet functions, built from local trigonometric bases in the frequency domain, and is painless in the sense that all parameters of the construction are explicitly specified. It is shown that the associated decomposition system forms unconditional bases for the full family of Triebel-Lizorkin and Besov type spaces, including the so-called α-modulation and α-Triebel-Lizorkin spaces. In the second part of the paper we study nonlinear m-term approximation with the constructed bases, deriving direct Jackson and inverse Bernstein inequalities for m-term approximation with the tensor brushlet system in α-modulation and α-Triebel-Lizorkin spaces. The inverse Bernstein estimates rely heavily on the fact that the constructed system is non-redundant.
Title: "Painless construction of unconditional bases for anisotropic modulation and Triebel-Lizorkin type spaces" | Applied and Computational Harmonic Analysis, Vol. 82, Article 101832.
Pub Date: 2025-11-15 | DOI: 10.1016/j.acha.2025.101825
Simon Halvdansson
For time-frequency localization operators, related to the short-time Fourier transform, with symbol RΩ, we work out the exact large-R eigenvalue behavior for rotationally invariant Ω and conjecture that the same relation holds for all scaled symbols RΩ as long as the window is the standard Gaussian. Specifically, we conjecture that the kth eigenvalue of the localization operator with symbol RΩ converges to ½ erfc(√(2π) (k − R²|Ω|) / (R|∂Ω|)) as R → ∞. To support the conjecture, we compute the eigenvalues of discrete frame multipliers with various symbols using LTFAT and find that they agree closely with the conjectured profile.
Title: "Empirical plunge profiles of time-frequency localization operators" | Applied and Computational Harmonic Analysis, Vol. 81, Article 101825.
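The conjectured profile ½ erfc(√(2π)(k − R²|Ω|)/(R|∂Ω|)) can be evaluated directly: for the unit disk (|Ω| = π, |∂Ω| = 2π) the eigenvalues should plunge from ≈1 to ≈0 around k ≈ πR². A quick numeric check of the formula's shape (toy parameters of my choosing, not the paper's LTFAT experiments):

```python
import math

def plunge_profile(k, R, area, perimeter):
    """Conjectured k-th eigenvalue of the localization operator with symbol R*Omega:
    0.5 * erfc(sqrt(2*pi) * (k - R^2 * area) / (R * perimeter))."""
    arg = math.sqrt(2 * math.pi) * (k - R ** 2 * area) / (R * perimeter)
    return 0.5 * math.erfc(arg)

R = 10.0
area, perimeter = math.pi, 2 * math.pi        # unit disk Omega
lam = [plunge_profile(k, R, area, perimeter) for k in range(600)]
# lam decreases monotonically from ~1 to ~0, crossing 1/2 near k = pi * R^2 ~ 314
```

The width of the plunge region scales like R|∂Ω|, consistent with the heuristic that roughly R²|Ω| eigenvalues are close to 1 and the transition is governed by the boundary length.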