We develop a data-driven optimal shrinkage algorithm, named extended OptShrink (eOptShrink), for matrix denoising with high-dimensional noise that has a separable covariance structure; such noise is colored and dependent across samples. The algorithm leverages the asymptotic behavior of the singular values and singular vectors of the random matrix associated with the noisy data. Our theory covers the sticking property of non-outlier singular values, the delocalization of singular vectors of weak signals, and the spectral behavior of outlier singular values and vectors. We introduce three estimators: a novel rank estimator, an estimator for the spectral distribution of the pure-noise matrix, and the optimal shrinker eOptShrink. Notably, eOptShrink does not require estimating the noise's separable covariance structure. We provide a theoretical guarantee for these estimators with a convergence rate. Through numerical simulations and comparisons with state-of-the-art optimal shrinkage algorithms, we demonstrate an application of eOptShrink to extracting maternal and fetal electrocardiograms from single-channel trans-abdominal maternal electrocardiograms.
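The core idea behind any singular-value shrinkage denoiser, including eOptShrink, can be illustrated with a minimal sketch: take the SVD of the noisy matrix, attenuate the singular values, and reconstruct. The sketch below uses hard truncation at a given rank purely for illustration; the actual eOptShrink algorithm replaces this with a data-driven optimal shrinker and its own rank estimator, neither of which is reproduced here.

```python
import numpy as np

def shrink_denoise(Y, rank):
    """Denoise a matrix by truncating its singular values.

    Illustrative hard-threshold variant; eOptShrink instead applies a
    data-driven optimal shrinkage function to each singular value.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.where(np.arange(len(s)) < rank, s, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
X = np.outer(rng.normal(size=50), rng.normal(size=40))  # rank-1 signal
Y = X + 0.1 * rng.normal(size=(50, 40))                 # noisy observation
X_hat = shrink_denoise(Y, rank=1)                       # denoised estimate
```

With a strong signal, the truncated estimate is substantially closer to the clean matrix than the raw observation is.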
Generalized translation invariant (GTI) systems unify the discrete frame theory of generalized shift-invariant systems with its continuous counterparts, such as wavelets, shearlets, Gabor transforms, and others. This article provides sufficient conditions to construct pairwise orthogonal Parseval GTI frames in L^2(G) satisfying the local integrability condition (LIC) and having Calderón sum one, where G is a second countable locally compact abelian group. Pairwise orthogonality plays a crucial role in multiple access communications, data hiding, synthesizing superframes and frames, etc. Further, we provide a result for constructing N pairwise orthogonal GTI Parseval frames. Consequently, we obtain explicit constructions of pairwise orthogonal Parseval frames using B-splines as generating functions. Finally, the results are discussed in particular for wavelet systems.
Sparse binary matrices are of great interest in the fields of sparse recovery, nonnegative compressed sensing, network statistics, and theoretical computer science. This class of matrices makes it possible to perform signal recovery with lower storage costs and faster decoding algorithms. In particular, Bernoulli(p) matrices, formed by independent and identically distributed (i.i.d.) Bernoulli(p) random variables, are of practical relevance in the context of noise-blind recovery in nonnegative compressed sensing.
In this work, we investigate the robust nullspace property of Bernoulli(p) matrices. Previous results in the literature establish that such matrices can accurately recover n-dimensional s-sparse vectors with a number of measurements that exceeds the optimum by a constant factor depending only on the parameter p. These results suggest that in the sparse regime, as p approaches zero, the (sparse) Bernoulli(p) matrix requires significantly more measurements than the minimum necessary, as achieved by standard isotropic subgaussian designs. However, we show that this is not the case.
Our main result characterizes, for a wide range of sparsity levels s, the smallest p for which sparse recovery can be achieved with the minimal number of measurements. We also provide matching lower bounds to establish the optimality of our results and explore connections with the theory of invertibility of discrete random matrices and integer compressed sensing.
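The measurement model studied in this abstract can be sketched concretely: draw a Bernoulli(p) matrix, measure a nonnegative sparse vector, and recover it by l1 minimization with nonnegativity constraints, solved here as a linear program. The sizes n, m, s and the value of p below are illustrative choices of ours, not parameters from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, s, p = 60, 30, 3, 0.5   # illustrative sizes, not from the paper

# Bernoulli(p) measurement matrix: i.i.d. entries equal to 1 with prob. p
A = (rng.random((m, n)) < p).astype(float)

# Nonnegative s-sparse ground truth and its measurements
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.uniform(1, 2, size=s)
y = A @ x

# l1 minimization with nonnegativity: min 1^T z  s.t.  A z = y, z >= 0
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=[(0, None)] * n)
x_hat = res.x   # typically matches x when m is large enough relative to s
```

The LP is feasible by construction (x itself satisfies the constraints), and in this regime the minimizer usually coincides with the true sparse vector.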
The diffusion maps embedding of data lying on a manifold has shown success in tasks such as dimensionality reduction, clustering, and data visualization. In this work, we consider embedding data sets that were sampled from a manifold which is closed under the action of a continuous matrix group. An example of such a data set is images whose planar rotations are arbitrary. The G-invariant graph Laplacian, introduced in Part I of this work, admits eigenfunctions in the form of tensor products between the elements of the irreducible unitary representations of the group and eigenvectors of certain matrices. We employ these eigenfunctions to derive diffusion maps that intrinsically account for the group action on the data. In particular, we construct both equivariant and invariant embeddings, which can be used to cluster and align the data points. We demonstrate the utility of our construction in the problem of random computerized tomography.
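For readers unfamiliar with the baseline construction that the G-invariant graph Laplacian extends, here is a minimal sketch of the classical diffusion maps embedding (no group invariance); the function name and parameter choices are ours, not the paper's.

```python
import numpy as np

def diffusion_map(X, eps, n_components=2, t=1):
    """Classical diffusion maps embedding of the rows of X.

    Sketch of the standard construction; the paper's G-invariant
    graph Laplacian builds invariance to a group action on top of it.
    """
    # Pairwise squared distances -> Gaussian affinity kernel
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    # Row-normalize into a Markov transition matrix
    P = K / K.sum(axis=1, keepdims=True)
    # P is similar to a symmetric matrix, so its eigenvalues are real
    w, V = np.linalg.eig(P)
    idx = np.argsort(-w.real)
    w, V = w.real[idx], V.real[:, idx]
    # Drop the trivial constant eigenvector; scale by eigenvalues^t
    return V[:, 1:n_components + 1] * (w[1:n_components + 1] ** t)

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)]   # data sampled on a circle
emb = diffusion_map(X, eps=0.5)
```

The circle example is the simplest manifold closed under a continuous group action (planar rotation), which is exactly the setting the G-invariant construction targets.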
Given a finite group, we study the Gaussian series of the matrices in the image of its left regular representation. We propose such random matrices as a benchmark for improvements to the noncommutative Khintchine inequality, and we highlight an application to the matrix Spencer conjecture.
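For the cyclic group Z/nZ, the left regular representation consists of the n cyclic-shift permutation matrices, so the Gaussian series in question is a random circulant matrix whose spectral norm equals the largest modulus of the DFT of the Gaussian coefficients. The following sketch builds one realization; the choice of group and size is ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8

# Left regular representation of Z/nZ: the cyclic shift matrix P and
# its powers P^0, ..., P^{n-1}
P = np.roll(np.eye(n), 1, axis=0)
reps = [np.linalg.matrix_power(P, k) for k in range(n)]

# Gaussian series: sum_k g_k * rho(k) with i.i.d. standard Gaussians
g = rng.normal(size=n)
X = sum(gk * Rk for gk, Rk in zip(g, reps))

# X is circulant (hence normal); its spectral norm is the largest
# modulus among the DFT coefficients of g
op_norm = np.linalg.norm(X, 2)
```

Computing the norm via the FFT of g and via the SVD of X gives the same answer, which is a convenient sanity check on the construction.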
We prove the existence of an n × n positive semidefinite matrix such that any decomposition into rank-1 matrices must have factors with large norm; more precisely, the norms obey a lower bound with a constant c independent of n. This provides a lower bound for the Balan–Jiang matrix problem. The construction is probabilistic.