{"title":"通过迭代 GLASSO 和投影,利用核心特征向量先验进行谱图学习","authors":"Saghar Bagheri;Tam Thuc Do;Gene Cheung;Antonio Ortega","doi":"10.1109/TSP.2024.3446453","DOIUrl":null,"url":null,"abstract":"Before the execution of many standard graph signal processing (GSP) modules, such as compression and restoration, learning of a graph that encodes pairwise (dis)similarities in data is an important precursor. In data-starved scenarios, to reduce parameterization, previous graph learning algorithms make assumptions in the nodal domain on i) graph connectivity (e.g., edge sparsity), and/or ii) edge weights (e.g., positive edges only). In this paper, given an empirical covariance matrix \n<inline-formula><tex-math>$\\bar{{\\mathbf{C}}}$</tex-math></inline-formula>\n estimated from sparse data, we consider instead a spectral-domain assumption on the graph Laplacian matrix \n<inline-formula><tex-math>${\\mathcal{L}}$</tex-math></inline-formula>\n: the first \n<inline-formula><tex-math>$K$</tex-math></inline-formula>\n eigenvectors (called “core” eigenvectors) \n<inline-formula><tex-math>$\\{{\\mathbf{u}}_{k}\\}$</tex-math></inline-formula>\n of \n<inline-formula><tex-math>${\\mathcal{L}}$</tex-math></inline-formula>\n are pre-selected—e.g., based on domain-specific knowledge—and only the remaining eigenvectors are learned and parameterized. We first prove that, inside a Hilbert space of real symmetric matrices, the subspace \n<inline-formula><tex-math>${\\mathcal{H}}_{\\mathbf{u}}^{+}$</tex-math></inline-formula>\n of positive semi-definite (PSD) matrices sharing a common set of core \n<inline-formula><tex-math>$K$</tex-math></inline-formula>\n eigenvectors \n<inline-formula><tex-math>$\\{{\\mathbf{u}}_{k}\\}$</tex-math></inline-formula>\n is a convex cone. Inspired by the Gram-Schmidt procedure, we then construct an efficient operator to project a given positive definite (PD) matrix onto \n<inline-formula><tex-math>${\\mathcal{H}}_{\\mathbf{u}}^{+}$</tex-math></inline-formula>\n. Finally, we design a hybrid graphical lasso/projection algorithm to compute a locally optimal inverse Laplacian \n<inline-formula><tex-math>${\\mathcal{L}}^{-1}\\in{\\mathcal{H}}_{\\mathbf{u}}^{+}$</tex-math></inline-formula>\n given \n<inline-formula><tex-math>$\\bar{{\\mathbf{C}}}$</tex-math></inline-formula>\n. We apply our graph learning algorithm in two practical settings: parliamentary voting interpolation and predictive transform coding in image compression. Experiments show that our algorithm outperformed existing graph learning schemes in data-starved scenarios for both synthetic data and these two settings.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"72 ","pages":"3958-3972"},"PeriodicalIF":4.6000,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Spectral Graph Learning With Core Eigenvectors Prior via Iterative GLASSO and Projection\",\"authors\":\"Saghar Bagheri;Tam Thuc Do;Gene Cheung;Antonio Ortega\",\"doi\":\"10.1109/TSP.2024.3446453\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Before the execution of many standard graph signal processing (GSP) modules, such as compression and restoration, learning of a graph that encodes pairwise (dis)similarities in data is an important precursor. 
In data-starved scenarios, to reduce parameterization, previous graph learning algorithms make assumptions in the nodal domain on i) graph connectivity (e.g., edge sparsity), and/or ii) edge weights (e.g., positive edges only). In this paper, given an empirical covariance matrix \\n<inline-formula><tex-math>$\\\\bar{{\\\\mathbf{C}}}$</tex-math></inline-formula>\\n estimated from sparse data, we consider instead a spectral-domain assumption on the graph Laplacian matrix \\n<inline-formula><tex-math>${\\\\mathcal{L}}$</tex-math></inline-formula>\\n: the first \\n<inline-formula><tex-math>$K$</tex-math></inline-formula>\\n eigenvectors (called “core” eigenvectors) \\n<inline-formula><tex-math>$\\\\{{\\\\mathbf{u}}_{k}\\\\}$</tex-math></inline-formula>\\n of \\n<inline-formula><tex-math>${\\\\mathcal{L}}$</tex-math></inline-formula>\\n are pre-selected—e.g., based on domain-specific knowledge—and only the remaining eigenvectors are learned and parameterized. We first prove that, inside a Hilbert space of real symmetric matrices, the subspace \\n<inline-formula><tex-math>${\\\\mathcal{H}}_{\\\\mathbf{u}}^{+}$</tex-math></inline-formula>\\n of positive semi-definite (PSD) matrices sharing a common set of core \\n<inline-formula><tex-math>$K$</tex-math></inline-formula>\\n eigenvectors \\n<inline-formula><tex-math>$\\\\{{\\\\mathbf{u}}_{k}\\\\}$</tex-math></inline-formula>\\n is a convex cone. Inspired by the Gram-Schmidt procedure, we then construct an efficient operator to project a given positive definite (PD) matrix onto \\n<inline-formula><tex-math>${\\\\mathcal{H}}_{\\\\mathbf{u}}^{+}$</tex-math></inline-formula>\\n. Finally, we design a hybrid graphical lasso/projection algorithm to compute a locally optimal inverse Laplacian \\n<inline-formula><tex-math>${\\\\mathcal{L}}^{-1}\\\\in{\\\\mathcal{H}}_{\\\\mathbf{u}}^{+}$</tex-math></inline-formula>\\n given \\n<inline-formula><tex-math>$\\\\bar{{\\\\mathbf{C}}}$</tex-math></inline-formula>\\n. We apply our graph learning algorithm in two practical settings: parliamentary voting interpolation and predictive transform coding in image compression. Experiments show that our algorithm outperformed existing graph learning schemes in data-starved scenarios for both synthetic data and these two settings.\",\"PeriodicalId\":13330,\"journal\":{\"name\":\"IEEE Transactions on Signal Processing\",\"volume\":\"72 \",\"pages\":\"3958-3972\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10643489/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10643489/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Spectral Graph Learning With Core Eigenvectors Prior via Iterative GLASSO and Projection
Before the execution of many standard graph signal processing (GSP) modules, such as compression and restoration, learning of a graph that encodes pairwise (dis)similarities in data is an important precursor. In data-starved scenarios, to reduce parameterization, previous graph learning algorithms make assumptions in the nodal domain on i) graph connectivity (e.g., edge sparsity) and/or ii) edge weights (e.g., positive edges only). In this paper, given an empirical covariance matrix $\bar{\mathbf{C}}$ estimated from sparse data, we instead consider a spectral-domain assumption on the graph Laplacian matrix $\mathcal{L}$: the first $K$ eigenvectors (called "core" eigenvectors) $\{\mathbf{u}_k\}$ of $\mathcal{L}$ are pre-selected, e.g., based on domain-specific knowledge, and only the remaining eigenvectors are learned and parameterized. We first prove that, inside a Hilbert space of real symmetric matrices, the subset $\mathcal{H}_{\mathbf{u}}^{+}$ of positive semi-definite (PSD) matrices sharing a common set of $K$ core eigenvectors $\{\mathbf{u}_k\}$ is a convex cone. Inspired by the Gram-Schmidt procedure, we then construct an efficient operator to project a given positive definite (PD) matrix onto $\mathcal{H}_{\mathbf{u}}^{+}$. Finally, we design a hybrid graphical lasso/projection algorithm to compute a locally optimal inverse Laplacian $\mathcal{L}^{-1} \in \mathcal{H}_{\mathbf{u}}^{+}$ given $\bar{\mathbf{C}}$. We apply our graph learning algorithm in two practical settings: parliamentary voting interpolation and predictive transform coding in image compression. Experiments show that our algorithm outperforms existing graph learning schemes in data-starved scenarios, on both synthetic data and these two applications.
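The convex-cone claim is proved in the paper itself; for intuition only, the closure half of the argument can be checked in one step. If $A$ and $B$ are PSD with $A\mathbf{u}_k = a_k \mathbf{u}_k$ and $B\mathbf{u}_k = b_k \mathbf{u}_k$, then for any $\alpha, \beta \ge 0$,

$$(\alpha A + \beta B)\,\mathbf{u}_k = (\alpha a_k + \beta b_k)\,\mathbf{u}_k, \qquad \mathbf{x}^{\top}(\alpha A + \beta B)\,\mathbf{x} = \alpha\,\mathbf{x}^{\top} A \mathbf{x} + \beta\,\mathbf{x}^{\top} B \mathbf{x} \ge 0,$$

so any conic combination retains every core eigenvector and remains PSD, i.e., it stays in $\mathcal{H}_{\mathbf{u}}^{+}$.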
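The abstract leaves both operators unspecified, but the cone's geometry suggests a direct construction. Below is a minimal Python sketch under stated assumptions: project_to_core_cone and learn_inverse_laplacian are hypothetical names; the projection shown is the generic Frobenius-norm projection onto $\mathcal{H}_{\mathbf{u}}^{+}$ (clip the core eigenvalues $\mathbf{u}_k^{\top} M \mathbf{u}_k$, then eigen-clip the restriction of $M$ to the complement of $\mathrm{span}(\{\mathbf{u}_k\})$), not the paper's Gram-Schmidt-based operator; and the alternating loop is only a stand-in for the hybrid GLASSO/projection algorithm, whose step order and parameters the abstract does not give. graphical_lasso is scikit-learn's.

```python
# A minimal NumPy/scikit-learn sketch; see the hedges in the lead-in above.
import numpy as np
from sklearn.covariance import graphical_lasso


def project_to_core_cone(M, U):
    """Frobenius-norm projection of symmetric M onto the convex cone
    H_u^+ of PSD matrices keeping the orthonormal columns of U
    (the K core eigenvectors) as eigenvectors.

    NOTE: generic orthogonal projection implied by the cone's block
    structure, not the paper's Gram-Schmidt-based operator.
    """
    n, _ = U.shape
    # Core eigenvalues u_k^T M u_k, clipped at 0 to stay PSD.
    lam = np.clip(np.einsum('ik,ij,jk->k', U, M, U), 0.0, None)
    core = U @ np.diag(lam) @ U.T
    # Restriction of M to the orthogonal complement of span(U), eigen-clipped.
    P = np.eye(n) - U @ U.T
    C = P @ M @ P
    C = 0.5 * (C + C.T)  # symmetrize against round-off
    w, V = np.linalg.eigh(C)
    return core + V @ np.diag(np.clip(w, 0.0, None)) @ V.T


def learn_inverse_laplacian(C_bar, U, alpha=0.1, n_iter=20):
    """Illustrative stand-in for the hybrid GLASSO/projection algorithm:
    alternate a sparse-precision (graphical lasso) step with a projection
    of the implied inverse Laplacian onto H_u^+. Step order, stopping
    rule, and alpha are assumptions, not the paper's algorithm.
    """
    n = C_bar.shape[0]
    X = C_bar + 1e-8 * np.eye(n)  # ensure a PD input for graphical_lasso
    for _ in range(n_iter):
        _, Theta = graphical_lasso(X, alpha=alpha)  # sparse precision ~ L
        X = project_to_core_cone(np.linalg.inv(Theta), U)  # candidate L^{-1}
        X += 1e-8 * np.eye(n)  # keep strictly PD for the next GLASSO step
    return X
```

A quick sanity check on the projection: for any symmetric M, project_to_core_cone(M, U) @ U equals U scaled column-wise by the clipped core eigenvalues (the complement block annihilates U), confirming that the columns of U survive as eigenvectors; e.g., test with a random orthonormal U = np.linalg.qr(np.random.randn(n, K))[0].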
Journal introduction:
The IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals. The term “signal” includes, among others, audio, video, speech, image, communication, geophysical, sonar, radar, medical and musical signals. Examples of topics of interest include, but are not limited to, information processing and the theory and application of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals.