{"title":"Spectral approximation of Gaussian random graph Laplacians and applications to pattern recognition","authors":"Rajeev Airani , Sachin Kamble","doi":"10.1016/j.patcog.2025.111555","DOIUrl":null,"url":null,"abstract":"<div><div>The spectral decomposition of Gaussian Random Graph Laplacian (GRGLs) is at the core of the solutions to many graph-based problems. Most prevalent are graph signal processing, graph matching, and graph learning problems. Proposed here is the Eigen Approximation Theorem (EAT), which states that the diagonal entries of a GRGL matrix are reliable empirical approximations of its eigenvalues, given certain general conditions. This theorem provides a more precise bound for eigenvalues in a subspace derived from the Courant–Fischer min–max theorem. Consequently, the <span><math><mi>k</mi></math></span>th eigenvalue and eigenvector of a GRGL can be computed efficiently using deflated power iteration. Simulation results demonstrate the accuracy and computational speed of the EAT application. Hence, it can solve problems involving GRGLs like graph signal processing, graph matching, and graph learning. The EAT can also be used directly when approximations to spectral decomposition suffice. The real-time applications are also demonstrated.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"164 ","pages":"Article 111555"},"PeriodicalIF":7.5000,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325002158","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
The spectral decomposition of Gaussian Random Graph Laplacians (GRGLs) lies at the core of solutions to many graph-based problems, most prevalently graph signal processing, graph matching, and graph learning. Proposed here is the Eigen Approximation Theorem (EAT), which states that, under certain general conditions, the diagonal entries of a GRGL matrix are reliable empirical approximations of its eigenvalues. This theorem provides a tighter bound for eigenvalues in a subspace than the one derived from the Courant–Fischer min–max theorem. Consequently, the kth eigenvalue and eigenvector of a GRGL can be computed efficiently using deflated power iteration. Simulation results demonstrate the accuracy and computational speed of applying the EAT. Hence, it can be applied to problems involving GRGLs, such as graph signal processing, graph matching, and graph learning. The EAT can also be used directly when an approximation to the spectral decomposition suffices. Real-time applications are also demonstrated.
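To make the computational route described in the abstract concrete, below is a minimal NumPy sketch of deflated power iteration on a graph Laplacian, with the diagonal entries used as rough eigenvalue estimates in the spirit of the EAT. This is not the authors' implementation: the helper names (graph_laplacian, deflated_power_iteration), the Gaussian-weight graph used as a stand-in for a GRGL, and the iteration counts and tolerances are all illustrative assumptions.

```python
import numpy as np

def graph_laplacian(A):
    # Unnormalized graph Laplacian L = D - A for a symmetric weight matrix A.
    return np.diag(A.sum(axis=1)) - A

def deflated_power_iteration(L, k, iters=1000, tol=1e-10):
    # Computes the k largest eigenpairs of a symmetric PSD matrix by power
    # iteration, deflating each converged eigen-direction before the next run.
    n = L.shape[0]
    rng = np.random.default_rng(0)
    M = L.astype(float)                   # working copy that gets deflated
    eigvals, eigvecs = [], []
    for _ in range(k):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = M @ v
            norm_w = np.linalg.norm(w)
            if norm_w < tol:              # M fully deflated; nothing left
                break
            w /= norm_w
            if np.linalg.norm(w - v) < tol:
                v = w
                break
            v = w
        lam = v @ L @ v                   # Rayleigh quotient with the original L
        eigvals.append(lam)
        eigvecs.append(v)
        M = M - lam * np.outer(v, v)      # remove the found component (deflation)
    return np.array(eigvals), np.column_stack(eigvecs)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n = 200
    # Hypothetical stand-in for a Gaussian random graph: symmetric nonnegative
    # Gaussian weights with a zero diagonal (not the paper's exact construction).
    W = rng.standard_normal((n, n))
    A = np.abs(W + W.T) / 2.0
    np.fill_diagonal(A, 0.0)
    L = graph_laplacian(A)

    k = 5
    diag_estimates = np.sort(np.diag(L))[::-1][:k]      # EAT-style approximations
    power_vals, _ = deflated_power_iteration(L, k)
    exact = np.sort(np.linalg.eigvalsh(L))[::-1][:k]    # dense reference solution

    print("diagonal estimates :", np.round(diag_estimates, 3))
    print("deflated power iter:", np.round(np.sort(power_vals)[::-1], 3))
    print("exact (eigvalsh)   :", np.round(exact, 3))
```

The dense eigvalsh call is included only as a reference for comparison; the point of the deflated iteration is to recover just the leading k eigenpairs without computing the full decomposition.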
Journal Introduction
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.