{"title":"PieClam: A Universal Graph Autoencoder Based on Overlapping Inclusive and Exclusive Communities","authors":"Daniel Zilberg, Ron Levie","doi":"arxiv-2409.11618","DOIUrl":null,"url":null,"abstract":"We propose PieClam (Prior Inclusive Exclusive Cluster Affiliation Model): a\nprobabilistic graph model for representing any graph as overlapping generalized\ncommunities. Our method can be interpreted as a graph autoencoder: nodes are\nembedded into a code space by an algorithm that maximizes the log-likelihood of\nthe decoded graph, given the input graph. PieClam is a community affiliation\nmodel that extends well-known methods like BigClam in two main manners. First,\ninstead of the decoder being defined via pairwise interactions between the\nnodes in the code space, we also incorporate a learned prior on the\ndistribution of nodes in the code space, turning our method into a graph\ngenerative model. Secondly, we generalize the notion of communities by allowing\nnot only sets of nodes with strong connectivity, which we call inclusive\ncommunities, but also sets of nodes with strong disconnection, which we call\nexclusive communities. To model both types of communities, we propose a new\ntype of decoder based the Lorentz inner product, which we prove to be much more\nexpressive than standard decoders based on standard inner products or norm\ndistances. By introducing a new graph similarity measure, that we call the log\ncut distance, we show that PieClam is a universal autoencoder, able to\nuniformly approximately reconstruct any graph. Our method is shown to obtain\ncompetitive performance in graph anomaly detection benchmarks.","PeriodicalId":501340,"journal":{"name":"arXiv - STAT - Machine Learning","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11618","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We propose PieClam (Prior Inclusive Exclusive Cluster Affiliation Model): a probabilistic graph model that represents any graph as a set of overlapping generalized communities. Our method can be interpreted as a graph autoencoder: nodes are embedded into a code space by an algorithm that maximizes the log-likelihood of the decoded graph given the input graph. PieClam is a community affiliation model that extends well-known methods such as BigClam in two main ways. First, in addition to defining the decoder via pairwise interactions between nodes in the code space, we incorporate a learned prior on the distribution of nodes in the code space, turning our method into a graph generative model. Second, we generalize the notion of a community by allowing not only sets of nodes with strong connectivity, which we call inclusive communities, but also sets of nodes with strong disconnection, which we call exclusive communities. To model both types of communities, we propose a new type of decoder based on the Lorentz inner product, which we prove to be much more expressive than standard decoders based on standard inner products or norm distances. By introducing a new graph similarity measure, which we call the log cut distance, we show that PieClam is a universal autoencoder, able to uniformly approximately reconstruct any graph. Our method achieves competitive performance on graph anomaly detection benchmarks.
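The sketch below illustrates the kind of decoders the abstract contrasts: a BigClam-style decoder, where the edge probability is driven by a standard inner product of nonnegative affiliation vectors, and an indefinite (Lorentz-type) variant in which some coordinates carry a negative sign, so shared "exclusive" coordinates suppress the edge probability. The signature split, the clipping rule, and the function names are illustrative assumptions for exposition; the exact PieClam decoder is defined in the paper.

```python
import numpy as np

def bigclam_edge_prob(f_u, f_v):
    """BigClam-style decoder: nonnegative affiliation vectors,
    edge probability 1 - exp(-<f_u, f_v>)."""
    return 1.0 - np.exp(-np.dot(f_u, f_v))

def lorentz_edge_prob(f_u, f_v, num_inclusive):
    """Illustrative indefinite (Lorentz-type) decoder: the first
    `num_inclusive` coordinates model inclusive communities (positive sign),
    the remaining coordinates model exclusive communities (negative sign).
    This is a hypothetical sketch, not the exact PieClam decoder."""
    signs = np.ones_like(f_u)
    signs[num_inclusive:] = -1.0
    score = np.dot(f_u * signs, f_v)
    # Clip the score from below so the probability stays in [0, 1).
    return 1.0 - np.exp(-max(score, 0.0))

# Toy usage: two nodes that share inclusive communities, but whose shared
# exclusive coordinate reduces the edge probability under the Lorentz decoder.
f_u = np.array([1.2, 0.1, 0.8])
f_v = np.array([0.9, 0.3, 0.7])
print(bigclam_edge_prob(f_u[:2], f_v[:2]))          # inner-product decoder
print(lorentz_edge_prob(f_u, f_v, num_inclusive=2))  # indefinite variant
```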