Edge-Based Graph Component Pooling

T. Snelleman, B. M. Renting, H. H. Hoos, J. N. van Rijn
{"title":"Edge-Based Graph Component Pooling","authors":"T. Snelleman, B. M. Renting, H. H. Hoos, J. N. van Rijn","doi":"arxiv-2409.11856","DOIUrl":null,"url":null,"abstract":"Graph-structured data naturally occurs in many research fields, such as\nchemistry and sociology. The relational information contained therein can be\nleveraged to statistically model graph properties through geometrical deep\nlearning. Graph neural networks employ techniques, such as message-passing\nlayers, to propagate local features through a graph. However, message-passing\nlayers can be computationally expensive when dealing with large and sparse\ngraphs. Graph pooling operators offer the possibility of removing or merging\nnodes in such graphs, thus lowering computational costs. However, pooling\noperators that remove nodes cause data loss, and pooling operators that merge\nnodes are often computationally expensive. We propose a pooling operator that\nmerges nodes so as not to cause data loss but is also conceptually simple and\ncomputationally inexpensive. We empirically demonstrate that the proposed\npooling operator performs statistically significantly better than edge pool on\nfour popular benchmark datasets while reducing time complexity and the number\nof trainable parameters by 70.6% on average. 
Compared to another maximally\npowerful method named Graph Isomporhic Network, we show that we outperform them\non two popular benchmark datasets while reducing the number of learnable\nparameters on average by 60.9%.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11856","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Graph-structured data naturally occurs in many research fields, such as chemistry and sociology. The relational information contained therein can be leveraged to statistically model graph properties through geometric deep learning. Graph neural networks employ techniques, such as message-passing layers, to propagate local features through a graph. However, message-passing layers can be computationally expensive when dealing with large and sparse graphs. Graph pooling operators offer the possibility of removing or merging nodes in such graphs, thus lowering computational costs. However, pooling operators that remove nodes cause data loss, and pooling operators that merge nodes are often computationally expensive. We propose a pooling operator that merges nodes so as not to cause data loss, yet is conceptually simple and computationally inexpensive. We empirically demonstrate that the proposed pooling operator performs statistically significantly better than edge pool on four popular benchmark datasets while reducing time complexity and the number of trainable parameters by 70.6% on average. Compared to another maximally powerful method, the Graph Isomorphism Network, we show that we outperform it on two popular benchmark datasets while reducing the number of learnable parameters by 60.9% on average.
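The abstract does not spell out the pooling algorithm, but the family of operators it builds on (edge pool) works by scoring edges and contracting the highest-scoring ones, merging both endpoint features so nothing is discarded. The following is a minimal illustrative sketch of that idea, not the authors' method: the function name `edge_contract_pool` and the use of summation to merge features are assumptions for illustration, and in a real model `scores` would come from a learned scoring layer rather than being supplied directly.

```python
import numpy as np

def edge_contract_pool(x, edge_index, scores):
    """Greedily contract the highest-scoring edges, merging the two
    endpoint features by summation so no node information is dropped.

    x          : (num_nodes, feat_dim) node feature matrix
    edge_index : (num_edges, 2) array of undirected edges
    scores     : (num_edges,) edge scores (learned, in a real model)
    """
    num_nodes = x.shape[0]
    cluster = np.arange(num_nodes)           # each node starts in its own cluster
    taken = np.zeros(num_nodes, dtype=bool)  # nodes already part of a contraction

    # Visit edges from highest to lowest score; contract an edge only
    # when neither endpoint has been merged yet (a maximal matching).
    for e in np.argsort(-scores):
        u, v = edge_index[e]
        if not taken[u] and not taken[v]:
            cluster[v] = cluster[u]
            taken[u] = taken[v] = True

    # Relabel clusters to 0..k-1 and sum node features within each cluster.
    uniq, new_ids = np.unique(cluster, return_inverse=True)
    pooled = np.zeros((len(uniq), x.shape[1]))
    np.add.at(pooled, new_ids, x)
    return pooled, new_ids
```

For example, on a four-node graph with edges (0,1) and (2,3), both edges are contracted and the result is a two-node graph whose features are the sums of each merged pair, so the pooled graph retains all of the original feature mass.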