Learning Personalized Scoping for Graph Neural Networks under Heterophily

Gangda Deng, Hongkuan Zhou, Rajgopal Kannan, Viktor Prasanna
{"title":"Learning Personalized Scoping for Graph Neural Networks under Heterophily","authors":"Gangda Deng, Hongkuan Zhou, Rajgopal Kannan, Viktor Prasanna","doi":"arxiv-2409.06998","DOIUrl":null,"url":null,"abstract":"Heterophilous graphs, where dissimilar nodes tend to connect, pose a\nchallenge for graph neural networks (GNNs) as their superior performance\ntypically comes from aggregating homophilous information. Increasing the GNN\ndepth can expand the scope (i.e., receptive field), potentially finding\nhomophily from the higher-order neighborhoods. However, uniformly expanding the\nscope results in subpar performance since real-world graphs often exhibit\nhomophily disparity between nodes. An ideal way is personalized scopes,\nallowing nodes to have varying scope sizes. Existing methods typically add\nnode-adaptive weights for each hop. Although expressive, they inevitably suffer\nfrom severe overfitting. To address this issue, we formalize personalized\nscoping as a separate scope classification problem that overcomes GNN\noverfitting in node classification. Specifically, we predict the optimal GNN\ndepth for each node. Our theoretical and empirical analysis suggests that\naccurately predicting the depth can significantly enhance generalization. We\nfurther propose Adaptive Scope (AS), a lightweight MLP-based approach that only\nparticipates in GNN inference. AS encodes structural patterns and predicts the\ndepth to select the best model for each node's prediction. Experimental results\nshow that AS is highly flexible with various GNN architectures across a wide\nrange of datasets while significantly improving accuracy.","PeriodicalId":501032,"journal":{"name":"arXiv - CS - Social and Information Networks","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Social and Information Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06998","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Heterophilous graphs, where dissimilar nodes tend to connect, pose a challenge for graph neural networks (GNNs), whose superior performance typically comes from aggregating homophilous information. Increasing the GNN depth can expand the scope (i.e., receptive field), potentially finding homophily in higher-order neighborhoods. However, uniformly expanding the scope results in subpar performance, since real-world graphs often exhibit homophily disparity between nodes. An ideal solution is personalized scoping, which allows nodes to have varying scope sizes. Existing methods typically add node-adaptive weights for each hop. Although expressive, they inevitably suffer from severe overfitting. To address this issue, we formalize personalized scoping as a separate scope classification problem that overcomes GNN overfitting in node classification. Specifically, we predict the optimal GNN depth for each node. Our theoretical and empirical analysis suggests that accurately predicting the depth can significantly enhance generalization. We further propose Adaptive Scope (AS), a lightweight MLP-based approach that only participates in GNN inference. AS encodes structural patterns and predicts the depth to select the best model for each node's prediction. Experimental results show that AS is highly flexible with various GNN architectures across a wide range of datasets while significantly improving accuracy.
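
To make the idea concrete, below is a minimal sketch (not the authors' released code) of how a scope classifier could route nodes between GNNs of different depths: an MLP maps per-node structural features to a depth class, and each node's final prediction is taken from the pre-trained GNN of that depth. All names here (`ScopeClassifier`, `select_by_depth`, the structural feature encoding) are illustrative assumptions; the paper's actual architecture, features, and training procedure may differ.

```python
# Hypothetical sketch of personalized scoping via a depth classifier.
# Assumes we already have per-node class logits from pre-trained GNNs of depths 1..D.
import torch
import torch.nn as nn


class ScopeClassifier(nn.Module):
    """MLP mapping per-node structural features to a choice among D candidate depths."""

    def __init__(self, in_dim: int, hidden_dim: int, max_depth: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, max_depth),
        )

    def forward(self, structural_features: torch.Tensor) -> torch.Tensor:
        # Logits over candidate depths, shape [num_nodes, max_depth].
        return self.mlp(structural_features)


def select_by_depth(per_depth_logits: list, depth_choice: torch.Tensor) -> torch.Tensor:
    """For each node, pick the class logits produced by the GNN of its chosen depth.

    per_depth_logits: list of [num_nodes, num_classes] tensors, one per depth.
    depth_choice:     [num_nodes] tensor of 0-based depth indices.
    """
    stacked = torch.stack(per_depth_logits, dim=1)                  # [N, D, C]
    idx = depth_choice.view(-1, 1, 1).expand(-1, 1, stacked.size(-1))
    return stacked.gather(1, idx).squeeze(1)                        # [N, C]


if __name__ == "__main__":
    num_nodes, num_classes, max_depth, feat_dim = 8, 3, 4, 16
    # Stand-ins for outputs of pre-trained GNNs with depths 1..max_depth.
    per_depth_logits = [torch.randn(num_nodes, num_classes) for _ in range(max_depth)]
    # Stand-in for structural encodings (e.g., degree or local homophily estimates).
    structural_features = torch.randn(num_nodes, feat_dim)

    scope_clf = ScopeClassifier(feat_dim, 32, max_depth)
    depth_choice = scope_clf(structural_features).argmax(dim=-1)    # [N]
    final_logits = select_by_depth(per_depth_logits, depth_choice)
    print(final_logits.shape)  # torch.Size([8, 3])
```

Because the classifier only selects among already-trained GNNs at inference time, it adds a small MLP's worth of parameters rather than per-hop node-adaptive weights, which is the property the abstract credits for avoiding overfitting.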