CiRLExplainer: Causality-Inspired Explainer for Graph Neural Networks via Reinforcement Learning

IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 6, pp. 9970-9984
IF 8.9 · Region 1 (Computer Science) · Q1 Computer Science, Artificial Intelligence
Pub Date: 2025-03-13 · DOI: 10.1109/TNNLS.2025.3543070
URL: https://ieeexplore.ieee.org/document/10925220/
Wenya Hu; Jia Wu; Quan Qian
Citations: 0

Abstract

In this article, we propose a new graph neural network (GNN) explainability model, CiRLExplainer, which elucidates GNN predictions from a causal-attribution perspective. First, a causal graph is constructed to analyze the causal relationships between the graph structure and the GNN's predictions, identifying node attributes as a confounder between the two. A backdoor adjustment strategy is then employed to block this confounding. Because the edges of a graph are not independent, reinforcement learning is incorporated: an explanatory subgraph is generated through a sequential selection process in which each step evaluates the joint effect of a candidate edge and the previously selected structure. Specifically, a policy network predicts the probability of each candidate edge being selected and adds a new edge by sampling. The causal effect of this action is quantified as a reward, capturing the interactions among edges. Training maximizes the expected return of the edge sequence via policy gradients. CiRLExplainer is model-agnostic and can be applied to any GNN. A series of experiments was conducted, including accuracy (ACC) analysis of the explanation results, visualization of the explanatory subgraphs, and ablation studies on treating node attributes as confounders. The results show that our model not only outperforms current state-of-the-art explanation techniques but also provides precise semantic explanations from a causal perspective. The experiments further validate the rationale for treating node attributes as confounders, which enhances both the explanatory power and the ACC of the model. Notably, across different datasets, our explainer improves on the best baseline models by 5.89%, 5.69%, and 4.87% on the ACC-area-under-the-curve (AUC) metric.
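The backdoor adjustment the abstract refers to is, in its general form, Pearl's adjustment formula. Reading the abstract's causal graph with explanatory structure S, prediction Y, and node attributes C as the confounder between them (the variable names are our labeling, not the paper's notation), it would instantiate as:

```latex
% Backdoor adjustment (Pearl): with node attributes C confounding the
% explanatory structure S and the prediction Y, the causal effect of S on Y is
P\bigl(Y \mid \mathrm{do}(S)\bigr) \;=\; \sum_{c} P\bigl(Y \mid S,\, C = c\bigr)\, P\bigl(C = c\bigr)
```

Intervening on S via do(S) severs C's influence on the structure, so averaging over the confounder's distribution recovers the causal rather than the merely associational effect.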
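The sequential edge-selection loop described above (policy network scores candidate edges, one edge is sampled per step, a causal-effect reward is collected, and the return is maximized by policy gradient) can be sketched with plain REINFORCE. This is a minimal illustration under stated assumptions, not the paper's implementation: the policy is a toy linear-softmax over per-edge feature vectors, and `causal_reward` is a hypothetical stand-in for the causal-effect score, which in the real model would come from re-querying the GNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(scores):
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

class EdgePolicy:
    """Toy linear-softmax policy scoring candidate edges by their features."""
    def __init__(self, n_features, lr=0.5):
        self.w = np.zeros(n_features)
        self.lr = lr

    def probs(self, edge_feats, mask):
        scores = edge_feats @ self.w
        scores[~mask] = -np.inf        # already-picked edges are excluded
        return softmax(scores)

def causal_reward(edge, target=frozenset({0, 2})):
    """Hypothetical stand-in for the causal-effect reward: here a fixed
    'ground-truth' explanatory edge set is rewarded; the real model would
    instead measure the edge's causal effect on the GNN prediction."""
    return 1.0 if edge in target else -0.1

def run_episode(policy, edge_feats, budget=2):
    mask = np.ones(len(edge_feats), dtype=bool)
    selected, grads, rewards = [], [], []
    for _ in range(budget):
        p = policy.probs(edge_feats, mask)
        i = rng.choice(len(p), p=p)    # sample the next edge to add
        mask[i] = False
        selected.append(i)
        # grad of log pi(i) for a linear softmax: x_i - E_p[x]
        grads.append(edge_feats[i] - p @ edge_feats)
        rewards.append(causal_reward(i))
    return selected, grads, rewards

def train(policy, edge_feats, episodes=600, budget=2):
    for _ in range(episodes):
        selected, grads, rewards = run_episode(policy, edge_feats, budget)
        ret = 0.0
        for g, r in zip(reversed(grads), reversed(rewards)):
            ret += r                   # return-to-go from this step onward
            policy.w += policy.lr * ret * g   # REINFORCE ascent step
    return selected

edge_feats = np.eye(5)                 # 5 candidate edges, one-hot features
policy = EdgePolicy(n_features=5)
last_selected = train(policy, edge_feats)
print("learned edge weights:", np.round(policy.w, 2))
```

After training, the policy concentrates its probability mass on the rewarded edges, mirroring how maximizing the return of the edge sequence steers the explainer toward the subgraph with the largest causal effect.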
Source Journal

IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Publication volume: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.