GANExplainer: Explainability Method for Graph Neural Network with Generative Adversarial Nets

Xinrui Kang, Dong Liang, Qinfeng Li
{"title":"GANExplainer: Explainability Method for Graph Neural Network with Generative Adversarial Nets","authors":"Xinrui Kang, Dong Liang, Qinfeng Li","doi":"10.1145/3581807.3581850","DOIUrl":null,"url":null,"abstract":"In recent years, graph neural networks (GNNs) have achieved encouraging performance in the processing of graph data generated in non-Euclidean space. GNNs learn node features by aggregating and combining neighbor information, which is applied to many graphics tasks. However, the complex deep learning structure is still regarded as a black box, which is difficult to obtain the full trust of human beings. Due to the lack of interpretability, the application of graph neural network is greatly limited. Therefore, we propose an interpretable method, called GANExplainer, to explain GNNs at the model level. Our method can implicitly generate the characteristic subgraph of the graph without relying on specific input examples as the interpretation of the model to the data. GANExplainer relies on the framework of generative-adversarial method to train the generator and discriminator at the same time. More importantly, when constructing the discriminator, the corresponding graph rules are added to ensure the effectiveness of the generated characteristic subgraph. We carried out experiments on synthetic dataset and chemical molecules dataset and verified the effect of our method on model level interpreter from three aspects: accuracy, fidelity and sparsity.","PeriodicalId":292813,"journal":{"name":"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3581807.3581850","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In recent years, graph neural networks (GNNs) have achieved encouraging performance in processing graph data that lives in non-Euclidean space. GNNs learn node features by aggregating and combining neighbor information, and they are applied to many graph tasks. However, these complex deep learning models are still regarded as black boxes, which makes it difficult for them to earn full human trust. This lack of interpretability greatly limits the application of graph neural networks. We therefore propose an interpretation method, called GANExplainer, that explains GNNs at the model level. Our method implicitly generates the characteristic subgraph of a graph class, without relying on specific input examples, as the model's interpretation of the data. GANExplainer relies on the generative-adversarial framework to train a generator and a discriminator at the same time. More importantly, when constructing the discriminator, corresponding graph rules are added to ensure the validity of the generated characteristic subgraphs. We carried out experiments on synthetic and chemical-molecule datasets and verified the effectiveness of our method as a model-level interpreter from three aspects: accuracy, fidelity, and sparsity.
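The abstract describes a generator and a discriminator trained jointly, with graph rules built into the discriminator side so that the generated characteristic subgraphs stay valid. The PyTorch sketch below illustrates one way such a setup could look; the module names, network sizes, fixed node count, and the degree-based rule penalty are all illustrative assumptions rather than the authors' released implementation.

```python
# Minimal sketch of an adversarial, model-level GNN explainer in PyTorch.
# Names, sizes, and the rule penalty are assumptions, not the paper's code.
import torch
import torch.nn as nn

N = 10        # number of nodes in the generated explanation graph (assumed fixed)
LATENT = 16   # dimensionality of the generator's noise input

class Generator(nn.Module):
    """Maps a noise vector to edge probabilities of a candidate subgraph."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(LATENT, 64), nn.ReLU(),
            nn.Linear(64, N * N),
        )

    def forward(self, z):
        logits = self.mlp(z).view(-1, N, N)
        logits = (logits + logits.transpose(1, 2)) / 2  # undirected graph
        return torch.sigmoid(logits)                    # soft adjacency matrix

class Discriminator(nn.Module):
    """Scores a (soft) adjacency matrix; stands in for the judgment of
    'class-representative graph or not' that guides the generator."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(N * N, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, adj):
        return self.mlp(adj.view(adj.size(0), -1))

def degree_rule_penalty(adj, max_degree=4):
    """Illustrative graph rule: discourage node degrees above a valence-like
    limit, so generated explanations remain structurally plausible."""
    deg = adj.sum(dim=-1)
    return torch.relu(deg - max_degree).mean()

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# `real_graphs` would be adjacency matrices of class-representative graphs
# from the dataset; a random placeholder keeps the sketch runnable.
real_graphs = torch.bernoulli(torch.full((32, N, N), 0.2))

for step in range(200):
    z = torch.randn(32, LATENT)

    # Discriminator step: separate real graphs from generated ones.
    fake = gen(z).detach()
    d_loss = bce(disc(real_graphs), torch.ones(32, 1)) + \
             bce(disc(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator while obeying the graph rule.
    fake = gen(z)
    g_loss = bce(disc(fake), torch.ones(32, 1)) + degree_rule_penalty(fake)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Thresholding the generator's output yields a characteristic subgraph that
# can serve as a model-level explanation.
explanation = (gen(torch.randn(1, LATENT)) > 0.5).float()
```

In this sketch the rule penalty is added to the generator's loss for simplicity; the paper instead attaches the graph rules to the discriminator, but the effect in either case is to steer the generated characteristic subgraphs toward valid structures.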