Title: Toward Embedding Ambiguity-Sensitive Graph Neural Network Explainability
Authors: Xiaofeng Liu; Yinglong Ma; Degang Chen; Ling Liu
Journal: IEEE Transactions on Fuzzy Systems, vol. 32, no. 12, pp. 6951-6964
DOI: 10.1109/TFUZZ.2024.3457914
Publication date: 2024-09-27 (Journal Article)
JCR: Q1 (Computer Science, Artificial Intelligence); Impact Factor: 11.9
URL: https://ieeexplore.ieee.org/document/10696966/
Citations: 0
Abstract
Recently, many post hoc graph neural network (GNN) explanation methods have been explored to uncover GNNs' predictive behaviors by analyzing the embeddings produced by the GNN models. However, these methods suffer from explanation ambiguity inherent in learned graph embeddings: aggregation-based embeddings can lose the unique identifiers of individual graph components and thus allow noncausal nodes adjacent to true causal patterns to unintentionally embody causal information in their embeddings, hindering the explanations from faithfully representing the true insights of GNNs' predictive reasoning. In this article, we present an embedding ambiguity-sensitive GNN explanation framework (EAGX). EAGX can effectively mitigate the impact of embedding-induced explanation ambiguity by constructing an ambiguity feature extractor for edges, exploring edges' predictive relevance, and integrating both into the explanation process, thereby capturing each graph component's contribution to the predictions. Specifically, we first propose a centroid-constrained fuzzy c-means algorithm to construct the ambiguity feature extractor. Then, we leverage these edge ambiguity features to develop an ambiguity-based edge attribution module that assigns a prediction relevance score to each edge. Finally, instead of focusing only on the edges with high influence on the GNN prediction, we introduce a joint optimization strategy to refine the learning process of our edge attribution module, empowering EAGX to capture the subtle interplay of both causal and noncausal subgraphs on model predictions, which further improves the explainability of GNN predictions. Experimental results demonstrate that EAGX outperforms the leading explainers on most evaluation metrics, underscoring its effectiveness in generating reliable and precise explanations for GNNs.
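The abstract's key ingredient is a fuzzy c-means clustering step whose soft memberships yield an "ambiguity" signal. The paper's centroid-constrained variant and its edge attribution module are not detailed here, so the sketch below is only an illustration of the underlying idea: standard fuzzy c-means, with per-point ambiguity scored as the normalized entropy of the membership vector. All names (`fuzzy_c_means`, `ambiguity`) are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means. Returns (U, centroids) where U is the
    n-by-c soft membership matrix (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                 # normalize memberships
    for _ in range(iters):
        W = U ** m                                    # fuzzified memberships
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        # distances from every point to every centroid (eps avoids div-by-zero)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        # classic FCM update: u_ik = 1 / sum_j (d_ik / d_jk)^p
        U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
    return U, centroids

def ambiguity(U):
    """Normalized membership entropy in [0, 1]:
    0 = crisp assignment, 1 = maximally ambiguous."""
    c = U.shape[1]
    ent = -np.sum(U * np.log(U + 1e-12), axis=1)
    return ent / np.log(c)
```

Points sitting clearly inside one cluster get near-crisp memberships and ambiguity close to 0, while points between centroids score higher; an edge-level analogue of such a score is what EAGX feeds into its attribution module.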
About the journal:
The IEEE Transactions on Fuzzy Systems is a scholarly journal that focuses on the theory, design, and application of fuzzy systems. It aims to publish high-quality technical papers that contribute significant technical knowledge and exploratory developments in the field of fuzzy systems. The journal particularly emphasizes engineering systems and scientific applications. In addition to research articles, the Transactions also includes a letters section featuring current information, comments, and rebuttals related to published papers.