Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs
{"title":"图神经网络概念验证的解释生成:在相关度排序子图上学习符号谓词的研究。","authors":"Bettina Finzel, Anna Saranti, Alessa Angerschmid, David Tafler, Bastian Pfeifer, Andreas Holzinger","doi":"10.1007/s13218-022-00781-7","DOIUrl":null,"url":null,"abstract":"<p><p>Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain's and user's perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of the symbolic representations of symmetric and non-symmetric figures that are taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.</p>","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"36 3-4","pages":"271-285"},"PeriodicalIF":2.8000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9794543/pdf/","citationCount":"10","resultStr":"{\"title\":\"Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs.\",\"authors\":\"Bettina Finzel, Anna Saranti, Alessa Angerschmid, David Tafler, Bastian Pfeifer, Andreas Holzinger\",\"doi\":\"10.1007/s13218-022-00781-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain's and user's perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of the symbolic representations of symmetric and non-symmetric figures that are taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. 
Our findings open up a variety of avenues for future research on validatable explanations for GNNs.</p>\",\"PeriodicalId\":45413,\"journal\":{\"name\":\"Kunstliche Intelligenz\",\"volume\":\"36 3-4\",\"pages\":\"271-285\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9794543/pdf/\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Kunstliche Intelligenz\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s13218-022-00781-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Kunstliche Intelligenz","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s13218-022-00781-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Bettina Finzel, Anna Saranti, Alessa Angerschmid, David Tafler, Bastian Pfeifer, Andreas Holzinger
Graph Neural Networks (GNNs) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain's and user's perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of the symbolic representations of symmetric and non-symmetric figures that are taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.
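The pipeline described in the abstract can be pictured with a minimal, hypothetical sketch: a GNN explainer assigns relevance scores to edges, the highest-ranked edges form a relevance-ranked sub-graph, and that sub-graph is rewritten as symbolic facts from which an ILP system could learn predicates. The Python below is not the authors' implementation; the relevance scores, object attributes, and predicate names (contains/2, has_shape/2, has_color/2, related/2) are illustrative placeholders loosely modelled on Kandinsky-Pattern figures.

```python
# Illustrative sketch (not the authors' code): turn edge relevance scores
# from a GNN explainer into a relevance-ranked sub-graph and emit
# Prolog-style facts that an ILP learner could use as background knowledge.
from typing import Dict, List, Tuple

# Hypothetical explainer output: edge -> relevance score in [0, 1].
edge_relevance: Dict[Tuple[str, str], float] = {
    ("obj1", "obj2"): 0.92,
    ("obj1", "obj3"): 0.15,
    ("obj2", "obj4"): 0.78,
    ("obj3", "obj4"): 0.05,
}

# Hypothetical node attributes (shape, colour) of the objects in one figure.
node_attrs: Dict[str, Dict[str, str]] = {
    "obj1": {"shape": "circle", "color": "red"},
    "obj2": {"shape": "circle", "color": "red"},
    "obj3": {"shape": "square", "color": "blue"},
    "obj4": {"shape": "triangle", "color": "blue"},
}


def top_k_subgraph(relevance: Dict[Tuple[str, str], float], k: int) -> List[Tuple[str, str]]:
    """Keep the k most relevant edges, i.e. the relevance-ranked sub-graph."""
    ranked = sorted(relevance.items(), key=lambda item: item[1], reverse=True)
    return [edge for edge, _score in ranked[:k]]


def to_facts(figure: str, edges: List[Tuple[str, str]]) -> List[str]:
    """Translate the sub-graph into Prolog-style facts for an ILP system."""
    facts: List[str] = []
    nodes = {node for edge in edges for node in edge}
    for node in sorted(nodes):
        facts.append(f"contains({figure},{node}).")
        facts.append(f"has_shape({node},{node_attrs[node]['shape']}).")
        facts.append(f"has_color({node},{node_attrs[node]['color']}).")
    for src, dst in edges:
        facts.append(f"related({src},{dst}).")
    return facts


if __name__ == "__main__":
    subgraph = top_k_subgraph(edge_relevance, k=2)
    for fact in to_facts("fig1", subgraph):
        print(fact)
```

Under these assumptions, the emitted facts describe only the most relevant part of the graph, so a symbolic rule induced from them (e.g. that a symmetric figure contains two objects of the same shape and colour) can be compared against what a domain expert would expect the GNN to have learned.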
Journal description:
Artificial Intelligence has successfully established itself as a scientific discipline in research and education and has become an integral part of Computer Science with an interdisciplinary character. AI deals both with the development of information processing systems that deliver "intelligent" services and with the modeling of human cognitive skills with the help of information processing systems. Research, development and applications in the field of AI pursue the general goal of creating processes for taking in and processing information that more closely resemble human problem-solving behavior, and of subsequently using those processes to derive methods that enhance and qualitatively improve conventional information processing systems.

KI – Künstliche Intelligenz is the official journal of the division for artificial intelligence within the Gesellschaft für Informatik e.V. (GI), the German Informatics Society, with contributions from the entire field of artificial intelligence. The journal presents fundamentals and tools, their use and adaptation for scientific purposes, and applications that are implemented using AI methods, and thus provides readers with the latest developments in, and well-founded background information on, all relevant aspects of artificial intelligence. A highly reputed team of editors from both academia and industry ensures the scientific quality of the articles.

The journal gives all members of the AI community quick access to current topics in the field while also promoting vital interdisciplinary interchange; it also serves as a medium of communication between the members of the division and the parent society. The journal is published in English. Content published in this journal is peer reviewed (double blind).