Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs.

Kunstliche Intelligenz · IF 2.8 · Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2022-01-01 · DOI: 10.1007/s13218-022-00781-7
Bettina Finzel, Anna Saranti, Alessa Angerschmid, David Tafler, Bastian Pfeifer, Andreas Holzinger
{"title":"图神经网络概念验证的解释生成:在相关度排序子图上学习符号谓词的研究。","authors":"Bettina Finzel,&nbsp;Anna Saranti,&nbsp;Alessa Angerschmid,&nbsp;David Tafler,&nbsp;Bastian Pfeifer,&nbsp;Andreas Holzinger","doi":"10.1007/s13218-022-00781-7","DOIUrl":null,"url":null,"abstract":"<p><p>Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain's and user's perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of the symbolic representations of symmetric and non-symmetric figures that are taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.</p>","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"36 3-4","pages":"271-285"},"PeriodicalIF":2.8000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9794543/pdf/","citationCount":"10","resultStr":"{\"title\":\"Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs.\",\"authors\":\"Bettina Finzel,&nbsp;Anna Saranti,&nbsp;Alessa Angerschmid,&nbsp;David Tafler,&nbsp;Bastian Pfeifer,&nbsp;Andreas Holzinger\",\"doi\":\"10.1007/s13218-022-00781-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain's and user's perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of the symbolic representations of symmetric and non-symmetric figures that are taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. 
Our findings open up a variety of avenues for future research on validatable explanations for GNNs.</p>\",\"PeriodicalId\":45413,\"journal\":{\"name\":\"Kunstliche Intelligenz\",\"volume\":\"36 3-4\",\"pages\":\"271-285\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9794543/pdf/\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Kunstliche Intelligenz\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s13218-022-00781-7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Kunstliche Intelligenz","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s13218-022-00781-7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 10

Abstract

Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning and the validation of their output from an application domain's and user's perspective have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step to perform powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of the symbolic representations of symmetric and non-symmetric figures that are taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and human-expected relevance for concepts learned by GNNs. Our experiments conducted on our benchmark data set demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.
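To make the described pipeline concrete, below is a minimal, self-contained sketch (not the authors' implementation): it assumes per-edge relevance scores are already available from a GNN explainer (e.g., GNNExplainer), keeps only the top-ranked edges as an explanation sub-graph, and serializes that sub-graph as Prolog-style facts an ILP system such as Aleph could generalize. The predicate names (contains/2, has_shape/2, has_color/2, related/2), node attributes, and relevance scores are hypothetical stand-ins, not the paper's actual benchmark format.

```python
# Sketch of: GNN explainer relevance -> relevance-ranked sub-graph -> ILP facts.
# All names and scores are illustrative assumptions, not the paper's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    nid: int
    shape: str   # e.g. "circle", "triangle" (Kandinsky-style objects)
    color: str   # e.g. "red", "blue"

def top_k_subgraph(edges, relevance, k):
    """Rank edges by explainer relevance and keep the k most relevant ones."""
    ranked = sorted(zip(edges, relevance), key=lambda pair: pair[1], reverse=True)
    return [edge for edge, _ in ranked[:k]]

def to_prolog_facts(fig_id, nodes, sub_edges):
    """Turn the relevance-ranked sub-graph into ILP background knowledge."""
    kept = {v for edge in sub_edges for v in edge}
    facts = []
    for n in nodes:
        if n.nid in kept:
            facts.append(f"contains(fig{fig_id}, obj{n.nid}).")
            facts.append(f"has_shape(obj{n.nid}, {n.shape}).")
            facts.append(f"has_color(obj{n.nid}, {n.color}).")
    for src, dst in sub_edges:
        facts.append(f"related(obj{src}, obj{dst}).")
    return facts

# Toy figure: two red circles (the concept-relevant pair) plus a distractor.
nodes = [Node(0, "circle", "red"), Node(1, "circle", "red"),
         Node(2, "triangle", "blue")]
edges = [(0, 1), (1, 2), (0, 2)]
relevance = [0.92, 0.15, 0.08]  # assumed output of a GNN explainer

for fact in to_prolog_facts(1, nodes, top_k_subgraph(edges, relevance, k=1)):
    print(fact)
```

Given such facts for positive and negative example figures, an ILP learner could induce a rule along the lines of `symmetric(F) :- contains(F, A), contains(F, B), has_shape(A, S), has_shape(B, S), has_color(A, C), has_color(B, C)`, which is the kind of comprehensible symbolic concept description the paper extracts from the most relevant explanations.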

[Graphical abstract]


Journal: Kunstliche Intelligenz (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 8.60 · Self-citation rate: 3.40% · Annual publications: 32
About the journal: Artificial Intelligence has successfully established itself as a scientific discipline in research and education and has become an integral part of Computer Science with an interdisciplinary character. AI deals both with the development of information processing systems that deliver "intelligent" services and with the modeling of human cognitive skills with the help of information processing systems. Research, development and applications in the field of AI pursue the general goal of creating processes for taking in and processing information that more closely resemble human problem-solving behavior, and of subsequently using those processes to derive methods that enhance and qualitatively improve conventional information processing systems. KI – Künstliche Intelligenz is the official journal of the division for artificial intelligence within the "Gesellschaft für Informatik e.V." (GI) – the German Informatics Society – with contributions from the entire field of artificial intelligence. The journal presents fundamentals and tools, their use and adaptation for scientific purposes, and applications that are implemented using AI methods, and thus provides readers with the latest developments in, and well-founded background information on, all relevant aspects of artificial intelligence. A highly reputed team of editors from both academia and industry ensures the scientific quality of the articles. The journal gives all members of the AI community quick access to current topics in the field while promoting vital interdisciplinary interchange, and it also serves as a medium of communication between the members of the division and the parent society. The journal is published in English. Content published in this journal is peer reviewed (double blind).
Latest articles in this journal:
- In Search of Basement Indicators from Street View Imagery Data: An Investigation of Data Sources and Analysis Strategies.
- Some Thoughts on AI Stimulated by Michael Wooldridge's Book "The Road to Conscious Machines. The Story of AI".
- A Framework for Learning Event Sequences and Explaining Detected Anomalies in a Smart Home Environment.
- Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs.
- News.