
Knowledge Graphs for eXplainable Artificial Intelligence: Latest Publications

Knowledge Graph Embeddings and Explainable AI
Pub Date: 2020-04-30 DOI: 10.3233/SSW200011
Federico Bianchi, Gaetano Rossiello, Luca Costabello, M. Palmonari, Pasquale Minervini
Knowledge graph embeddings are now a widely adopted approach to knowledge representation in which entities and relationships are embedded in vector spaces. In this chapter, we introduce the reader to the concept of knowledge graph embeddings by explaining what they are, how they can be generated and how they can be evaluated. We summarize the state-of-the-art in this field by describing the approaches that have been introduced to represent knowledge in the vector space. In relation to knowledge representation, we consider the problem of explainability, and discuss models and methods for explaining predictions obtained via knowledge graph embeddings.
Citations: 69
Foundations of Explainable Knowledge-Enabled Systems
Pub Date: 2020-03-17 DOI: 10.3233/SSW200010
Shruthi Chari, Daniel Gruen, O. Seneviratne, D. McGuinness
Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.
Citations: 25
Neuro-symbolic Architectures for Context Understanding
Pub Date: 2020-03-09 DOI: 10.3233/SSW200016
A. Oltramari, Jonathan M Francis, C. Henson, Kaixin Ma, Ruwan Wickramarachchi
Computational context understanding refers to an agent's ability to fuse disparate sources of information for decision-making and is, therefore, generally regarded as a prerequisite for sophisticated machine reasoning capabilities, such as in artificial intelligence (AI). Data-driven and knowledge-driven methods are two classical techniques in the pursuit of such machine sense-making capability. However, while data-driven methods seek to model the statistical regularities of events by making observations in the real world, they remain difficult to interpret and they lack mechanisms for naturally incorporating external knowledge. Conversely, knowledge-driven methods combine structured knowledge bases, perform symbolic reasoning based on axiomatic principles, and are more interpretable in their inferential processing; however, they often lack the ability to estimate the statistical salience of an inference. To combat these issues, we propose the use of hybrid AI methodology as a general framework for combining the strengths of both approaches. Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning progress of deep neural networks. We further ground our discussion in two applications of neuro-symbolism and, in both cases, show that our systems maintain interpretability while achieving comparable performance, relative to the state-of-the-art.
Citations: 17
Differentiable Reasoning on Large Knowledge Bases and Natural Language
Pub Date: 2019-12-17 DOI: 10.1609/AAAI.V34I04.5962
Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel, Edward Grefenstette
Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at this https URL.
Citations: 79
Knowledge Representation and Reasoning Methods to Explain Errors in Machine Learning
DOI: 10.3233/SSW200017
Marjan Alirezaie, Martin Längkvist, A. Loutfi
{"title":"Knowledge Representation and Reasoning Methods to Explain Errors in Machine Learning","authors":"Marjan Alirezaie, Martin Längkvist, A. Loutfi","doi":"10.3233/SSW200017","DOIUrl":"https://doi.org/10.3233/SSW200017","url":null,"abstract":"","PeriodicalId":331476,"journal":{"name":"Knowledge Graphs for eXplainable Artificial Intelligence","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126745154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Knowledge-Aware Interpretable Recommender Systems
DOI: 10.3233/SSW200014
V. W. Anelli, Vito Bellini, T. D. Noia, E. Sciascio
Recommender systems are everywhere, from e-commerce to streaming platforms. They help users lost in the maze of available information, items and services to find their way. Among them, over the years, approaches based on machine learning techniques have shown particularly good performance for top-N recommendation engines. Unfortunately, they mostly behave as black boxes and, even when they embed some form of description of the items to recommend, after the training phase they move such descriptions into a latent space, thus losing the actual explicit semantics of the recommended items. As a consequence, system designers struggle to provide satisfying explanations for the recommendation list presented to the end user. In this chapter, we describe two approaches to recommendation which make use of the semantics encoded in a knowledge graph to train interpretable models that keep the original semantics of the item descriptions, thus providing a powerful tool to automatically compute explainable results. The two methods rely on two completely different machine learning algorithms, namely factorization machines and autoencoder neural networks. We also show how to measure the interpretability of the model through the introduction of two metrics: semantic accuracy and robustness.
Citations: 5
Explanations in Predictive Analytics: Case Studies
DOI: 10.3233/SSW200019
Jiewen Wu, Minh-Thuan Nguyen, G. Ngo, Nancy F. Chen
{"title":"Explanations in Predictive Analytics: Case Studies","authors":"Jiewen Wu, Minh-Thuan Nguyen, G. Ngo, Nancy F. Chen","doi":"10.3233/SSW200019","DOIUrl":"https://doi.org/10.3233/SSW200019","url":null,"abstract":"","PeriodicalId":331476,"journal":{"name":"Knowledge Graphs for eXplainable Artificial Intelligence","volume":"22 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114123470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Benchmarking the Lifecycle of Knowledge Graphs
DOI: 10.3233/SSW200012
Michael Röder, M. A. Sherif, Muhammad Saleem, Felix Conrads, A. N. Ngomo
{"title":"Benchmarking the Lifecycle of Knowledge Graphs","authors":"Michael Röder, M. A. Sherif, Muhammad Saleem, Felix Conrads, A. N. Ngomo","doi":"10.3233/SSW200012","DOIUrl":"https://doi.org/10.3233/SSW200012","url":null,"abstract":"","PeriodicalId":331476,"journal":{"name":"Knowledge Graphs for eXplainable Artificial Intelligence","volume":"80 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132727935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generating Explanations in Natural Language from Knowledge Graphs
DOI: 10.3233/SSW200020
Diego Moussallem, René Speck, A. N. Ngomo
{"title":"Generating Explanations in Natural Language from Knowledge Graphs","authors":"Diego Moussallem, René Speck, A. N. Ngomo","doi":"10.3233/SSW200020","DOIUrl":"https://doi.org/10.3233/SSW200020","url":null,"abstract":"","PeriodicalId":331476,"journal":{"name":"Knowledge Graphs for eXplainable Artificial Intelligence","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133378782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Managing Identity in Knowledge-Based Explainable Systems
DOI: 10.3233/SSW200025
Ilaria Tiddi, Joe Raad
{"title":"Managing Identity in Knowledge-Based Explainable Systems","authors":"Ilaria Tiddi, Joe Raad","doi":"10.3233/SSW200025","DOIUrl":"https://doi.org/10.3233/SSW200025","url":null,"abstract":"","PeriodicalId":331476,"journal":{"name":"Knowledge Graphs for eXplainable Artificial Intelligence","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128421598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0