
Neuro-Symbolic Artificial Intelligence: Latest Publications

Logic Meets Learning: From Aristotle to Neural Networks
Pub Date : 2021-12-22 DOI: 10.3233/faia210350
Vaishak Belle
The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence. In this chapter, we survey work that provides evidence for the long-standing and deep connections between logic and learning. After a brief historical prelude, our narrative is then structured in terms of three strands of interaction: logic versus learning, machine learning for logic, and logic for machine learning, but with ample overlap.
Citations: 5
Generalizable Neuro-symbolic Systems for Commonsense Question Answering
Pub Date : 2021-12-22 DOI: 10.3233/FAIA210360
A. Oltramari, Jonathan M Francis, Filip Ilievski, Kaixin Ma, Roshanak Mirzaee
This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks. Different methods for integrating neural language models and knowledge graphs are discussed. The situations in which this combination is most appropriate are characterized, including quantitative evaluation and qualitative error analysis on a variety of commonsense question answering benchmark datasets.
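As a rough illustration of the integration pattern surveyed here, the following minimal Python sketch retrieves knowledge-graph triples relevant to a question and verbalizes them as extra context for a language model. The toy triple store, the lexical-overlap retriever, and all names are hypothetical; the systems discussed in the chapter use far richer retrieval and neural scoring.

# Hedged sketch: KG triples as extra LM context (toy data, hypothetical names).
TRIPLES = [
    ("fridge", "used_for", "keeping food cold"),
    ("fridge", "at_location", "kitchen"),
    ("stove", "used_for", "cooking food"),
]

def retrieve(question, k=2):
    # Rank triples by naive lexical overlap with the question.
    q_tokens = set(question.lower().replace("?", "").split())
    return sorted(TRIPLES, key=lambda t: -len(q_tokens & set(" ".join(t).split())))[:k]

def verbalize(triples):
    # Turn (head, relation, tail) into natural-language statements.
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

question = "Where would you find a fridge?"
context = verbalize(retrieve(question))
# A real system would feed context + question to a neural language model;
# here we only show the knowledge-augmented input.
print(context, question)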
Citations: 2
Answering Natural-Language Questions with Neuro-Symbolic Knowledge Bases
Pub Date : 2021-12-22 DOI: 10.3233/faia210352
Haitian Sun, Pat Verga, William W. Cohen
Symbolic reasoning systems based on first-order logics are computationally powerful, and feedforward neural networks are computationally efficient, so unless P=NP, neural networks cannot, in general, emulate symbolic logics. Bridging the gap between neural and symbolic methods therefore requires achieving a delicate balance: one needs to incorporate just enough symbolic reasoning to be useful for a task, but not so much as to cause computational intractability. In this chapter we first present results that make this claim precise, and then use these formal results to inform the choice of a neuro-symbolic knowledge-based reasoning system based on a set-based dataflow query language. We then present experimental results for a number of variants of this neuro-symbolic reasoner, and show that it can be closely integrated into modern neural language models.
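To make the set-based dataflow idea concrete, here is a minimal Python/NumPy sketch under invented data: an entity set is a weighted vector over a fixed entity vocabulary, and following a relation is a matrix-vector product against that relation's adjacency matrix. The chapter's reasoner operates on large sparse representations; this dense toy only shows the shape of the computation.

import numpy as np

entities = ["einstein", "ulm", "germany"]
idx = {e: i for i, e in enumerate(entities)}

# One adjacency matrix per relation: M[r][i, j] = 1 iff (e_i, r, e_j) holds.
M = {r: np.zeros((3, 3)) for r in ("born_in", "located_in")}
M["born_in"][idx["einstein"], idx["ulm"]] = 1.0
M["located_in"][idx["ulm"], idx["germany"]] = 1.0

def follow(x, rel):
    # Map a weighted entity set x to the set reachable via rel.
    return x @ M[rel]

# "In which country was Einstein born?" as two dataflow steps.
x = np.zeros(3)
x[idx["einstein"]] = 1.0
print(follow(follow(x, "born_in"), "located_in"))  # weight lands on "germany"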
Citations: 0
A Constraint-Based Approach to Learning and Reasoning
Pub Date : 2021-12-22 DOI: 10.3233/faia210355
Michelangelo Diligenti, Francesco Giannini, M. Gori, Marco Maggini, G. Marra
Neural-symbolic models bridge the gap between sub-symbolic and symbolic approaches, both of which have significant limitations. Sub-symbolic approaches, like neural networks, require a large amount of labeled data to be successful, whereas symbolic approaches, like logic reasoners, require only a small amount of prior domain knowledge but do not easily scale to large collections of data. This chapter presents a general approach to integrating learning and reasoning, based on translating the available prior knowledge into an undirected graphical model. Potentials on the graphical model are designed to accommodate dependencies among random variables by means of a set of trainable functions, like those computed by neural networks. The resulting neural-symbolic framework can effectively leverage training data, when available, while exploiting high-level logic reasoning in a given domain of discourse. Although exact inference is intractable within this model, different tractable models can be derived by making different assumptions. In particular, three models are presented in this chapter: Semantic-Based Regularization, Deep Logic Models and Relational Neural Machines. Semantic-Based Regularization is a scalable neural-symbolic model that does not adapt the parameters of the reasoner, under the assumption that the provided prior knowledge is correct and must be exactly satisfied. Deep Logic Models preserve the scalability of Semantic-Based Regularization while providing a flexible exploitation of logic knowledge by co-training the parameters of the reasoner during the learning procedure. Finally, Relational Neural Machines offer the fundamental advantages of replicating the effectiveness of training from supervised data in standard deep architectures and of preserving the generality and expressive power of Markov Logic Networks when considering pure reasoning on symbolic data. The coupling between learning and reasoning is very general, as any (deep) learner can be adopted and any output structure expressed via First-Order Logic can be integrated. However, exact inference within a Relational Neural Machine is still intractable, and different factorizations are discussed to increase the scalability of the approach.
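The following minimal PyTorch sketch shows the flavor of turning a logic constraint into a differentiable penalty, in the spirit of Semantic-Based Regularization. The specific rule, the product t-norm encoding, and the two toy predicate networks are illustrative assumptions, not the chapter's exact construction.

import torch

a_net = torch.nn.Linear(4, 1)   # grounds predicate A(x)
b_net = torch.nn.Linear(4, 1)   # grounds predicate B(x)

def rule_penalty(x):
    # Penalize violations of FORALL x: A(x) -> B(x).
    # Under product t-norm semantics, the implication is violated to degree
    # a * (1 - b); averaging over samples approximates the quantifier.
    a = torch.sigmoid(a_net(x))
    b = torch.sigmoid(b_net(x))
    return (a * (1.0 - b)).mean()

x = torch.randn(32, 4)          # unlabeled samples
loss = rule_penalty(x)          # added to any supervised loss term
loss.backward()                 # the constraint shapes both predicate networks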
Citations: 2
Tractable Boolean and Arithmetic Circuits
Pub Date : 2021-12-22 DOI: 10.3233/faia210353
Adnan Darwiche
Tractable Boolean and arithmetic circuits have been studied extensively in AI for over two decades now. These circuits were initially proposed as “compiled objects,” meant to facilitate logical and probabilistic reasoning, as they permit various types of inference to be performed in linear time and a feed-forward fashion like neural networks. In more recent years, the role of tractable circuits has significantly expanded as they became a computational and semantical backbone for some approaches that aim to integrate knowledge, reasoning and learning. In this chapter, we review the foundations of tractable circuits and some associated milestones, while focusing on their core properties and techniques that make them particularly useful for the broad aims of neuro-symbolic AI.
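The linear-time, feed-forward character of such circuits can be seen in a minimal Python sketch: a single bottom-up pass evaluates the circuit, much like a forward pass through a network. The tiny sum/product circuit below is an invented toy, not one compiled from a real model.

def evaluate(node, leaf_values):
    # Evaluate a sum/product circuit bottom-up. Each node is a tuple:
    # ('leaf', name) | ('mul', children) | ('add', [(weight, child), ...]).
    kind = node[0]
    if kind == "leaf":
        return leaf_values[node[1]]
    if kind == "mul":
        result = 1.0
        for child in node[1]:
            result *= evaluate(child, leaf_values)
        return result
    return sum(w * evaluate(child, leaf_values) for w, child in node[1])

# 0.3 * (x AND y) + 0.7 * (x AND not-y), with indicator leaves.
circuit = ("add", [
    (0.3, ("mul", [("leaf", "x"), ("leaf", "y")])),
    (0.7, ("mul", [("leaf", "x"), ("leaf", "ny")])),
])
# Probability of evidence x=1: set both y-indicators to 1 to marginalize y.
print(evaluate(circuit, {"x": 1.0, "y": 1.0, "ny": 1.0}))  # -> 1.0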
Citations: 0
Abductive Learning
Pub Date : 2021-12-22 DOI: 10.1007/springerreference_226307
Zhi-Hua Zhou, Yu-Xuan Huang
Citations: 7
Graph Reasoning Networks and Applications
Pub Date : 2021-12-22 DOI: 10.3233/faia210351
Qingxing Cao, Wentao Wan, Xiaodan Liang, Liang Lin
Despite their significant success in various domains, data-driven deep neural networks compromise feature interpretability, lack global reasoning capability, and cannot incorporate the external information that is crucial for complicated real-world tasks. Since structured knowledge can provide rich cues for recording human observations and commonsense, it is desirable to bridge symbolic semantics with learned local feature representations. In this chapter, we review works that incorporate different domain knowledge into the intermediate feature representation. These methods first construct a domain-specific graph that represents related human knowledge. Then, they characterize node representations with neural network features and perform graph convolution to enhance these symbolic nodes via a graph neural network (GNN). Lastly, they map the enhanced node features back into the neural network for further propagation or prediction. By integrating knowledge graphs into neural networks, one can combine feature learning and graph reasoning under the same supervised loss function and obtain a more effective and interpretable way to introduce structure constraints.
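A minimal NumPy sketch of the enhancement step described above: symbolic nodes carry neural features, and one round of graph convolution mixes information along the knowledge-graph edges. The hand-built graph, the sizes, and the single GCN-style update are illustrative assumptions.

import numpy as np

n_nodes, dim = 4, 8
A = np.array([[0, 1, 1, 0],      # hand-built domain graph (symmetric)
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_hat = A + np.eye(n_nodes)                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

H = np.random.randn(n_nodes, dim)            # node features from a CNN/LM backbone
W = np.random.randn(dim, dim) * 0.1          # trainable weights

H_enhanced = np.maximum(0.0, A_norm @ H @ W) # one ReLU graph convolution
# H_enhanced would be mapped back into the base network for prediction.
print(H_enhanced.shape)                      # (4, 8)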
Citations: 1
Symbolic Reasoning in Latent Space: Classical Planning as an Example
Pub Date : 2021-12-22 DOI: 10.3233/faia210349
Masataro Asai, Hiroshi Kajino, A. Fukunaga, Christian Muise
Symbolic systems require hand-coded symbolic representations as input, resulting in a knowledge-acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, its knowledge is encoded in a sub-symbolic representation that is incompatible with symbolic systems. To address the gap between the two fields, one has to solve the Symbol Grounding problem: the question of how a machine can generate symbols automatically. We discuss our recent work, Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of the transitions allowed in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when a pair of images representing the initial and goal states (planning inputs) is given, Latplan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. We discuss several key ideas that made Latplan possible, which we hope will extend to many other symbolic paradigms beyond classical planning.
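One ingredient of this pipeline is easy to sketch: once image pairs are encoded as binary latent vectors, a propositional action's add and delete effects can be read off from which bits flip between the pre- and post-state. The minimal Python sketch below uses invented bit vectors as stand-ins for a discrete autoencoder's output; it illustrates the idea only, not Latplan's actual action-model learner.

# Hedged sketch: reading propositional effects off latent bit flips.
pre  = [1, 0, 1, 0, 0]   # latent propositions before the transition
post = [1, 1, 0, 0, 0]   # latent propositions after the transition

add_effects    = [i for i, (p, q) in enumerate(zip(pre, post)) if q > p]
delete_effects = [i for i, (p, q) in enumerate(zip(pre, post)) if p > q]

# In PDDL terms: this transition adds proposition z1 and deletes z2.
print("add:", add_effects, "delete:", delete_effects)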
Citations: 0
Logic Tensor Networks: Theory and Applications
Pub Date : 2021-12-22 DOI: 10.3233/faia210498
L. Serafini, A.S. d'Avila Garcez, Samy Badreddine, Ivan Donadello, Michael Spranger, Federico Bianchi
The recent availability of large-scale data combining multiple data modalities has opened various research and commercial opportunities in Artificial Intelligence (AI). Machine Learning (ML) has achieved important results in this area mostly by adopting sub-symbolic distributed representations. It is now generally accepted that such purely sub-symbolic approaches can be data-inefficient and struggle with extrapolation and reasoning. By contrast, symbolic AI is based on rich, high-level representations, ideally built from human-readable symbols. Despite being more explainable and successful at reasoning, symbolic AI usually struggles when faced with incomplete knowledge or with inaccurate, large data sets and combinatorial knowledge. Neurosymbolic AI attempts to benefit from the strengths of both approaches, combining reasoning over complex representations of knowledge with efficient learning from multiple data modalities. Hence, neurosymbolic AI seeks to ground rich knowledge in efficient sub-symbolic representations and to explain sub-symbolic representations and deep learning by offering high-level symbolic descriptions of such learning systems. Logic Tensor Networks (LTN) are a neurosymbolic AI system for querying, learning and reasoning with rich data and abstract knowledge. LTN introduces Real Logic, a fully differentiable first-order language with concrete semantics, such that every symbolic expression has an interpretation that is grounded onto real numbers in the domain. In particular, LTN converts Real Logic formulas into computational graphs that enable gradient-based optimization. This chapter presents the LTN framework and illustrates its use on knowledge-completion tasks that ground relational predicates (symbols) in a concrete interpretation (vectors and tensors). It then investigates the use of LTN for semi-supervised learning, learning of embeddings, and reasoning. LTN has recently been applied to many important AI tasks, including semantic image interpretation, ontology learning and reasoning, and reinforcement learning, which use LTN for supervised classification, data clustering, semi-supervised learning, embedding learning, reasoning and query answering. The chapter presents some of the main recent applications of LTN before analyzing results in the context of related work and discussing next steps for neurosymbolic AI and LTN-based AI models.
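The flavor of Real Logic grounding can be conveyed in a minimal PyTorch sketch: predicates are differentiable functions into [0, 1], a fuzzy operator interprets the implication, and an aggregator approximates the universal quantifier. The particular networks, the Reichenbach implication (1 - a + a*b), and the mean aggregator are illustrative choices among the operators LTN supports, not a faithful reimplementation.

import torch

A = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.ReLU(),
                        torch.nn.Linear(8, 1), torch.nn.Sigmoid())
B = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.ReLU(),
                        torch.nn.Linear(8, 1), torch.nn.Sigmoid())

def forall_implies(x):
    # Truth degree of FORALL x: A(x) -> B(x) over a batch of groundings.
    a, b = A(x), B(x)
    implies = 1.0 - a + a * b   # Reichenbach fuzzy implication
    return implies.mean()       # simple aggregator for the quantifier

x = torch.randn(64, 2)          # constants grounded as real vectors
sat = forall_implies(x)
loss = 1.0 - sat                # maximize formula satisfaction by gradient
loss.backward()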
Citations: 2
Neuro-Symbolic Artificial Intelligence: The State of the Art
Pub Date : 2021-12-22 DOI: 10.3233/faia342
P. Hitzler, Md Kamruzzaman Sarker
Citations: 57