{"title":"Embedding Ontologies in the Description Logic ALC by Axis-Aligned Cones","authors":"Özgür Lütfü Özcep, Mena Leemhuis, Diedrich Wolter","doi":"10.1613/jair.1.13939","DOIUrl":null,"url":null,"abstract":"This paper is concerned with knowledge graph embedding with background knowledge, taking the formal perspective of logics. In knowledge graph embedding, knowledge— expressed as a set of triples of the form (a R b) (“a is R-related to b”)—is embedded into a real-valued vector space. The embedding helps exploiting geometrical regularities of the space in order to tackle typical inductive tasks of machine learning such as link prediction. Recent embedding approaches also consider incorporating background knowledge, in which the intended meanings of the symbols a, R, b are further constrained via axioms of a theory. Of particular interest are theories expressed in a formal language with a neat semantics and a good balance between expressivity and feasibility. In that case, the knowledge graph together with the background can be considered to be an ontology. This paper develops a cone-based theory for embedding in order to advance the expressivity of the ontology: it works (at least) with ontologies expressed in the description logic ALC, which comprises restricted existential and universal quantifiers, as well as concept negation and concept disjunction. In order to align the classical Tarskian Style semantics for ALC with the sub-symbolic representation of triples, we use the notion of a geometric model of an ALC ontology and show, as one of our main results, that an ALC ontology is satisfiable in the classical sense iff it is satisfiable by a geometric model based on cones. The geometric model, if treated as a partial model, can even be chosen to be faithful, i.e., to reflect all and only the knowledge captured by the ontology. 
We introduce the class of axis-aligned cones and show that modulo simple geometric operations any distributive logic (such as ALC) interpreted over cones employs this class of cones. Cones are also attractive from a machine learning perspective on knowledge graph embeddings since they give rise to applying conic optimization techniques.","PeriodicalId":54877,"journal":{"name":"Journal of Artificial Intelligence Research","volume":"59 11","pages":"0"},"PeriodicalIF":4.5000,"publicationDate":"2023-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Artificial Intelligence Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1613/jair.1.13939","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
This paper is concerned with knowledge graph embedding with background knowledge, taking the formal perspective of logics. In knowledge graph embedding, knowledge, expressed as a set of triples of the form (a R b) ("a is R-related to b"), is embedded into a real-valued vector space. The embedding helps exploit geometrical regularities of the space in order to tackle typical inductive tasks of machine learning such as link prediction. Recent embedding approaches also consider incorporating background knowledge, in which the intended meanings of the symbols a, R, b are further constrained via the axioms of a theory. Of particular interest are theories expressed in a formal language with a neat semantics and a good balance between expressivity and feasibility. In that case, the knowledge graph together with the background knowledge can be considered an ontology. This paper develops a cone-based theory for embedding in order to advance the expressivity of the ontology: it works (at least) with ontologies expressed in the description logic ALC, which comprises restricted existential and universal quantifiers, as well as concept negation and concept disjunction. In order to align the classical Tarskian-style semantics for ALC with the sub-symbolic representation of triples, we use the notion of a geometric model of an ALC ontology and show, as one of our main results, that an ALC ontology is satisfiable in the classical sense iff it is satisfiable by a geometric model based on cones. The geometric model, if treated as a partial model, can even be chosen to be faithful, i.e., to reflect all and only the knowledge captured by the ontology. We introduce the class of axis-aligned cones and show that, modulo simple geometric operations, any distributive logic (such as ALC) interpreted over cones employs this class of cones. Cones are also attractive from a machine learning perspective on knowledge graph embeddings since they give rise to applying conic optimization techniques.
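To make the cone-based idea concrete, the following is a minimal illustrative sketch, not the paper's formal construction: it assumes an axis-aligned cone in R^n can be modeled as one sign constraint per axis (one of {0}, the non-negative reals, the non-positive reals, or all of R), so that concept conjunction corresponds to componentwise intersection of cones. The names `intersect` and `contains` are hypothetical helpers introduced here for illustration; consult the paper for the precise definition of axis-aligned cones and the full ALC semantics.

```python
# Sketch: an "axis-aligned cone" in R^n as a tuple of per-axis constraints,
# each one of: 'zero' -> {0}, 'neg' -> (-inf, 0], 'pos' -> [0, inf), 'all' -> R.
# This is a simplifying assumption for illustration, not the paper's definition.

AXIS_MEET = {  # intersection of two distinct one-dimensional cones
    ('all', 'pos'): 'pos', ('all', 'neg'): 'neg', ('all', 'zero'): 'zero',
}

def meet_axis(a, b):
    """Intersect two 1-D cones; any unlisted distinct pair meets in {0}."""
    if a == b:
        return a
    return AXIS_MEET.get((a, b)) or AXIS_MEET.get((b, a)) or 'zero'

def intersect(cone1, cone2):
    """Componentwise intersection of two axis-aligned cones
    (the geometric counterpart of ALC concept conjunction here)."""
    return tuple(meet_axis(a, b) for a, b in zip(cone1, cone2))

def contains(cone, point):
    """Check whether a point of R^n lies in the cone."""
    checks = {'zero': lambda x: x == 0, 'neg': lambda x: x <= 0,
              'pos': lambda x: x >= 0, 'all': lambda x: True}
    return all(checks[c](x) for c, x in zip(cone, point))

# Two "concepts" as cones in R^3 and their conjunction:
C = ('pos', 'all', 'neg')
D = ('pos', 'pos', 'all')
C_and_D = intersect(C, D)                      # -> ('pos', 'pos', 'neg')
print(C_and_D, contains(C_and_D, (1.0, 2.0, -3.0)))
```

Note how closure under intersection comes for free in this per-axis representation; the appeal of cones for optimization stems from exactly this kind of well-behaved geometry.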
About the journal
JAIR (ISSN 1076-9757) covers all areas of artificial intelligence (AI), publishing refereed research articles, survey articles, and technical notes. Established in 1993 as one of the first electronic scientific journals, JAIR is indexed by INSPEC, Science Citation Index, and MathSciNet. JAIR reviews papers within approximately three months of submission and publishes accepted articles on the internet immediately upon receiving the final versions. JAIR articles are published for free distribution on the internet by the AI Access Foundation, and for purchase in bound volumes by AAAI Press.