Ontology-Mediated Querying with Horn Description Logics
Leif Sabellek
Künstliche Intelligenz 34(4): 533-537. Pub Date: 2020-01-01. Epub Date: 2020-06-21. DOI: 10.1007/s13218-020-00674-7
An ontology-mediated query (OMQ) consists of a database query paired with an ontology. When evaluated on a database, an OMQ returns not only the answers that are already in the database, but also the answers that can be obtained via logical reasoning using the rules of the ontology. Many questions regarding the complexity of problems related to OMQs are still open. Motivated by the use of ontologies in practice, new reasoning problems that have never been considered in the context of ontologies become relevant, since they can improve the usability of ontology-enriched systems. This thesis deals with various reasoning problems that emerge from ontology-mediated querying and investigates their computational complexity. We focus on ontologies formulated in Horn description logics, which are a popular choice for ontologies in practice. In particular, the thesis gives results on the data complexity of OMQ evaluation by completely classifying the complexity and rewritability questions for OMQs based on an EL ontology and a conjunctive query. Furthermore, the query-by-example problem as well as the expressibility and verification problems in ontology-based data access are introduced and investigated.
{"title":"Ontology-Mediated Querying with Horn Description Logics.","authors":"Leif Sabellek","doi":"10.1007/s13218-020-00674-7","DOIUrl":"https://doi.org/10.1007/s13218-020-00674-7","url":null,"abstract":"<p><p>An ontology-mediated query (OMQ) consists of a database query paired with an ontology. When evaluated on a database, an OMQ returns not only the answers that are already in the database, but also those answers that can be obtained via logical reasoning using rules from ontology. There are many open questions regarding the complexities of problems related to OMQs. Motivated by the use of ontologies in practice, new reasoning problems which have never been considered in the context of ontologies become relevant, since they can improve the usability of ontology enriched systems. This thesis deals with various reasoning problems that emerge from ontology-mediated querying and it investigates the computational complexity of these problems. We focus on ontologies formulated in Horn description logics, which are a popular choice for ontologies in practice. In particular, the thesis gives results regarding the data complexity of OMQ evaluation by completely classifying complexity and rewritability questions for OMQs based on an EL ontology and a conjunctive query. Furthermore, the query-by-example problem, and the expressibility and verification problem in ontology-based data access are introduced and investigated.</p>","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"34 4","pages":"533-537"},"PeriodicalIF":2.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13218-020-00674-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38738074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI in Medicine, Covid-19 and Springer Nature's Open Access Agreement
Daniel Sonntag
Künstliche Intelligenz 34(2): 123-125. Pub Date: 2020-01-01. Epub Date: 2020-06-03. DOI: 10.1007/s13218-020-00661-y
{"title":"AI in Medicine, Covid-19 and Springer Nature's Open Access Agreement.","authors":"Daniel Sonntag","doi":"10.1007/s13218-020-00661-y","DOIUrl":"https://doi.org/10.1007/s13218-020-00661-y","url":null,"abstract":"","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"34 2","pages":"123-125"},"PeriodicalIF":2.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13218-020-00661-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38031134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Special Issue on Ontologies and Data Management: Part I
Thomas Schneider, Mantas Šimkus
Künstliche Intelligenz 34(3): 287-289. Pub Date: 2020-01-01. Epub Date: 2020-09-16. DOI: 10.1007/s13218-020-00682-7
{"title":"Special Issue on Ontologies and Data Management: Part I.","authors":"Thomas Schneider, Mantas Šimkus","doi":"10.1007/s13218-020-00682-7","DOIUrl":"https://doi.org/10.1007/s13218-020-00682-7","url":null,"abstract":"","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"34 3","pages":"287-289"},"PeriodicalIF":2.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13218-020-00682-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38404251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fazit und Ausblick (Conclusion and Outlook)
Sabine von Oelffen, U. Bär
Pub Date: 2020-01-01. DOI: 10.1007/978-3-658-30506-2_12
{"title":"Fazit und Ausblick","authors":"Sabine von Oelffen, U. Bär","doi":"10.1007/978-3-658-30506-2_12","DOIUrl":"https://doi.org/10.1007/978-3-658-30506-2_12","url":null,"abstract":"","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"1 1","pages":""},"PeriodicalIF":2.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"51271844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI for Ancient Games: Report on the Digital Ludeme Project
Cameron Browne
Künstliche Intelligenz 34(1): 89-93. Pub Date: 2020-01-01. Epub Date: 2019-07-01. DOI: 10.1007/s13218-019-00600-6
This report summarises the Digital Ludeme Project, a recently launched 5-year research project being conducted at Maastricht University. This computational study of the world's traditional strategy games seeks to improve our understanding of early games, their development, and their role in the spread of related mathematical ideas throughout recorded human history.
{"title":"AI for Ancient Games: Report on the Digital Ludeme Project.","authors":"Cameron Browne","doi":"10.1007/s13218-019-00600-6","DOIUrl":"https://doi.org/10.1007/s13218-019-00600-6","url":null,"abstract":"<p><p>This report summarises the Digital Ludeme Project, a recently launched 5-year research project being conducted at Maastricht University. This computational study of the world's traditional strategy games seeks to improve our understanding of early games, their development, and their role in the spread of related mathematical ideas throughout recorded human history.</p>","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"34 1","pages":"89-93"},"PeriodicalIF":2.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13218-019-00600-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37912904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ontologies and Data Management: A Brief Survey
Thomas Schneider, Mantas Šimkus
Künstliche Intelligenz 34(3): 329-353. Pub Date: 2020-01-01. Epub Date: 2020-08-13. DOI: 10.1007/s13218-020-00686-3
Information systems have to deal with an increasing amount of data that is heterogeneous, unstructured, or incomplete. In order to align and complete data, systems may rely on taxonomies and background knowledge that are provided in the form of an ontology. This survey gives an overview of research work on the use of ontologies for accessing incomplete and/or heterogeneous data.
{"title":"Ontologies and Data Management: A Brief Survey.","authors":"Thomas Schneider, Mantas Šimkus","doi":"10.1007/s13218-020-00686-3","DOIUrl":"10.1007/s13218-020-00686-3","url":null,"abstract":"<p><p>Information systems have to deal with an increasing amount of data that is heterogeneous, unstructured, or incomplete. In order to align and complete data, systems may rely on taxonomies and background knowledge that are provided in the form of an ontology. This survey gives an overview of research work on the use of ontologies for accessing incomplete and/or heterogeneous data.</p>","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"34 3","pages":"329-353"},"PeriodicalIF":2.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7497697/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38442163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rewriting Approaches for Ontology-Mediated Query Answering
Shqiponja Ahmetaj
Künstliche Intelligenz 34(4): 523-526. Pub Date: 2020-01-01. Epub Date: 2020-06-11. DOI: 10.1007/s13218-020-00671-w
A most promising approach to answering queries in ontology-based data access (OBDA) is through query rewriting. In this paper we present novel rewriting approaches for several extensions of OBDA. The goal is to understand their relative expressiveness and to pave the way for efficient query answering algorithms.
{"title":"Rewriting Approaches for Ontology-Mediated Query Answering.","authors":"Shqiponja Ahmetaj","doi":"10.1007/s13218-020-00671-w","DOIUrl":"10.1007/s13218-020-00671-w","url":null,"abstract":"<p><p>A most promising approach to answering queries in ontology-based data access (OBDA) is through query rewriting. In this paper we present novel rewriting approaches for several extensions of OBDA. The goal is to understand their relative expressiveness and to pave the way for efficient query answering algorithms.</p>","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"34 4","pages":"523-526"},"PeriodicalIF":2.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7732797/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38738073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations
Andreas Holzinger, André Carrington, Heimo Müller
Künstliche Intelligenz 34(2): 193-198. Pub Date: 2020-01-01. Epub Date: 2020-01-21. DOI: 10.1007/s13218-020-00636-z
Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Such autonomous approaches can be very convenient. However, in certain domains, e.g., the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML models, and a large variety of methods already exists. For example, with layer-wise relevance propagation, the parts of the inputs to, and of the representations within, a neural network that caused a result can be highlighted. This is an important first step towards ensuring that end users, e.g., medical professionals, can assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators alike. Interactive ML adds human expertise to AI/ML processes by enabling experts to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces, we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al., Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale.
{"title":"Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.","authors":"Andreas Holzinger, André Carrington, Heimo Müller","doi":"10.1007/s13218-020-00636-z","DOIUrl":"https://doi.org/10.1007/s13218-020-00636-z","url":null,"abstract":"<p><p>Recent success in Artificial Intelligence (AI) and Machine Learning (ML) allow problem solving automatically without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., in the medical domain, it is necessary to enable a domain expert to understand, <i>why</i> an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies transparency and traceability of opaque AI/ML and there are already a huge variety of methods. For example with layer-wise relevance propagation relevant parts of inputs to, and representations in, a neural network which caused a result, can be highlighted. This is a first important step to ensure that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML and of interest to professionals and regulators. Interactive ML adds the component of human expertise to AI/ML processes by enabling them to re-enact and retrace AI/ML results, e.g. let them check it for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces we have to deal with the question of <i>how to evaluate the quality of explanations</i> given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely-accepted usability scale.</p>","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"34 2","pages":"193-198"},"PeriodicalIF":2.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13218-020-00636-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38053584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}