Omega-Completeness of the Logic of Here-and-There and Strong Equivalence of Logic Programs
Jorge Fandinno, V. Lifschitz
https://doi.org/10.24963/kr.2023/24
The theory of strongly equivalent transformations is an essential part of the methodology of representing knowledge in answer set programming. Strong equivalence of two programs can sometimes be characterized as the possibility of deriving the rules of each program from the rules of the other in some deductive system. This paper describes a system with this property for the language mini-GRINGO. The key to the proof is an ω-completeness theorem for the many-sorted logic of here-and-there.
Weak-Ensconcement for Shielded Base Contraction
Alejandro J. Mercado, Daniel A. Grimaldi, R. Rodríguez
https://doi.org/10.24963/kr.2023/51
In this article, we provide a weak version of ensconcement that characterizes an interesting family of shielded base contractions. In turn, this characterization induces a class of AGM contractions satisfying certain postulates that we reveal here. Finally, we show a connection between the class of contractions given by our weak ensconcement and other kinds of base contraction operators. In doing so, we also point out a flaw in the original theorems that link epistemic entrenchment with ensconcement (which are well established in the literature), and we provide two possible solutions.
Foundations for Projecting Away the Irrelevant in ASP Programs
Z. G. Saribatur, S. Woltran
https://doi.org/10.24963/kr.2023/60
Simplification of logic programs under the answer-set semantics has been studied from the very beginning of the field. One natural simplification is the removal of atoms that are deemed irrelevant. While equivalence-preserving rewritings are well understood and incorporated in state-of-the-art systems, more careful rewritings in the realm of strong or uniform equivalence have received considerably less attention. This might be due to the fact that these equivalence notions rely on comparisons with respect to context programs that remain the same for both the original and the simplified program. In this work, we pursue the idea that the atoms considered irrelevant are disregarded accordingly in the context programs of the simplification, and we propose novel equivalence notions for this purpose. We provide necessary and sufficient conditions for this kind of simplifiability of programs, and show that such simplifications, if possible, can actually be achieved by just projecting the atoms from the programs themselves. We furthermore provide complexity results for the problems of deciding simplifiability and equivalence testing.
Verification of Semantic Key Point Detection for Aircraft Pose Estimation
Panagiotis Kouvaros, Francesco Leofante, Blake Edwards, Calvin Chung, D. Margineantu, A. Lomuscio
https://doi.org/10.24963/kr.2023/77
We analyse semantic segmentation neural networks running on an autonomous aircraft to estimate its 6DOF pose during landing. We show that automated reasoning techniques from neural network verification can be used to analyse the conditions under which the networks can operate safely, thus providing enhanced assurance guarantees on the behaviour of the overall pose estimation systems.
Revising Typical Beliefs: One Revision to Rule Them All
J. Heyninck, Giovanni Casini, T. Meyer, U. Straccia
https://doi.org/10.24963/kr.2023/35
Propositional Typicality Logic (PTL) extends propositional logic with a connective • expressing the most typical (i.e., normal or conventional) situations in which a given sentence holds. As such, it generalises, e.g., preferential logics that formalise reasoning with conditionals such as "birds typically fly". In this paper, we study revision of sets of PTL-sentences. We first show why it is necessary to extend the PTL-language with a possibility operator, and then define the revision of PTL-sentences syntactically and characterise it semantically. We show that this allows us to represent a wide variety of existing revision methods, such as propositional revision and revision of epistemic states. Furthermore, we provide several examples showing why our approach is innovative. In more detail, we study revision of a set of conditionals under preferential closure, and the addition and contraction of possible worlds from an epistemic state.
Succinctness and Complexity of ALC with Counting Perceptrons
P. Galliani, O. Kutz, N. Troquard
https://doi.org/10.24963/kr.2023/29
Perceptron operators have been introduced to knowledge representation languages such as description logics in order to define concepts by listing features with associated weights and by giving a threshold. Semantically, an individual then belongs to such a concept if the weighted sum of the listed features it belongs to reaches that threshold. Such operators have subsequently been applied to cognitively motivated modelling scenarios and to building bridges between learning and reasoning. However, they suffer from the basic limitation that they cannot consider the weight or number of role fillers. This paper introduces an extension of the basic perceptron operator language to address this shortcoming, defining the language ALCP and answering some basic questions regarding the succinctness and complexity of the new language. Namely, we show firstly that ALCP+, where weights are positive, is expressively equivalent to ALCQ, whilst the general case, which also allows negative weights, is strictly more expressive. Secondly, ALCP+ is shown to be strictly more succinct than ALCQ. Thirdly, capitalising on results concerning the logic ALCSCC, we show that despite the added expressivity, reasoning in ALCP remains EXPTIME-complete.
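The threshold semantics described in this abstract (an individual belongs to a perceptron-defined concept when the weighted sum of the features it satisfies reaches the threshold) can be illustrated with a small sketch. The feature names, weights, and threshold below are invented for illustration and are not taken from the paper.

```python
def in_threshold_concept(individual_features, weighted_features, threshold):
    """Return True iff the weighted sum of satisfied features reaches the threshold."""
    score = sum(w for f, w in weighted_features.items() if f in individual_features)
    return score >= threshold

# Toy concept with a negative weight, as in the general (non-positive) case.
weights = {"HasWings": 2, "LaysEggs": 1, "Flies": -2}
print(in_threshold_concept({"HasWings", "LaysEggs"}, weights, 3))  # True: 2 + 1 = 3
print(in_threshold_concept({"HasWings", "Flies"}, weights, 3))     # False: 2 - 2 = 0
```

Note that with a negative weight, satisfying an additional feature can push an individual out of the concept, which is one intuition for why the general language is more expressive than its positive-weight fragment.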
Do Repeat Yourself: Understanding Sufficient Conditions for Restricted Chase Non-Termination
Lukas Gerlach, David Carral
https://doi.org/10.24963/kr.2023/30
The disjunctive restricted chase is a sound and complete procedure for solving Boolean conjunctive query entailment over knowledge bases of disjunctive existential rules. Alas, this procedure does not always terminate, and checking whether it does is undecidable. However, we can use acyclicity notions (sufficient conditions that imply termination) to effectively apply the chase in many real-world cases. To know if these conditions are as general as possible, we can use cyclicity notions (sufficient conditions that imply non-termination). In this paper, we discuss some issues with previously existing cyclicity notions, propose some novel notions for non-termination by dismantling the original idea, and empirically verify the generality of the new criteria.
Complexity of Inconsistency-Tolerant Query Answering in Datalog+/- under Preferred Repairs
Thomas Lukasiewicz, Enrico Malizia, Cristian Molinaro
https://doi.org/10.24963/kr.2023/46
Inconsistency-tolerant semantics have been proposed to provide meaningful ontological query answers even in the presence of inconsistencies. Several such semantics rely on the notion of a repair, which is a "maximal" consistent subset of the database, where different maximality criteria might be adopted depending on the application at hand. Previous work in the context of Datalog+/- has considered only the subset and cardinality maximality criteria. Here we take a step further and study inconsistency-tolerant semantics under maximality criteria based on weights and priority levels. We provide a thorough complexity analysis for a wide range of existential rule languages and for several complexity measures.
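The repair notion in this abstract (a maximal consistent subset of the database, under a chosen maximality criterion) can be sketched with a brute-force enumerator. The facts, the consistency check, and the weights below are toy examples of my own, not from the paper, and the enumeration is exponential; the paper's point is precisely the complexity of doing this kind of reasoning.

```python
from itertools import combinations

def subset_maximal_repairs(database, consistent):
    """Enumerate subset-maximal consistent subsets (repairs) of a database.

    Iterates over subsets from largest to smallest, keeping a subset only if
    it is consistent and not strictly contained in a repair found already.
    """
    facts = list(database)
    repairs = []
    for k in range(len(facts), -1, -1):
        for combo in combinations(facts, k):
            s = frozenset(combo)
            if consistent(s) and not any(s < r for r in repairs):
                repairs.append(s)
    return repairs

# Toy inconsistency: the facts p and not_p cannot co-occur.
db = {"p", "not_p", "q"}
consistent = lambda s: not ({"p", "not_p"} <= s)
repairs = subset_maximal_repairs(db, consistent)
# Two subset-maximal repairs: {p, q} and {not_p, q}.

# A weight-based criterion, as studied in the paper, would instead prefer
# the repair with the greatest total weight (weights invented here):
weight = {"p": 3, "not_p": 1, "q": 2}
preferred = max(repairs, key=lambda r: sum(weight[f] for f in r))  # {p, q}
```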
Group Responsibility for Exceeding Risk Threshold
Maksim Gladyshev, N. Alechina, M. Dastani, D. Doder
https://doi.org/10.24963/kr.2023/32
The need for tools and techniques to formally analyze and trace the responsibility for unsafe outcomes to decision-making actors is urgent. Existing formal approaches assume that the unsafe outcomes for which actors can be held responsible are actually realized. This paper considers a broader notion of responsibility where unsafe outcomes are not necessarily realized, but their probabilities are unacceptably high. We present a logic combining strategic, probabilistic and temporal primitives designed to express concepts such as the risk of an undesirable outcome and being responsible for exceeding a risk threshold. We demonstrate that the proposed logic is complete and decidable.
On Training Neurons with Bounded Compilations
Lance Kennedy, Issouf Kindo, Arthur Choi
https://doi.org/10.24963/kr.2023/39
Knowledge compilation offers a formal approach to explaining and verifying the behavior of machine learning systems, such as neural networks. Unfortunately, compiling even an individual neuron into a tractable representation such as an Ordered Binary Decision Diagram (OBDD) is an NP-hard problem. In this paper, we consider the problem of training a neuron from data, subject to the constraint that it has a compact representation as an OBDD. Our approach is based on the observation that a neuron can be compiled into an OBDD in polytime if (1) the neuron has integer weights, and (2) its aggregate weight is bounded. Unfortunately, we first show that it is also NP-hard to train a neuron subject to these two constraints. On the other hand, we show that if we train a neuron generatively, rather than discriminatively, a neuron with bounded aggregate weight can be trained in pseudo-polynomial time. Hence, we propose the first efficient algorithm for training a neuron that is guaranteed to have a compact representation as an OBDD. Empirically, we show that our approach can train neurons with higher accuracy and more compact OBDDs.
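The tractability observation in this abstract (integer weights plus a bounded aggregate weight) can be illustrated with a standard dynamic program over partial weight sums: the number of reachable sums is bounded by the aggregate weight, which is why such a neuron's Boolean behavior is manageable in pseudo-polynomial time. This sketch counts satisfying inputs of a threshold neuron with my own toy weights; it is an illustration of the general principle, not the paper's OBDD compilation or training algorithm.

```python
from collections import Counter

def count_satisfying(weights, threshold):
    """Count inputs x in {0,1}^n with sum(w_i * x_i) >= threshold.

    Dynamic programming over partial sums: the table size is bounded by the
    number of distinct reachable sums, hence by the aggregate weight.
    """
    sums = Counter({0: 1})  # partial sum -> number of partial inputs reaching it
    for w in weights:
        nxt = Counter()
        for s, c in sums.items():
            nxt[s] += c        # this input bit set to 0
            nxt[s + w] += c    # this input bit set to 1
        sums = nxt
    return sum(c for s, c in sums.items() if s >= threshold)

# Toy neuron with integer weights [2, 1, -2] and threshold 1:
print(count_satisfying([2, 1, -2], 1))  # 4 of the 8 inputs fire the neuron
```

The same sum-indexed table is, in essence, a layered decision diagram: merging partial inputs that reach the same sum is what keeps the representation compact when the aggregate weight is bounded.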