Extension of constraint-procedural logic-generated environments for deep Q-learning agent training and benchmarking
Giovanni De Gasperis, Stefania Costantini, Andrea Rafanelli, Patrizio Migliarini, Ivan Letteri, Abeer Dyoub
Journal of Logic and Computation, published 2023-06-07. DOI: https://doi.org/10.1093/logcom/exad032
Abstract: Autonomous robots can be employed to explore unknown environments and to perform many tasks, such as detecting areas of interest or collecting target objects. Deep reinforcement learning (RL) is often used to train this kind of robot. However, few data sets of artificial environments for testing such robots are available, and building them from scratch takes a long time: a good data set usually demands considerable cost and human effort to satisfy the constraints imposed by the expected results. In the first part of this paper, we focus on specifying the properties of the solutions needed to build a data set, taking environment exploration as a case study. In the proposed approach, rather than using imperative programming, we explore the possibility of generating data sets with constraint programming in Prolog; in this phase, geometric predicates describe a virtual environment according to inter-space requirements. The second part of the paper focuses on testing the generated data set in an AI gym via space-search techniques. We developed a neuro-symbolic agent built from the following: (i) a deep Q-learning component, implemented in Python, that addresses a search problem in the virtual space via RL; the agent's goal is to explore a generated virtual environment seeking a target, improving its performance through the RL process; (ii) a symbolic component that redirects the search when the Q-learning component gets stuck in one part of the virtual environment, stimulating the agent to move to and explore other parts of it. Extensive experimentation has been performed and is reported, with promising results that demonstrate the effectiveness of the approach.

Constraint propagation on GPU: A case study for the AllDifferent constraint
Fabio Tardivo, A. Dovier, A. Formisano, L. Michel, Enrico Pontelli
Journal of Logic and Computation, published 2023-06-06. DOI: https://doi.org/10.1093/logcom/exad033
Abstract: The AllDifferent constraint is a fundamental tool in Constraint Programming. It naturally arises in many problems, from puzzles to scheduling and routing applications. Such popularity has prompted an extensive literature on filtering and propagation for this constraint. This paper investigates the use of Graphics Processing Units (GPUs) to accelerate filtering and propagation. In particular, the paper presents an efficient parallelization of the AllDifferent constraint on GPU, along with an analysis of different design and implementation choices and an evaluation of the performance of the resulting system on several benchmarks.

Some techniques for reasoning automatically on co-inductive data structures
N. Peltier
Journal of Logic and Computation, published 2023-06-06. DOI: https://doi.org/10.1093/logcom/exad028
Abstract: Some techniques are proposed for reasoning on co-inductive structures. First, we devise a sound axiomatization of (conservative extensions of) such structures, thus reducing the problem of checking whether a formula admits a co-inductive model to a first-order satisfiability test. We devise a class of structures, called regularly co-inductive, for which the axiomatization is complete (for other co-inductive structures, the proposed axiomatization is sound, but not complete). Then, we propose proof calculi for reasoning on such structures. We first show that some of the axioms mentioned above can be omitted if the inference rules are able to handle rational terms. Furthermore, under some conditions, some other axioms may be replaced by an additional inference rule that computes the solutions of fixpoint equations. Finally, we show that a stronger completeness result can be established under some additional conditions on the signature.

Efficient compliance checking of RDF data
Livio Robaldo, Francesco Pacenza, Jessica Zangari, Roberta Calegari, Francesco Calimeri, Giovanni Siragusa
Journal of Logic and Computation, published 2023-06-06. DOI: https://doi.org/10.1093/logcom/exad034
Abstract: Automated compliance checking, i.e. the task of automatically assessing whether states of affairs comply with normative systems, has recently received a lot of attention from the scientific community, also as a consequence of the increasing investments in Artificial Intelligence technologies for the legal domain (LegalTech). The authors of this paper deem crucial the research and implementation of compliance checkers that can directly process data in RDF format, as more and more (big) data in this format are becoming available worldwide, across a multitude of different domains. Among the automated technologies used in recent literature, to the best of our knowledge only two have been evaluated with input states of affairs encoded in RDF format. This paper formalizes a selected use case in these two technologies and compares the implementations, including simulations on shared synthetic datasets.

Permissive and regulative norms in deontic logic
Maya Olszewski, X. Parent, Leendert van der Torre
Journal of Logic and Computation, published 2023-05-31. DOI: https://doi.org/10.1093/logcom/exad024
Abstract: This article provides a systematic analysis of the well-known notions of weak and strong permission in input/output (I/O) logic. We extend the account of permission initially put forward by Makinson and Van der Torre to the whole family of I/O systems developed during the last two decades. The main contribution is a series of characterization results for strong permission, based on establishing the so-called non-repetition property. We also study an input/output logic not yet covered in the literature. It supports reasoning by cases—a natural feature of human reasoning. The output is not closed under logical entailment. At the same time, it avoids excess output using a consistency check—a technique familiar from non-monotonic logic. This makes it well suited for contrary-to-duty reasoning. The axiomatic characterization is in terms of a generalized OR rule. We discuss the implications of all this for our understanding of the notion of the coherence of a normative system. Topics for future research are identified.

Kettle logic in abstract argumentation
Timotheus Kampik
Journal of Logic and Computation, published 2023-05-25. DOI: https://doi.org/10.1093/logcom/exad027
Abstract: Kettle logic is a colloquial term that describes an agent’s advancement of inconsistent arguments in order to defeat a particular claim. Intuitively, a consistent subset of the advanced arguments should exist that is at least as successful at refuting the claim as the advancement of the set of inconsistent arguments. In this paper, we formalize this intuition and provide a formal analysis of kettle logic in abstract argumentation, a fundamental approach to computational argumentation, showing that all of the analysed abstract argumentation semantics (inference functions)—with the exception of naive semantics, which is considered a mere simplistic helper for the construction of other semantics—suffer from kettle logic. We also provide an approach to mitigating kettle logic under some circumstances. The key findings presented in this paper highlight that agents that apply the inference functions of abstract argumentation are—similarly to humans—receptive to persuasion by agents who deliberately advance inconsistent and intuitively ‘illogical’ claims. As abstract argumentation can be considered one of the most basic models of computational argumentation, this raises the question to what extent and under what circumstances kettle logic-free argumentation can and should be enforced by computational means.

The extended predicative Mahlo universe in Martin-Löf type theory
Peter Dybjer, Anton Setzer
Journal of Logic and Computation, published 2023-05-12. DOI: https://doi.org/10.1093/logcom/exad022
Abstract: This paper addresses the long-standing question of the predicativity of the Mahlo universe. A solution, called the extended predicative Mahlo universe, has been proposed by Kahle and Setzer in the context of explicit mathematics. It makes use of the collection of untyped terms (denoting partial functions) which are directly available in explicit mathematics but not in Martin-Löf type theory. In this paper, we overcome the obstacle of not having direct access to untyped terms in Martin-Löf type theory by formalizing explicit mathematics with an extended predicative Mahlo universe in Martin-Löf type theory with certain indexed inductive-recursive definitions. In this way, we can relate the predicativity question to the fundamental semantics of Martin-Löf type theory in terms of computation to canonical form. As a result, we get the first extended predicative definition of a Mahlo universe in Martin-Löf type theory. To this end, we first define an external variant of Kahle and Setzer’s internal extended predicative universe in explicit mathematics. This is then formalized in Martin-Löf type theory, where it becomes an internal extended predicative Mahlo universe. Although we make use of indexed inductive-recursive definitions that go beyond the type theory $\mathbf{IIRD}$ of indexed inductive-recursive definitions defined in previous work by the authors, we argue that they are constructive and predicative in Martin-Löf’s sense. The model construction has been type-checked in the proof assistant Agda.

Nonmonotonic inferences: Classical conclusions in an intuitionistic modal framework
Gisèle Fischer Servi
Journal of Logic and Computation, published 2023-05-08. DOI: https://doi.org/10.1093/logcom/exad020
Abstract: We introduce a conceptual foundation for nonmonotonic reasoning which integrates an intuitionistic plurimodal logic with classical logic. The need for a multilogical system arises once we understand that to legitimately infer less than certain conclusions, we make two different kinds of assessments: an external one (what are the alternatives) and an internal one (which logic should govern the set of beliefs). We exploit this scheme to capture in novel ways basic nonmonotonic systems such as autoepistemic logic, default logic, one of the best known interpretations of negation-as-failure, as well as some conditional logics of normality. Among the aims of this paper is to explain why there are substantial overlaps between these systems and also to settle some problematic features that are of concern in logical investigations on common sense reasoning.

{"title":"Correction to: Argumentation Frameworks with Attack Classification","authors":"","doi":"10.1093/logcom/exad026","DOIUrl":"https://doi.org/10.1093/logcom/exad026","url":null,"abstract":"","PeriodicalId":50162,"journal":{"name":"Journal of Logic and Computation","volume":" ","pages":""},"PeriodicalIF":0.7,"publicationDate":"2023-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48645337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new perspective on completeness and finitist consistency
P. G. Santos, W. Sieg, R. Kahle
Journal of Logic and Computation, published 2023-04-24. DOI: https://doi.org/10.1093/logcom/exad021
Abstract: In this paper, we study the metamathematics of consistent arithmetical theories $T$ (containing $\textsf{I}\varSigma_{1}$); we investigate numerical properties based on proof predicates that depend on numerations of the axioms.
Numeral Completeness. For every true (in $\mathbb{N}$) sentence $\vec{Q}\vec{x}.\varphi(\vec{x})$, with $\varphi(\vec{x})$ a $\varSigma_{1}(\textsf{I}\varSigma_1)$-formula, there is a numeration $\tau$ of the axioms of $T$ such that $\textsf{I}\varSigma_1 \vdash \vec{Q}\vec{x}.\,\texttt{Pr}_{\tau}(\ulcorner \varphi(\overset{\text{.}}{\vec{x}}) \urcorner)$, where $\texttt{Pr}_{\tau}$ is the provability predicate for the numeration $\tau$.
Numeral Consistency. If $T$ is consistent, there is a $\varSigma_{1}(\textsf{I}\varSigma_1)$-numeration $\tau$ of the axioms of $\textsf{I}\varSigma_{1}$ such that $\textsf{I}\varSigma_1 \vdash \forall\, x.\,\texttt{Pr}_{\tau}(\ulcorner \neg\,\textit{Prf}(\ulcorner \perp \urcorner, \overset{\text{.}}{x}) \urcorner)$, where $\textit{Prf}(x,y)$ denotes a $\varDelta_{1}(\textsf{I}\varSigma_1)$-definition of ‘$y$ is a $T$-proof of $x$’.
Finitist consistency is addressed by generalizing a result of Artemov:
Partial finitism. If $T$ is consistent, there is a primitive recursive function $f$ such that, for all $n \in \mathbb{N}$, $f(n)$ is the code of an $\textsf{I}\varSigma_{1}$-proof of $\neg\,\textit{Prf}(\ulcorner \perp \urcorner, \overline{n})$.
These results are not in conflict with Gödel’s Incompleteness Theorems. Rather, they allow us to extend their usual interpretation and show a deep connection to reflections in Hilbert’s last papers of 1931.