Duality in Computer Science. M. Gehrke. LICS 2016. DOI: https://doi.org/10.1145/2933575.2934575

This is a paper on Stone duality in computer science with special focus on topics with applications in formal language theory. In Section 2 we give a general overview of Stone duality in its various forms: for Boolean algebras, distributive lattices, and frames. For distributive lattices, we discuss both Stone and Priestley duality. We identify how to move between the different dualities and which dual spaces carry the Scott topology. We then focus on three themes.

The first theme is additional operations on distributive lattices and Boolean algebras. Additional operations arise in denotational semantics in the form of predicate transformers. In verification they occur in the form of modal operators. They play an essential rôle in Eilenberg’s variety theorem in the form of quotient operations. Quotient operations are unary instantiations of residual operators which are dual to the operations in the profinite algebras of algebraic language theory. We discuss additional operations in Section 3.

The second theme is that of hyperspaces, that is, spaces of subsets of an underlying space. Some classes of algebras may be seen as the class of algebras for a functor. In the case of predicate transformers the dual functors are hyperspace constructions such as the Plotkin, Smyth, and Hoare powerdomain constructions. The algebras-for-a-functor point of view is central to the coalgebraic study of modal logic and to the solution of domain equations. In the algebraic theory of formal languages various hyperspace-related product constructions, such as block and Schützenberger products, are used to study complexity hierarchies. We describe a construction, similar to the Schützenberger product, which is dual to adding a layer of quantification to formulas describing formal languages. We discuss hyperspaces in Section 4.

The final theme is that of "equations". These are pairs of elements of dual spaces. They arise via the duality between subalgebras and quotient spaces and have provided one of the most successful tools for obtaining decidability results for classes of regular languages. The perspective provided by duality allows us to obtain a notion of equations for the study of arbitrary formal languages. Equations in language theory is the topic of Section 5.
Quantifier Free Definability on Infinite Algebras. B. Khoussainov. LICS 2016. DOI: https://doi.org/10.1145/2933575.2934572

An operation f : Aⁿ → A on the domain A of an algebra 𝒜 is definable if there exists a first-order formula ϕ(x̄, y) with parameters from A such that for all ā ∈ Aⁿ and b ∈ A we have f(ā) = b iff 𝒜 ⊨ ϕ(ā, b). The goal of this paper is to study definability of operations by quantifier-free formulas on countably infinite algebras from the computability and model-theoretic definability points of view.
Church Meets Cook and Levin. Damiano Mazza. LICS 2016. DOI: https://doi.org/10.1145/2933575.2934541

The Cook-Levin theorem (the statement that SAT is NP-complete) is a central result in structural complexity theory. Is it possible to prove it using the lambda-calculus instead of Turing machines? We address this question via the notion of affine approximation, which offers the possibility of using order-theoretic arguments, in contrast to the machine-level arguments employed in standard proofs. However, due to the size explosion problem in the lambda-calculus (a linear number of reduction steps may generate exponentially big terms), a naive transliteration of the proof of the Cook-Levin theorem fails. We propose to fix this mismatch using the author’s recently introduced parsimonious lambda-calculus, reproving the Cook-Levin theorem and several related results in this higher-order framework. We also present an interesting relationship between approximations and intersection types, and discuss potential applications.
Minimization of Symbolic Tree Automata. Loris D'Antoni, Margus Veanes. LICS 2016. DOI: https://doi.org/10.1145/2933575.2933578

Symbolic tree automata allow transitions to carry predicates over rich alphabet theories, such as linear arithmetic, and therefore extend finite tree automata to operate over infinite alphabets, such as the set of rational numbers. Existing tree automata algorithms rely on the alphabet being finite, and generalizing them to the symbolic setting is not a trivial task. In this paper we study the problem of minimizing symbolic tree automata. First, we formally define and prove the properties of minimality in the symbolic setting. Second, we lift existing minimization algorithms to symbolic tree automata. Third, we present a new algorithm based on the following idea: the problem of minimizing symbolic tree automata can be reduced to the problem of minimizing symbolic (string) automata by encoding the tree structure as part of the alphabet theory. We implement and evaluate all our algorithms against existing implementations and show that the symbolic algorithms scale to large alphabets and can minimize automata over complex alphabet theories.
Graphs of relational structures: restricted types. A. Bulatov. LICS 2016. DOI: https://doi.org/10.1145/2933575.2933604

In our LICS 2004 paper we introduced an approach to the study of the local structure of finite algebras and relational structures that aims at applications in the Constraint Satisfaction Problem (CSP). This approach involves a graph associated with an algebra $\mathbb{A}$ or a relational structure A, whose vertices are the elements of $\mathbb{A}$ (or A); the edges represent subsets of $\mathbb{A}$ such that the restriction of some term operation of $\mathbb{A}$ is ‘good’ on the subset, that is, acts as an operation of one of three types: semilattice, majority, or affine. In this paper we significantly refine and advance this approach. In particular, we prove certain connectivity and rectangularity properties of relations over algebras related to components of the graph connected by semilattice and affine edges. We also prove a result similar to 2-decomposition of relations invariant under a majority operation, only here we do not impose any restrictions on the relation. These results allow us to give a new, somewhat more intuitive proof of the bounded width theorem: the CSP over an algebra $\mathbb{A}$ has bounded width if and only if $\mathbb{A}$ does not contain affine edges. Actually, this result shows that bounded width implies width (2,3). We also consider algebras with edges from a restricted set of types. In particular, it can be proved that type restrictions are preserved under the standard algebraic constructions. Finally, we prove that algebras without semilattice edges have few subalgebras of powers, that is, the CSP over such algebras can also be solved in polynomial time.
The Probabilistic Model Checking Landscape. J. Katoen. LICS 2016. DOI: https://doi.org/10.1145/2933575.2934574

Randomization is a key element in sequential and distributed computing. Reasoning about randomized algorithms is highly non-trivial. In the 1980s, this initiated the first proof methods, logics, and model-checking algorithms. The field of probabilistic verification has developed considerably since then. This paper surveys the algorithmic verification of probabilistic models, in particular probabilistic model checking. We provide an informal account of the main models, the underlying algorithms, applications from reliability and dependability analysis (and beyond), and describe recent developments towards automated parameter synthesis.
A constructive function-theoretic approach to topological compactness. I. Petrakis. LICS 2016. DOI: https://doi.org/10.1145/2933575.2933582

We introduce 2-compactness, a constructive function-theoretic alternative to topological compactness, based on the notions of Bishop space and Bishop morphism, which are constructive function-theoretic alternatives to topological space and continuous function, respectively. We show that the notion of Bishop morphism is reduced to uniform continuity in important cases, overcoming one of the obstacles in developing constructive general topology posed by Bishop. We prove that 2-compactness generalizes metric compactness, namely that the uniformly continuous real-valued functions on a compact metric space form a 2-compact Bishop topology. Among other properties of 2-compact Bishop spaces, the countable Tychonoff compactness theorem is proved for them. We work within BISH*, Bishop’s informal system of constructive mathematics BISH equipped with inductive definitions with rules of countably many premises, a system strongly connected to Martin-Löf’s Type Theory.
Near-Optimal Lower Bounds on Quantifier Depth and Weisfeiler–Leman Refinement Steps. Christoph Berkholz, Jakob Nordström. LICS 2016. DOI: https://doi.org/10.1145/2933575.2934560

We prove near-optimal trade-offs for quantifier depth versus number of variables in first-order logic by exhibiting pairs of n-element structures that can be distinguished by a k-variable first-order sentence but where every such sentence requires quantifier depth at least $n^{\Omega(k/\log k)}$. Our trade-offs also apply to first-order counting logic, and by the known connection to the k-dimensional Weisfeiler–Leman algorithm imply near-optimal lower bounds on the number of refinement iterations. A key component in our proof is the hardness condensation technique recently introduced by [Razborov ’16] in the context of proof complexity. We apply this method to reduce the domain size of relational structures while maintaining the quantifier depth required to distinguish them.

Categories and Subject Descriptors: F.4.1 [Mathematical Logic]: Computational Logic, Model theory; F.2.3 [Tradeoffs between Complexity Measures]
Understanding Gentzen and Frege Systems for QBF. Olaf Beyersdorff, J. Pich. LICS 2016. DOI: https://doi.org/10.1145/2933575.2933597

Recently Beyersdorff, Bonacina, and Chew [10] introduced a natural class of Frege systems for quantified Boolean formulas (QBF) and showed strong lower bounds for restricted versions of these systems. Here we provide a comprehensive analysis of the new extended Frege system from [10], denoted EF + ∀red, which is a natural extension of classical extended Frege EF.

Our main results are the following: Firstly, we prove that the standard Gentzen-style system $\text{G}_1^*$ p-simulates EF + ∀red and that $\text{G}_1^*$ is strictly stronger under standard complexity-theoretic hardness assumptions.

Secondly, we show a correspondence of EF + ∀red to bounded arithmetic: EF + ∀red can be seen as the non-uniform propositional version of intuitionistic $S_2^1$. Specifically, intuitionistic $S_2^1$ proofs of arbitrary statements in prenex form translate to polynomial-size EF + ∀red proofs, and EF + ∀red is in a sense the weakest system with this property.

Finally, we show that unconditional lower bounds for EF + ∀red would imply either a major breakthrough in circuit complexity or in classical proof complexity, and in fact the converse implications hold as well. Therefore, the system EF + ∀red naturally unites the central problems from circuit and proof complexity.

Technically, our results rest on a formalised strategy extraction theorem for EF + ∀red akin to witnessing in intuitionistic $S_2^1$ and a normal form for EF + ∀red proofs.
Coinduction All the Way Up. D. Pous. LICS 2016. DOI: https://doi.org/10.1145/2933575.2934564

We revisit coinductive proof principles from a lattice-theoretic point of view. By associating to any monotone function a function which we call the companion, we give a new presentation both of Knaster-Tarski’s seminal result and of the more recent theory of enhancements of the coinductive proof method (up-to techniques).

The resulting theory encompasses parameterized coinduction, as recently proposed by Hur et al., and second-order reasoning, i.e., the ability to reason coinductively about the enhancements themselves. It moreover resolves a historical peculiarity about up-to context techniques.

Based on these results, we present an open-ended proof system allowing one to perform proofs on-the-fly and to neatly separate inductive and coinductive phases.