Pub Date: 2004-11-05 | DOI: 10.1109/LICS.2004.1319610
K. Chatterjee, T. Henzinger, M. Jurdzinski
In 2-player nonzero-sum games, Nash equilibria capture the options for rational behavior if each player attempts to maximize her payoff. In contrast to classical game theory, we consider lexicographic objectives: first, each player tries to maximize her own payoff, and then to minimize the opponent's payoff. Such objectives arise naturally in the verification of systems with multiple components. There, instead of proving that each component satisfies its specification no matter how the other components behave, it often suffices to prove that each component satisfies its specification provided that the other components satisfy their specifications. We say that a Nash equilibrium is secure if it is an equilibrium with respect to the lexicographic objectives of both players. We prove that in graph games with Borel objectives, which include the games that arise in verification, there may be several Nash equilibria, but there is always a unique maximal payoff profile of secure equilibria. We show how this equilibrium can be computed in the case of ω-regular objectives, and we characterize the memory requirements of strategies that achieve the equilibrium.
Title: Games with secure equilibria. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
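The lexicographic objectives can be seen in a finite toy game. The paper's results concern infinite graph games; the payoff matrix and function names below are our own illustration of the preference order, not the paper's construction.

```python
# Toy 2-player strategic-form game, invented for illustration.
# payoff[i][j] = (payoff to player 1, payoff to player 2) when
# player 1 picks row i and player 2 picks column j.
payoff = [[(1, 1), (0, 0)],
          [(1, 0), (0, 0)]]
rows, cols = len(payoff), len(payoff[0])

def is_nash(i, j):
    """Classical Nash: no player can strictly raise her own payoff
    by deviating alone."""
    p1, p2 = payoff[i][j]
    return (all(payoff[i2][j][0] <= p1 for i2 in range(rows)) and
            all(payoff[i][j2][1] <= p2 for j2 in range(cols)))

def is_secure(i, j):
    """Equilibrium with respect to the lexicographic objectives:
    no deviation raises a player's own payoff, or keeps it equal
    while lowering the opponent's payoff."""
    p1, p2 = payoff[i][j]
    for i2 in range(rows):
        q1, q2 = payoff[i2][j]
        if q1 > p1 or (q1 == p1 and q2 < p2):
            return False
    for j2 in range(cols):
        q1, q2 = payoff[i][j2]
        if q2 > p2 or (q2 == p2 and q1 < p1):
            return False
    return True
```

Here (0, 0) and (1, 0) are Nash equilibria but neither is secure: from (0, 0), player 1 can switch rows, keep her payoff of 1, and cut player 2's payoff to 0. The only secure profile in this toy game is (1, 1).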
We introduce a second-order theory V-Krom of bounded arithmetic for nondeterministic log space. This system is based on Grädel's characterization of NL by second-order Krom formulae with only universal first-order quantifiers, which in turn is motivated by the result that the decision problem for 2-CNF satisfiability is complete for coNL (and hence for NL). This theory has the style of the authors' theory V₁-Horn [APAL 124 (2003)] for polynomial time. Both theories use Zambella's elegant second-order syntax, and are axiomatized by a set 2-BASIC of simple formulae, together with a comprehension scheme for either second-order Horn formulae (in the case of V₁-Horn) or second-order Krom (2-CNF) formulae (in the case of V-Krom). Our main result for V-Krom is a formalization of the Immerman-Szelepcsényi theorem that NL is closed under complementation. This formalization is necessary to show that the NL functions are Σ₁^B-definable in V-Krom. The only other theory for NL in the literature relies on the Immerman-Szelepcsényi result rather than proving it.
Title: A second-order theory for NL. Authors: S. Cook, A. Kolokolova. DOI: 10.1109/LICS.2004.5. Pub Date: 2004-07-13. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
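The 2-CNF result behind Grädel's characterization can be made concrete: each clause a ∨ b contributes implications ¬a → b and ¬b → a, and the formula is unsatisfiable exactly when some variable and its negation share a strongly connected component of this implication graph; it is this reachability flavour that places the problem in NL. A minimal sketch, polynomial-time rather than logspace since it materializes the graph, with function names of our own choosing:

```python
from collections import defaultdict

def two_sat(n, clauses):
    """Decide satisfiability of a 2-CNF over variables 1..n.
    clauses: pairs of nonzero ints, where -x denotes the negation of x.
    Unsatisfiable iff some x and -x lie in the same strongly connected
    component (SCC) of the implication graph."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        for u, v in ((-a, b), (-b, a)):
            graph[u].append(v)
            rgraph[v].append(u)
    nodes = [v for x in range(1, n + 1) for v in (x, -x)]

    # Kosaraju pass 1: record DFS finish order (iterative DFS).
    visited, order = set(), []
    for s in nodes:
        if s in visited:
            continue
        visited.add(s)
        stack = [(s, iter(graph[s]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(graph[w])))
                    advanced = True
                    break
            if not advanced:
                order.append(v)
                stack.pop()

    # Kosaraju pass 2: flood-fill the reversed graph in reverse finish order.
    comp = {}
    for s in reversed(order):
        if s in comp:
            continue
        comp[s] = s
        stack = [s]
        while stack:
            v = stack.pop()
            for w in rgraph[v]:
                if w not in comp:
                    comp[w] = s
                    stack.append(w)

    return all(comp[x] != comp[-x] for x in range(1, n + 1))
```

For example, (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (x1 ∨ ¬x2) is satisfiable (set both variables true), while a unit clause x1 together with ¬x1 is not.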
Pub Date: 2004-07-13 | DOI: 10.1109/LICS.2004.1319628
D. Dams, Kedar S. Namjoshi
Abstraction is often essential to verify a program with model checking. Typically, a concrete source program with an infinite (or finite, but large) state space is reduced to a small, finite state, abstract program on which a correctness property can be checked. The fundamental question we investigate in this paper is whether such a reduction to finite state programs is always possible, for arbitrary branching time temporal properties. We begin by showing that existing abstraction frameworks are inherently incomplete for verifying purely existential or mixed universal-existential properties. We then propose a new, complete abstraction framework which is based on a class of focused transition systems (FTS's). The key new feature in FTS's is a way of "focusing" an abstract state to a set of more precise abstract states. While focus operators have been defined for specific contexts, this result shows their fundamental usefulness for proving non-universal properties. The constructive completeness proof provides linear size maximal models for properties expressed in logics such as CTL and the mu-calculus. This substantially improves upon known (worst-case) exponential size constructions for their universal fragments.
Title: The existence of finite abstractions for branching time model checking. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
This paper studies the existence of automatic presentations for various algebraic structures. The automatic Boolean algebras are characterised, and it is proven that the free Abelian group of infinite rank and many Fraïssé limits do not have automatic presentations. In particular, the countably infinite random graph and the universal partial order do not have automatic presentations. Furthermore, no infinite integral domain is automatic. The second topic of the paper is the isomorphism problem. We prove that the complexity of the isomorphism problem for the class of all automatic structures is Σ₁¹-complete.
Title: Automatic structures: richness and limitations. Authors: B. Khoussainov, A. Nies, S. Rubin, F. Stephan. DOI: 10.2168/LMCS-3(2:2)2007. Pub Date: 2004-07-13. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
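For contrast with these negative results, the textbook positive example is (ℕ, +): the graph of addition is recognized by a two-state automaton (the state is the carry bit) reading the binary encodings of x, y, z in parallel, least significant bit first. The sketch below, with names of our own choosing, simulates that automaton:

```python
def binary_add_automaton(x_bits, y_bits, z_bits):
    """Simulate a two-state finite automaton over bit-triples of
    (x, y, z), read least significant bit first with common padding,
    accepting exactly when z = x + y. The automaton's state is the
    carry bit."""
    carry = 0
    for a, b, c in zip(x_bits, y_bits, z_bits):
        s = a + b + carry
        if c != s % 2:   # output bit must match the sum bit
            return False
        carry = s // 2
    return carry == 0    # accept only if no carry is left over

def bits(n, width):
    """LSB-first binary encoding of n, padded to a common width."""
    return [(n >> i) & 1 for i in range(width)]
```

Because the relation is checked by a finite automaton over synchronized encodings, (ℕ, +) is automatic; the paper's point is how quickly this phenomenon runs out for richer structures.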
Given a formula Φ in quantifier-free Presburger arithmetic, it is well known that, if there is a satisfying solution to Φ, there is one whose size, measured in bits, is polynomially bounded in the size of Φ. In this paper, we consider a special class of quantifier-free Presburger formulas in which most linear constraints are separation (difference-bound) constraints, and the nonseparation constraints are sparse. This class has been observed to commonly occur in software verification problems. We derive a solution bound in terms of parameters characterizing the sparseness of linear constraints and the number of nonseparation constraints, in addition to traditional measures of formula size. In particular, the number of bits needed per integer variable is linear in the number of nonseparation constraints and logarithmic in the number and size of nonzero coefficients in them, but is otherwise independent of the total number of linear constraints in the formula. The derived bound can be used in a decision procedure based on instantiating integer variables over a finite domain and translating the input quantifier-free Presburger formula to an equisatisfiable Boolean formula, which is then checked using a Boolean satisfiability solver. We present empirical evidence indicating that this method can greatly outperform other decision procedures.
Title: Deciding quantifier-free Presburger formulas using parameterized solution bounds. Authors: S. Seshia, R. Bryant. DOI: 10.2168/LMCS-1(2:6)2005. Pub Date: 2004-07-13. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
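The shape of the decision procedure, derive a bound, instantiate each variable over a finite range, then check, can be sketched naively. A real implementation would bit-blast the bounded variables to an equisatisfiable Boolean formula for a SAT solver, and the bound would come from the paper's parameterized analysis; the enumeration, constraint encoding, and names below are our own illustration:

```python
from itertools import product

def sat_within_bound(constraints, num_vars, bound):
    """Check a conjunction of linear integer constraints by
    instantiating every variable over {-bound, ..., bound}.
    Each constraint is a pair (coeffs, rhs) meaning
    sum(coeffs[i] * x[i]) <= rhs. Returns a satisfying tuple
    of values, or None if no solution exists within the bound."""
    for xs in product(range(-bound, bound + 1), repeat=num_vars):
        if all(sum(c * x for c, x in zip(coeffs, xs)) <= rhs
               for coeffs, rhs in constraints):
            return xs
    return None
```

For instance, two separation constraints x0 - x1 <= -1 and x1 - x2 <= -1 plus one sparse nonseparation constraint x0 + x2 <= 3 are satisfiable well within bound 2; the paper's contribution is showing that for such sparse mixes the needed bound stays small regardless of how many separation constraints there are.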
Most parameterized complexity classes are defined in terms of a parameterized version of the Boolean satisfiability problem (the so-called weighted satisfiability problem). For example, Downey and Fellows's W-hierarchy is of this form. But there are also classes, for example the A-hierarchy, that are more naturally characterised in terms of model-checking problems for fragments of first-order logic. R. G. Downey et al. (1998) were the first to establish a connection between the two formalisms by giving a characterisation of the W-hierarchy in terms of first-order model-checking problems. We improve their result and then prove a similar correspondence between weighted satisfiability and model-checking problems for the A-hierarchy and the W-hierarchy. Thus we obtain very uniform characterisations of many of the most important parameterized complexity classes in both formalisms. Our results can be used to give new, simple proofs of some of the core results of structural parameterized complexity theory.
Title: Model-checking problems as a basis for parameterized intractability. Authors: J. Flum, Martin Grohe. DOI: 10.2168/LMCS-1(1:2)2005. Pub Date: 2004-07-13. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
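The weighted satisfiability problem mentioned above asks, for a parameter k, whether a propositional formula has a satisfying assignment of weight exactly k, i.e. with exactly k variables set to true. A brute-force sketch (names ours): the search over all C(n, k) weight-k assignments below is the trivial n^O(k) algorithm, and the W-hierarchy classifies the clause shapes for which this is conjectured not to improve to f(k)·poly(n) time.

```python
from itertools import combinations

def weighted_sat(clauses, num_vars, k):
    """Does the CNF (clauses are lists of nonzero ints, -x denoting
    the negation of variable x) have a satisfying assignment with
    exactly k of the variables 1..num_vars set to true?"""
    for true_vars in combinations(range(1, num_vars + 1), k):
        tv = set(true_vars)
        # A positive literal holds iff its variable is in tv;
        # a negative literal holds iff its variable is not in tv.
        if all(any((lit > 0) == (abs(lit) in tv) for lit in clause)
               for clause in clauses):
            return True
    return False
```

For example, (x1 ∨ x2) ∧ (¬x1 ∨ x3) has the weight-1 solution {x2} but no weight-0 solution.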
Pub Date: 2004-07-13 | DOI: 10.1109/LICS.2004.1319603
D. Leivant
Total correctness assertions (TCAs) have long been considered a natural formalization of successful program termination. However, research dating back to the 1980s suggests that validity of TCAs is a notion of limited interest; we corroborate this by proving compactness and Herbrand properties for the valid TCAs, defining in passing a new sound, complete, and syntax-directed deductive system for TCAs. It follows that proving TCAs whose truth depends on underlying inductive data-types is impossible in logics of programs that are sound for all structures, such as dynamic logic based on Segerberg's PDL, even when augmented with powerful first-order theories like Peano arithmetic. Harel's convergence rule bypasses this difficulty, but is methodologically and conceptually problematic, in addition to being unsound for general validity. We propose instead to bind variables to inductive data via DL's box operator, leading to an alternative formalization of termination assertions, which we dub inductive TCA (ITCA). We observe that a TCA is provable in Harel's DL exactly when the corresponding ITCA is provable in Segerberg's DL, thereby showing that the convergence rule is not foundationally or practically necessary. We also show that validity of ITCAs is directly reducible to validity of partial correctness assertions, confirming the foundational importance of the latter.
Title: Proving termination assertions in dynamic logics. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
Pub Date: 2004-07-13 | DOI: 10.1109/LICS.2004.1319613
Y. Akama, S. Berardi, S. Hayashi, U. Kohlenbach
The topic of this paper is relative constructivism. We are concerned with classifying nonconstructive principles from the constructive viewpoint. We compare, up to provability in intuitionistic arithmetic, subclassical principles like Markov's principle, (a function-free version of) weak König's lemma, Post's theorem, excluded middle for simply existential and simply universal statements, and many others. Our motivations are rooted in the experience of one of the authors with extended program extraction and of another author with bound extraction from classical proofs.
Title: An arithmetical hierarchy of the law of excluded middle and related principles. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
Pub Date: 2004-07-13 | DOI: 10.1109/LICS.2004.1319597
Michael Baldamus, J. Parrow, B. Victor
We present a concise and natural encoding of the spi-calculus into the more basic π-calculus and establish its correctness with respect to a formal notion of testing. This is particularly relevant for security protocols modelled in spi, since the tests can be viewed as adversaries. The translation has been implemented in a prototype tool. As a consequence, protocols can be described in the spi-calculus and analysed with the emerging flora of tools already available for the π-calculus. The translation also entails a more detailed operational understanding of spi, since high-level constructs like encryption are encoded at a well-known lower level. The formal correctness proof is nontrivial and interesting in its own right; so-called context bisimulations and new techniques for compositionality make the proof simpler and more concise.
Title: Spi calculus translated to π-calculus preserving may-tests. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.
Pub Date: 2004-07-13 | DOI: 10.1109/LICS.2004.1319615
Carsten Führmann, D. Pym
It is well-known that weakening and contraction cause naive categorical models of the classical sequent calculus to collapse to Boolean lattices. We introduce sound and complete models that avoid this collapse by interpreting cut-reduction by a partial order between morphisms. We provide concrete examples of such models by applying the geometry-of-interaction construction to quantaloids with finite biproducts, and show how these models illuminate cut reduction in the presence of weakening and contraction. Our models make no commitment to any translation of classical logic into intuitionistic logic and distinguish non-deterministic choices of cut-elimination.
Title: On the geometry of interaction for classical logic. In: Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004.