Fair reactive programming
Andrew Cave, Francisco Ferreira, P. Panangaden, B. Pientka
Functional Reactive Programming (FRP) models reactive systems with events and signals, which have previously been observed to correspond to the "eventually" and "always" modalities of linear temporal logic (LTL). In this paper, we define a constructive variant of LTL with least fixed point and greatest fixed point operators in the spirit of the modal mu-calculus, and give it a proofs-as-programs interpretation as a foundational calculus for reactive programs. Previous work emphasized the propositions-as-types part of the correspondence between LTL and FRP; here we emphasize the proofs-as-programs part by employing structural proof theory. We show that the type system is expressive enough to enforce liveness properties such as the fairness of schedulers and the eventual delivery of results. We illustrate programming in this calculus using (co)iteration operators. We prove type preservation of our operational semantics, which guarantees that our programs are causal. We also give a proof of strong normalization which provides justification that our programs are productive and that they satisfy liveness properties derived from their types.
{"title":"Fair reactive programming","authors":"Andrew Cave, Francisco Ferreira, P. Panangaden, B. Pientka","doi":"10.1145/2535838.2535881","DOIUrl":"https://doi.org/10.1145/2535838.2535881","url":null,"abstract":"Functional Reactive Programming (FRP) models reactive systems with events and signals, which have previously been observed to correspond to the \"eventually\" and \"always\" modalities of linear temporal logic (LTL). In this paper, we define a constructive variant of LTL with least fixed point and greatest fixed point operators in the spirit of the modal mu-calculus, and give it a proofs-as-programs interpretation as a foundational calculus for reactive programs. Previous work emphasized the propositions-as-types part of the correspondence between LTL and FRP; here we emphasize the proofs-as-programs part by employing structural proof theory. We show that the type system is expressive enough to enforce liveness properties such as the fairness of schedulers and the eventual delivery of results. We illustrate programming in this calculus using (co)iteration operators. We prove type preservation of our operational semantics, which guarantees that our programs are causal. We give also a proof of strong normalization which provides justification that our programs are productive and that they satisfy liveness properties derived from their types.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"94 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83918966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parametric effect monads and semantics of effect systems
Shin-ya Katsumata
We study fundamental properties of a generalisation of monads called parametric effect monads, and apply it to the interpretation of general effect systems whose effects have sequential composition operators. We show that parametric effect monads admit analogues of the structures and concepts that exist for monads, such as Kleisli triples, the state monad and the continuation monad, Plotkin and Power's algebraic operations, and the categorical ⊤⊤-lifting. We also show a systematic method to generate both effects and a parametric effect monad from a monad morphism. Finally, we introduce two effect systems with explicit and implicit subeffecting, and discuss their denotational semantics and the soundness of effect systems.
{"title":"Parametric effect monads and semantics of effect systems","authors":"Shin-ya Katsumata","doi":"10.1145/2535838.2535846","DOIUrl":"https://doi.org/10.1145/2535838.2535846","url":null,"abstract":"We study fundamental properties of a generalisation of monad called parametric effect monad, and apply it to the interpretation of general effect systems whose effects have sequential composition operators. We show that parametric effect monads admit analogues of the structures and concepts that exist for monads, such as Kleisli triples, the state monad and the continuation monad, Plotkin and Power's algebraic operations, and the categorical ┬┬-lifting. We also show a systematic method to generate both effects and a parametric effect monad from a monad morphism. Finally, we introduce two effect systems with explicit and implicit subeffecting, and discuss their denotational semantics and the soundness of effect systems.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81015913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A relationally parametric model of dependent type theory
R. Atkey, Neil Ghani, Patricia Johann
Reynolds' theory of relational parametricity captures the invariance of polymorphically typed programs under change of data representation. Reynolds' original work exploited the typing discipline of the polymorphically typed lambda-calculus System F, but there is now considerable interest in extending relational parametricity to type systems that are richer and more expressive than that of System F. This paper constructs parametric models of predicative and impredicative dependent type theory. The significance of our models is twofold. Firstly, in the impredicative variant we are able to deduce the existence of initial algebras for all indexed functors. To our knowledge, ours is the first account of parametricity for dependent types that is able to lift the useful deduction of the existence of initial algebras in parametric models of System F to the dependently typed setting. Secondly, our models offer conceptual clarity by uniformly expressing relational parametricity for dependent types in terms of reflexive graphs, which allows us to unify the interpretations of types and kinds, instead of taking the relational interpretation of types as a primitive notion. Expressing our model in terms of reflexive graphs ensures that it has canonical choices for the interpretations of the standard type constructors of dependent type theory, except for the interpretation of the universe of small types, where we formulate a refined interpretation tailored for relational parametricity. Moreover, our reflexive graph model opens the door to generalisations of relational parametricity, for example to higher-dimensional relational parametricity.
{"title":"A relationally parametric model of dependent type theory","authors":"R. Atkey, Neil Ghani, Patricia Johann","doi":"10.1145/2535838.2535852","DOIUrl":"https://doi.org/10.1145/2535838.2535852","url":null,"abstract":"Reynolds' theory of relational parametricity captures the invariance of polymorphically typed programs under change of data representation. Reynolds' original work exploited the typing discipline of the polymorphically typed lambda-calculus System F, but there is now considerable interest in extending relational parametricity to type systems that are richer and more expressive than that of System F. This paper constructs parametric models of predicative and impredicative dependent type theory. The significance of our models is twofold. Firstly, in the impredicative variant we are able to deduce the existence of initial algebras for all indexed=functors. To our knowledge, ours is the first account of parametricity for dependent types that is able to lift the useful deduction of the existence of initial algebras in parametric models of System F to the dependently typed setting. Secondly, our models offer conceptual clarity by uniformly expressing relational parametricity for dependent types in terms of reflexive graphs, which allows us to unify the interpretations of types and kinds, instead of taking the relational interpretation of types as a primitive notion. Expressing our model in terms of reflexive graphs ensures that it has canonical choices for the interpretations of the standard type constructors of dependent type theory, except for the interpretation of the universe of small types, where we formulate a refined interpretation tailored for relational parametricity. Moreover, our reflexive graph model opens the door to generalisations of relational parametricity, for example to higher-dimensional relational parametricity.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78854012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parametric completeness for separation theories
J. Brotherston, Jules Villard
In this paper, we close the logical gap between provability in the logic BBI, which is the propositional basis for separation logic, and validity in an intended class of separation models, as employed in applications of separation logic such as program verification. An intended class of separation models is usually specified by a collection of axioms describing the specific model properties that are expected to hold, which we call a separation theory. Our main contributions are as follows. First, we show that several typical properties of separation theories are not definable in BBI. Second, we show that these properties become definable in a suitable hybrid extension of BBI, obtained by adding a theory of naming to BBI in the same way that hybrid logic extends normal modal logic. The binder-free extension captures most of the properties we consider, and the full extension HyBBI(↓) with the usual ↓ binder of hybrid logic covers all these properties. Third, we present an axiomatic proof system for our hybrid logic whose extension with any set of "pure" axioms is sound and complete with respect to the models satisfying those axioms. As a corollary of this general result, we obtain, in a parametric manner, a sound and complete axiomatic proof system for any separation theory from our considered class. To the best of our knowledge, this class includes all separation theories appearing in the published literature.
{"title":"Parametric completeness for separation theories","authors":"J. Brotherston, Jules Villard","doi":"10.1145/2535838.2535844","DOIUrl":"https://doi.org/10.1145/2535838.2535844","url":null,"abstract":"In this paper, we close the logical gap between provability in the logic BBI, which is the propositional basis for separation logic, and validity in an intended class of separation models, as employed in applications of separation logic such as program verification. An intended class of separation models is usually specified by a collection of axioms describing the specific model properties that are expected to hold, which we call a separation theory. Our main contributions are as follows. First, we show that several typical properties of separation theories are not definable in BBI. Second, we show that these properties become definable in a suitable hybrid extension of BBI, obtained by adding a theory of naming to BBI in the same way that hybrid logic extends normal modal logic. The binder-free extension captures most of the properties we consider, and the full extension HyBBI(↓) with the usual ↓ binder of hybrid logic covers all these properties. Third, we present an axiomatic proof system for our hybrid logic whose extension with any set of \"pure\" axioms is sound and complete with respect to the models satisfying those axioms. As a corollary of this general result, we obtain, in a parametric manner, a sound and complete axiomatic proof system for any separation theory from our considered class. To the best of our knowledge, this class includes all separation theories appearing in the published literature.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73502774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authenticated data structures, generically
Andrew K. Miller, M. Hicks, Jonathan Katz, E. Shi
An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations. This paper presents a generic method, using a simple extension to an ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual and it is compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented λ• as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black+ trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.
{"title":"Authenticated data structures, generically","authors":"Andrew K. Miller, M. Hicks, Jonathan Katz, E. Shi","doi":"10.1145/2535838.2535851","DOIUrl":"https://doi.org/10.1145/2535838.2535851","url":null,"abstract":"An authenticated data structure (ADS) is a data structure whose operations can be carried out by an untrusted prover, the results of which a verifier can efficiently check as authentic. This is done by having the prover produce a compact proof that the verifier can check along with each operation's result. ADSs thus support outsourcing data maintenance and processing tasks to untrusted servers without loss of integrity. Past work on ADSs has focused on particular data structures (or limited classes of data structures), one at a time, often with support only for particular operations. This paper presents a generic method, using a simple extension to a ML-like functional programming language we call λ• (lambda-auth), with which one can program authenticated operations over any data structure defined by standard type constructors, including recursive types, sums, and products. The programmer writes the data structure largely as usual and it is compiled to code to be run by the prover and verifier. Using a formalization of λ• we prove that all well-typed λ• programs result in code that is secure under the standard cryptographic assumption of collision-resistant hash functions. We have implemented λ• as an extension to the OCaml compiler, and have used it to produce authenticated versions of many interesting data structures including binary search trees, red-black+ trees, skip lists, and more. Performance experiments show that our approach is efficient, giving up little compared to the hand-optimized data structures developed previously.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"190 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72762028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracing compilation by abstract interpretation
Stefano Dissegna, F. Logozzo, Francesco Ranzato
Tracing just-in-time compilation is a popular compilation schema for the efficient implementation of dynamic languages, which is commonly used for JavaScript, Python, and PHP. It relies on two key ideas. First, it monitors the execution of the program to detect so-called hot paths, i.e., the most frequently executed paths. Then, it uses some store information available at runtime to optimize hot paths. The result is a residual program where the optimized hot paths are guarded by sufficient conditions ensuring the equivalence of the optimized path and the original program. The residual program is persistently mutated during its execution, e.g., to add new optimized paths or to merge existing paths. Tracing compilation is thus fundamentally different than traditional static compilation. Nevertheless, despite the remarkable practical success of tracing compilation, very little is known about its theoretical foundations. We formalize tracing compilation of programs using abstract interpretation. The monitoring (viz., hot path detection) phase corresponds to an abstraction of the trace semantics that captures the most frequent occurrences of sequences of program points together with an abstraction of their corresponding stores, e.g., a type environment. The optimization (viz., residual program generation) phase corresponds to a transform of the original program that preserves its trace semantics up to a given observation as modeled by some abstraction. We provide a generic framework to express dynamic optimizations and to prove them correct. We instantiate it to prove the correctness of dynamic type specialization. We show that our framework is more general than a recent model of tracing compilation introduced in POPL 2011 by Guo and Palsberg (based on operational bisimulations). In our model we can naturally express hot path reentrance and common optimizations like dead-store elimination, which are either excluded or unsound in Guo and Palsberg's framework.
{"title":"Tracing compilation by abstract interpretation","authors":"Stefano Dissegna, F. Logozzo, Francesco Ranzato","doi":"10.1145/2535838.2535866","DOIUrl":"https://doi.org/10.1145/2535838.2535866","url":null,"abstract":"Tracing just-in-time compilation is a popular compilation schema for the efficient implementation of dynamic languages, which is commonly used for JavaScript, Python, and PHP. It relies on two key ideas. First, it monitors the execution of the program to detect so-called hot paths, i.e., the most frequently executed paths. Then, it uses some store information available at runtime to optimize hot paths. The result is a residual program where the optimized hot paths are guarded by sufficient conditions ensuring the equivalence of the optimized path and the original program. The residual program is persistently mutated during its execution, e.g., to add new optimized paths or to merge existing paths. Tracing compilation is thus fundamentally different than traditional static compilation. Nevertheless, despite the remarkable practical success of tracing compilation, very little is known about its theoretical foundations. We formalize tracing compilation of programs using abstract interpretation. The monitoring (viz., hot path detection) phase corresponds to an abstraction of the trace semantics that captures the most frequent occurrences of sequences of program points together with an abstraction of their corresponding stores, e.g., a type environment. The optimization (viz., residual program generation) phase corresponds to a transform of the original program that preserves its trace semantics up to a given observation as modeled by some abstraction. We provide a generic framework to express dynamic optimizations and to prove them correct. We instantiate it to prove the correctness of dynamic type specialization. We show that our framework is more general than a recent model of tracing compilation introduced in POPL~2011 by Guo and Palsberg (based on operational bisimulations). In our model we can naturally express hot path reentrance and common optimizations like dead-store elimination, which are either excluded or unsound in Guo and Palsberg's framework.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74013327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A constraint-based approach to solving games on infinite graphs
Tewodros A. Beyene, Swarat Chaudhuri, C. Popeea, A. Rybalchenko
We present a constraint-based approach to computing winning strategies in two-player graph games over the state space of infinite-state programs. Such games have numerous applications in program verification and synthesis, including the synthesis of infinite-state reactive programs and branching-time verification of infinite-state programs. Our method handles games with winning conditions given by safety, reachability, and general Linear Temporal Logic (LTL) properties. For each property class, we give a deductive proof rule that --- provided a symbolic representation of the game players --- describes a winning strategy for a particular player. Our rules are sound and relatively complete. We show that these rules can be automated by using an off-the-shelf Horn constraint solver that supports existential quantification in clause heads. The practical promise of the rules is demonstrated through several case studies, including a challenging "Cinderella-Stepmother game" that allows infinite alternation of discrete and continuous choices by two players, as well as examples derived from prior work on program repair and synthesis.
{"title":"A constraint-based approach to solving games on infinite graphs","authors":"Tewodros A. Beyene, Swarat Chaudhuri, C. Popeea, A. Rybalchenko","doi":"10.1145/2535838.2535860","DOIUrl":"https://doi.org/10.1145/2535838.2535860","url":null,"abstract":"We present a constraint-based approach to computing winning strategies in two-player graph games over the state space of infinite-state programs. Such games have numerous applications in program verification and synthesis, including the synthesis of infinite-state reactive programs and branching-time verification of infinite-state programs. Our method handles games with winning conditions given by safety, reachability, and general Linear Temporal Logic (LTL) properties. For each property class, we give a deductive proof rule that --- provided a symbolic representation of the game players --- describes a winning strategy for a particular player. Our rules are sound and relatively complete. We show that these rules can be automated by using an off-the-shelf Horn constraint solver that supports existential quantification in clause heads. The practical promise of the rules is demonstrated through several case studies, including a challenging \"Cinderella-Stepmother game\" that allows infinite alternation of discrete and continuous choices by two players, as well as examples derived from prior work on program repair and synthesis.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"60 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76016603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probabilistic relational verification for cryptographic implementations
G. Barthe, C. Fournet, B. Grégoire, Pierre-Yves Strub, N. Swamy, Santiago Zanella Béguelin
Relational program logics have been used for mechanizing formal proofs of various cryptographic constructions. With an eye towards scaling these successes towards end-to-end security proofs for implementations of distributed systems, we present RF*, a relational extension of F*, a general-purpose higher-order stateful programming language with a verification system based on refinement types. The distinguishing feature of RF* is a relational Hoare logic for a higher-order, stateful, probabilistic language. Through careful language design, we adapt the F* typechecker to generate both classic and relational verification conditions, and to automatically discharge their proofs using an SMT solver. Thus, we are able to benefit from the existing features of F*, including its abstraction facilities for modular reasoning about program fragments. We evaluate RF* experimentally by programming a series of cryptographic constructions and protocols, and by verifying their security properties, ranging from information flow to unlinkability, integrity, and privacy. Moreover, we validate the design of RF* by formalizing in Coq a core probabilistic λ-calculus and a relational refinement type system, and proving the soundness of the latter against a denotational semantics of the probabilistic λ-calculus.
{"title":"Probabilistic relational verification for cryptographic implementations","authors":"G. Barthe, C. Fournet, B. Grégoire, Pierre-Yves Strub, N. Swamy, Santiago Zanella Béguelin","doi":"10.1145/2535838.2535847","DOIUrl":"https://doi.org/10.1145/2535838.2535847","url":null,"abstract":"Relational program logics have been used for mechanizing formal proofs of various cryptographic constructions. With an eye towards scaling these successes towards end-to-end security proofs for implementations of distributed systems, we present RF*, a relational extension of F*, a general-purpose higher-order stateful programming language with a verification system based on refinement types. The distinguishing feature of F* is a relational Hoare logic for a higher-order, stateful, probabilistic language. Through careful language design, we adapt the F* typechecker to generate both classic and relational verification conditions, and to automatically discharge their proofs using an SMT solver. Thus, we are able to benefit from the existing features of F*, including its abstraction facilities for modular reasoning about program fragments. We evaluate RF* experimentally by programming a series of cryptographic constructions and protocols, and by verifying their security properties, ranging from information flow to unlinkability, integrity, and privacy. Moreover, we validate the design of RF* by formalizing in Coq a core probabilistic λ calculus and a relational refinement type system and proving the soundness of the latter against a denotational semantics of the probabilistic lambda λ calculus.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"99 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76544968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A type-directed abstraction refinement approach to higher-order model checking
S. Ramsay, R. Neatherway, C. Ong
The trivial-automaton model checking problem for higher-order recursion schemes has become a widely studied object in connection with the automatic verification of higher-order programs. The problem is formidably hard: despite considerable progress in recent years, no decision procedures have been demonstrated to scale robustly beyond recursion schemes that comprise more than a few hundred rewrite rules. We present a new, fixed-parameter polynomial time algorithm, based on a novel, type directed form of abstraction refinement in which behaviours of a scheme are distinguished by the abstraction according to the intersection types that they inhabit (the properties that they satisfy). Unlike other intersection type approaches, our algorithm reasons both about acceptance by the property automaton and acceptance by its dual, simultaneously, in order to minimize the amount of work done by converging on the solution to a problem instance from both sides. We have constructed Preface, a prototype implementation of the algorithm, and assembled an extensive body of evidence to demonstrate empirically that the algorithm readily scales to recursion schemes of several thousand rules, well beyond the capabilities of current state-of-the-art higher-order model checkers.
{"title":"A type-directed abstraction refinement approach to higher-order model checking","authors":"S. Ramsay, R. Neatherway, C. Ong","doi":"10.1145/2535838.2535873","DOIUrl":"https://doi.org/10.1145/2535838.2535873","url":null,"abstract":"The trivial-automaton model checking problem for higher-order recursion schemes has become a widely studied object in connection with the automatic verification of higher-order programs. The problem is formidably hard: despite considerable progress in recent years, no decision procedures have been demonstrated to scale robustly beyond recursion schemes that comprise more than a few hundred rewrite rules. We present a new, fixed-parameter polynomial time algorithm, based on a novel, type directed form of abstraction refinement in which behaviours of a scheme are distinguished by the abstraction according to the intersection types that they inhabit (the properties that they satisfy). Unlike other intersection type approaches, our algorithm reasons both about acceptance by the property automaton and acceptance by its dual, simultaneously, in order to minimize the amount of work done by converging on the solution to a problem instance from both sides. We have constructed Preface, a prototype implementation of the algorithm, and assembled an extensive body of evidence to demonstrate empirically that the algorithm readily scales to recursion schemes of several thousand rules, well beyond the capabilities of current state-of-the-art higher-order model checkers.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78731044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Backpack: retrofitting Haskell with interfaces
S. Kilpatrick, Derek Dreyer, S. Jones, S. Marlow
Module systems like that of Haskell permit only a weak form of modularity in which module implementations depend directly on other implementations and must be processed in dependency order. Module systems like that of ML, on the other hand, permit a stronger form of modularity in which explicit interfaces express assumptions about dependencies, and each module can be typechecked and reasoned about independently. In this paper, we present Backpack, a new language for building separately-typecheckable *packages* on top of a weak module system like Haskell's. The design of Backpack is inspired by the MixML module calculus of Rossberg and Dreyer, but differs significantly in detail. Like MixML, Backpack supports explicit interfaces and recursive linking. Unlike MixML, Backpack supports a more flexible applicative semantics of instantiation. Moreover, its design is motivated less by foundational concerns and more by the practical concern of integration into Haskell, which has led us to advocate simplicity---in both the syntax and semantics of Backpack---over raw expressive power. The semantics of Backpack packages is defined by elaboration to sets of Haskell modules and binary interface files, thus showing how Backpack maintains interoperability with Haskell while extending it with separate typechecking. Lastly, although Backpack is geared toward integration into Haskell, its design and semantics are largely agnostic with respect to the details of the underlying core language.
{"title":"Backpack: retrofitting Haskell with interfaces","authors":"S. Kilpatrick, Derek Dreyer, S. Jones, S. Marlow","doi":"10.1145/2535838.2535884","DOIUrl":"https://doi.org/10.1145/2535838.2535884","url":null,"abstract":"Module systems like that of Haskell permit only a weak form of modularity in which module implementations depend directly on other implementations and must be processed in dependency order. Module systems like that of ML, on the other hand, permit a stronger form of modularity in which explicit interfaces express assumptions about dependencies, and each module can be typechecked and reasoned about independently. In this paper, we present Backpack, a new language for building separately-typecheckable *packages* on top of a weak module system like Haskell's. The design of Backpack is inspired by the MixML module calculus of Rossberg and Dreyer, but differs significantly in detail. Like MixML, Backpack supports explicit interfaces and recursive linking. Unlike MixML, Backpack supports a more flexible applicative semantics of instantiation. Moreover, its design is motivated less by foundational concerns and more by the practical concern of integration into Haskell, which has led us to advocate simplicity---in both the syntax and semantics of Backpack---over raw expressive power. The semantics of Backpack packages is defined by elaboration to sets of Haskell modules and binary interface files, thus showing how Backpack maintains interoperability with Haskell while extending it with separate typechecking. Lastly, although Backpack is geared toward integration into Haskell, its design and semantics are largely agnostic with respect to the details of the underlying core language.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79030192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}