A posteriori environment analysis with Pushdown ΔCFA
Kimball Germane, M. Might. doi:10.1145/3009837.3009899

Flow-driven higher-order inlining is blocked by free variables, yet current theories of environment analysis cannot reliably cope with multiply-bound variables. One of these theories, ΔCFA, is promising because it is based on stack change, but it is undermined by its finite-state model of the stack. We present Pushdown ΔCFA, which applies the ΔCFA approach to pushdown models of control flow and copes with multiply-bound variables, even in the face of recursion.
{"title":"A posteriori environment analysis with Pushdown Delta CFA","authors":"Kimball Germane, M. Might","doi":"10.1145/3009837.3009899","DOIUrl":"https://doi.org/10.1145/3009837.3009899","url":null,"abstract":"Flow-driven higher-order inlining is blocked by free variables, yet current theories of environment analysis cannot reliably cope with multiply-bound variables. One of these, ΔCFA, is a promising theory based on stack change but is undermined by its finite-state model of the stack. We present Pushdown ΔCFA which takes a ΔCFA-approach to pushdown models of control flow and can cope with multiply-bound variables, even in the face of recursion.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73421414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed-size concurrency: ARM, POWER, C/C++11, and SC
Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Kathryn E. Gray, A. Sezgin, Mark Batty, Peter Sewell. doi:10.1145/3009837.3009839

Previous work on the semantics of relaxed shared-memory concurrency has only considered the case in which each load reads the data of exactly one store. In practice, however, multiprocessors support mixed-size accesses, and these are used by systems software and (to some degree) exposed at the C/C++ language level. A semantic foundation for software, therefore, has to address them. We investigate the mixed-size behaviour of ARMv8 and IBM POWER architectures and implementations: by experiment, by developing semantic models, by testing the correspondence between these, and by discussion with ARM and IBM staff. This turns out to be surprisingly subtle, and on the way we have to revisit the fundamental concepts of coherence and sequential consistency, which change in this setting. In particular, we show that adding a memory barrier between each instruction does not restore sequential consistency. We go on to extend the C/C++11 model to support non-atomic mixed-size memory accesses. This is a necessary step towards semantics for real-world shared-memory concurrent code, beyond litmus tests.
LMS-Verify: abstraction without regret for verified systems programming
Nada Amin, Tiark Rompf. doi:10.1145/3009837.3009867

Performance-critical software is almost always developed in C, as programmers do not trust high-level languages to deliver the same reliable performance. This is bad because low-level code in unsafe languages attracts security vulnerabilities and because development is far less productive, with PL advances mostly lost on programmers operating under tight performance constraints. High-level languages provide memory safety out of the box, but they are deemed too slow and unpredictable for serious system software. Recent years have seen a surge in staging and generative programming: the key idea is to use high-level languages and their abstraction power as glorified macro systems to compose code fragments in first-order, potentially domain-specific, intermediate languages, from which fast C can be emitted. But what about security? Since the end result is still C code, the safety guarantees of the high-level host language are lost. In this paper, we extend this generative approach to emit ACSL specifications along with C code. We demonstrate that staging achieves "abstraction without regret" for verification: we show how high-level programming models, in particular higher-order composable contracts from dynamic languages, can be used at generation time to compose and generate first-order specifications that can be statically checked by existing tools. We also show how type classes can automatically attach invariants to data types, reducing the need for repetitive manual annotations. We evaluate our system on several case studies that variously exercise verification of memory safety, overflow safety, and functional correctness. We feature an HTTP parser that is (1) fast, (2) high-level, implemented using staged parser combinators, and (3) secure, with verified memory safety. This result is significant, as input parsing is a key attack vector, and vulnerabilities related to HTTP parsing have been documented in all widely-used web servers.
{"title":"LMS-Verify: abstraction without regret for verified systems programming","authors":"Nada Amin, Tiark Rompf","doi":"10.1145/3009837.3009867","DOIUrl":"https://doi.org/10.1145/3009837.3009867","url":null,"abstract":"Performance critical software is almost always developed in C, as programmers do not trust high-level languages to deliver the same reliable performance. This is bad because low-level code in unsafe languages attracts security vulnerabilities and because development is far less productive, with PL advances mostly lost on programmers operating under tight performance constraints. High-level languages provide memory safety out of the box, but they are deemed too slow and unpredictable for serious system software. Recent years have seen a surge in staging and generative programming: the key idea is to use high-level languages and their abstraction power as glorified macro systems to compose code fragments in first-order, potentially domain-specific, intermediate languages, from which fast C can be emitted. But what about security? Since the end result is still C code, the safety guarantees of the high-level host language are lost. In this paper, we extend this generative approach to emit ACSL specifications along with C code. We demonstrate that staging achieves ``abstraction without regret'' for verification: we show how high-level programming models, in particular higher-order composable contracts from dynamic languages, can be used at generation time to compose and generate first-order specifications that can be statically checked by existing tools. We also show how type classes can automatically attach invariants to data types, reducing the need for repetitive manual annotations. We evaluate our system on several case studies that varyingly exercise verification of memory safety, overflow safety, and functional correctness. We feature an HTTP parser that is (1) fast (2) high-level: implemented using staged parser combinators (3) secure: with verified memory safety. This result is significant, as input parsing is a key attack vector, and vulnerabilities related to HTTP parsing have been documented in all widely-used web servers.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84189092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thread modularity at many levels: a pearl in compositional verification
Jochen Hoenicke, R. Majumdar, A. Podelski. doi:10.1145/3009837.3009893

A thread-modular proof for the correctness of a concurrent program is based on an inductive and interference-free annotation of each thread. It is well known that the corresponding proof system is not complete (unless one adds auxiliary variables). We describe a hierarchy of proof systems in which each level k corresponds to a generalized notion of thread modularity (level 1 corresponds to the original notion). Each level is strictly more expressive than the previous one. Further, each level precisely captures programs that can be proved using uniform Ashcroft invariants with k universal quantifiers. We demonstrate the usefulness of the hierarchy by giving a compositional proof of the Mach shootdown algorithm for TLB consistency. We give a proof at level 2 that the algorithm is correct for an arbitrary number of CPUs; at level 1, however, there is no proof that does not involve auxiliary state.
{"title":"Thread modularity at many levels: a pearl in compositional verification","authors":"Jochen Hoenicke, R. Majumdar, A. Podelski","doi":"10.1145/3009837.3009893","DOIUrl":"https://doi.org/10.1145/3009837.3009893","url":null,"abstract":"A thread-modular proof for the correctness of a concurrent program is based on an inductive and interference-free annotation of each thread. It is well-known that the corresponding proof system is not complete (unless one adds auxiliary variables). We describe a hierarchy of proof systems where each level k corresponds to a generalized notion of thread modularity (level 1 corresponds to the original notion). Each level is strictly more expressive than the previous. Further, each level precisely captures programs that can be proved using uniform Ashcroft invariants with k universal quantifiers. We demonstrate the usefulness of the hierarchy by giving a compositional proof of the Mach shootdown algorithm for TLB consistency. We show a proof at level 2 that shows the algorithm is correct for an arbitrary number of CPUs. However, there is no proof for the algorithm at level 1 which does not involve auxiliary state.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82466639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Typed self-evaluation via intensional type functions
Matt Brown, J. Palsberg. doi:10.1145/3009837.3009853

Many popular languages have a self-interpreter, that is, an interpreter for the language written in itself. So far, work on polymorphically-typed self-interpreters has concentrated on self-recognizers that merely recover a program from its representation. A larger and until now unsolved challenge is to implement a polymorphically-typed self-evaluator that evaluates the represented program and produces a representation of the result. We present Fωμi, the first λ-calculus that supports a polymorphically-typed self-evaluator. Our calculus extends Fω with recursive types and intensional type functions and has decidable type checking. Our key innovation is a novel implementation of type equality proofs that enables us to define a versatile representation of programs. Our results establish a new category of languages that can support polymorphically-typed self-evaluators.
{"title":"Typed self-evaluation via intensional type functions","authors":"Matt Brown, J. Palsberg","doi":"10.1145/3009837.3009853","DOIUrl":"https://doi.org/10.1145/3009837.3009853","url":null,"abstract":"Many popular languages have a self-interpreter, that is, an interpreter for the language written in itself. So far, work on polymorphically-typed self-interpreters has concentrated on self-recognizers that merely recover a program from its representation. A larger and until now unsolved challenge is to implement a polymorphically-typed self-evaluator that evaluates the represented program and produces a representation of the result. We present Fωμi, the first λ-calculus that supports a polymorphically-typed self-evaluator. Our calculus extends Fω with recursive types and intensional type functions and has decidable type checking. Our key innovation is a novel implementation of type equality proofs that enables us to define a versatile representation of programs. Our results establish a new category of languages that can support polymorphically-typed self-evaluators.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81200718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stream fusion, to completeness
O. Kiselyov, Aggelos Biboudis, Nick Palladinos, Y. Smaragdakis. doi:10.1145/3009837.3009880

Stream processing is mainstream (again): widely-used stream libraries are now available for virtually all modern OO and functional languages, from Java to C# to Scala to OCaml to Haskell. Yet expressivity and performance are still lacking. For instance, the popular, well-optimized Java 8 streams do not support the zip operator and are still an order of magnitude slower than hand-written loops. We present the first approach that represents the full generality of stream processing and eliminates overheads, via the use of staging. It is based on an unusually rich semantic model of stream interaction. We support any combination of zipping, nesting (or flat-mapping), sub-ranging, filtering, and mapping of finite or infinite streams. Our model captures idiosyncrasies that a programmer uses in optimizing stream pipelines, such as rate differences and the choice of "for" vs. "while" loops. Our approach delivers the kind of code one would write by hand, but automatically. It explicitly avoids reliance on black-box optimizers and sufficiently-smart compilers, offering the highest performance, guaranteed and portable. Our approach relies on high-level concepts that are then readily mapped into an implementation. Accordingly, we have two distinct implementations: an OCaml stream library, staged via MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we derive libraries that are richer and simultaneously many tens of times faster than past work. We greatly exceed in performance the standard stream libraries available in Java, Scala, and OCaml, including the well-optimized Java 8 streams.
{"title":"Stream fusion, to completeness","authors":"O. Kiselyov, Aggelos Biboudis, Nick Palladinos, Y. Smaragdakis","doi":"10.1145/3009837.3009880","DOIUrl":"https://doi.org/10.1145/3009837.3009880","url":null,"abstract":"Stream processing is mainstream (again): Widely-used stream libraries are now available for virtually all modern OO and functional languages, from Java to C# to Scala to OCaml to Haskell. Yet expressivity and performance are still lacking. For instance, the popular, well-optimized Java 8 streams do not support the zip operator and are still an order of magnitude slower than hand-written loops. We present the first approach that represents the full generality of stream processing and eliminates overheads, via the use of staging. It is based on an unusually rich semantic model of stream interaction. We support any combination of zipping, nesting (or flat-mapping), sub-ranging, filtering, mapping—of finite or infinite streams. Our model captures idiosyncrasies that a programmer uses in optimizing stream pipelines, such as rate differences and the choice of a “for” vs. “while” loops. Our approach delivers hand-written–like code, but automatically. It explicitly avoids the reliance on black-box optimizers and sufficiently-smart compilers, offering highest, guaranteed and portable performance. Our approach relies on high-level concepts that are then readily mapped into an implementation. Accordingly, we have two distinct implementations: an OCaml stream library, staged via MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we derive libraries richer and simultaneously many tens of times faster than past work. We greatly exceed in performance the standard stream libraries available in Java, Scala and OCaml, including the well-optimized Java 8 streams.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"90 3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87726315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do be do be do
Sam Lindley, Conor McBride, Craig McLaughlin. doi:10.1145/3009837.3009897

We explore the design and implementation of Frank, a strict functional programming language with a bidirectional effect type system designed from the ground up around a novel variant of Plotkin and Pretnar's effect handler abstraction. Effect handlers provide an abstraction for modular effectful programming: a handler acts as an interpreter for a collection of commands whose interfaces are statically tracked by the type system. However, Frank eliminates the need for an additional effect handling construct by generalising the basic mechanism of functional abstraction itself. A function is simply the special case of a Frank operator that interprets no commands. Moreover, Frank's operators can be multihandlers which simultaneously interpret commands from several sources at once, without disturbing the direct style of functional programming with values. Effect typing in Frank employs a novel form of effect polymorphism which avoids mentioning effect variables in source code. This is achieved by propagating an ambient ability inwards, rather than accumulating unions of potential effects outwards. We introduce Frank by example, and then give a formal account of the Frank type system and its semantics. We introduce Core Frank by elaborating Frank operators into functions, case expressions, and unary handlers, and then give a sound small-step operational semantics for Core Frank. Programming with effects and handlers is in its infancy. We contribute an exploration of future possibilities, particularly in combination with other rich type systems.
{"title":"Do be do be do","authors":"Sam Lindley, Conor McBride, Craig McLaughlin","doi":"10.1145/3009837.3009897","DOIUrl":"https://doi.org/10.1145/3009837.3009897","url":null,"abstract":"We explore the design and implementation of Frank, a strict functional programming language with a bidirectional effect type system designed from the ground up around a novel variant of Plotkin and Pretnar's effect handler abstraction. Effect handlers provide an abstraction for modular effectful programming: a handler acts as an interpreter for a collection of commands whose interfaces are statically tracked by the type system. However, Frank eliminates the need for an additional effect handling construct by generalising the basic mechanism of functional abstraction itself. A function is simply the special case of a Frank operator that interprets no commands. Moreover, Frank's operators can be multihandlers which simultaneously interpret commands from several sources at once, without disturbing the direct style of functional programming with values. Effect typing in Frank employs a novel form of effect polymorphism which avoid mentioning effect variables in source code. This is achieved by propagating an ambient ability inwards, rather than accumulating unions of potential effects outwards. We introduce Frank by example, and then give a formal account of the Frank type system and its semantics. We introduce Core Frank by elaborating Frank operators into functions, case expressions, and unary handlers, and then give a sound small-step operational semantics for Core Frank. Programming with effects and handlers is in its infancy. We contribute an exploration of future possibilities, particularly in combination with other forms of rich type system.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"84 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90990078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coming to terms with quantified reasoning
L. Kovács, Simon Robillard, A. Voronkov. doi:10.1145/3009837.3009887

The theory of finite term algebras provides a natural framework to describe the semantics of functional languages. The ability to efficiently reason about term algebras is essential to automate program analysis and verification for functional or imperative programs over inductively defined data types such as lists and trees. However, as the theory of finite term algebras is not finitely axiomatizable, reasoning about quantified properties over term algebras is challenging. In this paper we address full first-order reasoning about properties of programs manipulating term algebras, and describe two approaches for doing so by using first-order theorem proving. Our first method is a conservative extension of the theory of term algebras using a finite number of statements, while our second method relies on extending the superposition calculus of first-order theorem provers with additional inference rules. We implemented our work in the first-order theorem prover Vampire and evaluated it on a large number of inductive datatype benchmarks, as well as game theory constraints. Our experimental results show that our methods are able to find proofs for many hard problems previously unsolved by state-of-the-art methods. We also show that Vampire implementing our methods outperforms existing SMT solvers able to deal with inductive data types.
{"title":"Coming to terms with quantified reasoning","authors":"L. Kovács, Simon Robillard, A. Voronkov","doi":"10.1145/3009837.3009887","DOIUrl":"https://doi.org/10.1145/3009837.3009887","url":null,"abstract":"The theory of finite term algebras provides a natural framework to describe the semantics of functional languages. The ability to efficiently reason about term algebras is essential to automate program analysis and verification for functional or imperative programs over inductively defined data types such as lists and trees. However, as the theory of finite term algebras is not finitely axiomatizable, reasoning about quantified properties over term algebras is challenging. In this paper we address full first-order reasoning about properties of programs manipulating term algebras, and describe two approaches for doing so by using first-order theorem proving. Our first method is a conservative extension of the theory of term alge- bras using a finite number of statements, while our second method relies on extending the superposition calculus of first-order theorem provers with additional inference rules. We implemented our work in the first-order theorem prover Vampire and evaluated it on a large number of inductive datatype benchmarks, as well as game theory constraints. Our experimental results show that our methods are able to find proofs for many hard problems previously unsolved by state-of-the-art methods. We also show that Vampire implementing our methods outperforms existing SMT solvers able to deal with inductive data types.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90934053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sums of uncertainty: refinements go gradual
Khurram A. Jafery, Jana Dunfield. doi:10.1145/3009837.3009865

A long-standing shortcoming of statically typed functional languages is that type checking does not rule out pattern-matching failures (run-time match exceptions). Refinement types distinguish different values of datatypes; if a program annotated with refinements passes type checking, pattern-matching failures become impossible. Unfortunately, refinement is a monolithic property of a type, exacerbating the difficulty of adding refinement types to nontrivial programs. Gradual typing has explored how to incrementally move between static typing and dynamic typing. We develop a type system of gradual sums that combines refinement with imprecision. Then, we develop a bidirectional version of the type system, which rules out excessive imprecision, and give a type-directed translation to a target language with explicit casts. We prove that the static sublanguage cannot have match failures, that a well-typed program remains well-typed if its type annotations are made less precise, and that making annotations less precise causes target programs to fail later. Several of these results correspond to criteria for gradual typing given by Siek et al. (2015).
{"title":"Sums of uncertainty: refinements go gradual","authors":"Khurram A. Jafery, Jana Dunfield","doi":"10.1145/3009837.3009865","DOIUrl":"https://doi.org/10.1145/3009837.3009865","url":null,"abstract":"A long-standing shortcoming of statically typed functional languages is that type checking does not rule out pattern-matching failures (run-time match exceptions). Refinement types distinguish different values of datatypes; if a program annotated with refinements passes type checking, pattern-matching failures become impossible. Unfortunately, refinement is a monolithic property of a type, exacerbating the difficulty of adding refinement types to nontrivial programs. Gradual typing has explored how to incrementally move between static typing and dynamic typing. We develop a type system of gradual sums that combines refinement with imprecision. Then, we develop a bidirectional version of the type system, which rules out excessive imprecision, and give a type-directed translation to a target language with explicit casts. We prove that the static sublanguage cannot have match failures, that a well-typed program remains well-typed if its type annotations are made less precise, and that making annotations less precise causes target programs to fail later. Several of these results correspond to criteria for gradual typing given by Siek et al. (2015).","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85444930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stochastic invariants for probabilistic termination
K. Chatterjee, Petr Novotný, Dorde Zikelic. doi:10.1145/3009837.3009873

Termination is one of the basic liveness properties, and we study the termination problem for probabilistic programs with real-valued variables. Previous works focused on the qualitative problem that asks whether an input program terminates with probability 1 (almost-sure termination). A powerful approach for this qualitative problem is the notion of ranking supermartingales with respect to a given set of invariants. The quantitative problem (probabilistic termination) asks for bounds on the termination probability, and this problem has not been addressed yet. A fundamental and conceptual drawback of the existing approaches to address probabilistic termination is that even though the supermartingales consider the probabilistic behaviour of the programs, the invariants are obtained completely ignoring the probabilistic aspect (i.e., the invariants are obtained considering all behaviours with no information about the probability). In this work we address the probabilistic termination problem for linear-arithmetic probabilistic programs with nondeterminism. We formally define the notion of stochastic invariants, which are constraints along with a probability bound that the constraints hold. We introduce a concept of repulsing supermartingales. First, we show that repulsing supermartingales can be used to obtain bounds on the probability of the stochastic invariants. Second, we show the effectiveness of repulsing supermartingales in the following three ways: (1) With a combination of ranking and repulsing supermartingales we can compute lower bounds on the probability of termination; (2) repulsing supermartingales provide witnesses for refutation of almost-sure termination; and (3) with a combination of ranking and repulsing supermartingales we can establish persistence properties of probabilistic programs. Along with our conceptual contributions, we establish the following computational results: First, the synthesis of a stochastic invariant which supports some ranking supermartingale and at the same time admits a repulsing supermartingale can be achieved via reduction to the existential first-order theory of reals, which generalizes existing results from the non-probabilistic setting. Second, given a program with "strict invariants" (e.g., obtained via abstract interpretation) and a stochastic invariant, we can check in polynomial time whether there exists a linear repulsing supermartingale w.r.t. the stochastic invariant (via reduction to LP). We also present experimental evaluation of our approach on academic examples.
{"title":"Stochastic invariants for probabilistic termination","authors":"K. Chatterjee, Petr Novotný, Dorde Zikelic","doi":"10.1145/3009837.3009873","DOIUrl":"https://doi.org/10.1145/3009837.3009873","url":null,"abstract":"Termination is one of the basic liveness properties, and we study the termination problem for probabilistic programs with real-valued variables. Previous works focused on the qualitative problem that asks whether an input program terminates with probability 1 (almost-sure termination). A powerful approach for this qualitative problem is the notion of ranking supermartingales with respect to a given set of invariants. The quantitative problem (probabilistic termination) asks for bounds on the termination probability, and this problem has not been addressed yet. A fundamental and conceptual drawback of the existing approaches to address probabilistic termination is that even though the supermartingales consider the probabilistic behaviour of the programs, the invariants are obtained completely ignoring the probabilistic aspect (i.e., the invariants are obtained considering all behaviours with no information about the probability). In this work we address the probabilistic termination problem for linear-arithmetic probabilistic programs with nondeterminism. We formally define the notion of stochastic invariants, which are constraints along with a probability bound that the constraints hold. We introduce a concept of repulsing supermartingales. First, we show that repulsing supermartingales can be used to obtain bounds on the probability of the stochastic invariants. Second, we show the effectiveness of repulsing supermartingales in the following three ways: (1) With a combination of ranking and repulsing supermartingales we can compute lower bounds on the probability of termination; (2) repulsing supermartingales provide witnesses for refutation of almost-sure termination; and (3) with a combination of ranking and repulsing supermartingales we can establish persistence properties of probabilistic programs. Along with our conceptual contributions, we establish the following computational results: First, the synthesis of a stochastic invariant which supports some ranking supermartingale and at the same time admits a repulsing supermartingale can be achieved via reduction to the existential first-order theory of reals, which generalizes existing results from the non-probabilistic setting. Second, given a program with \"strict invariants\" (e.g., obtained via abstract interpretation) and a stochastic invariant, we can check in polynomial time whether there exists a linear repulsing supermartingale w.r.t. the stochastic invariant (via reduction to LP). We also present experimental evaluation of our approach on academic examples.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89581620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}