Fast polyhedra abstract domain
Gagandeep Singh, Markus Püschel, Martin T. Vechev
doi: 10.1145/3009837.3009885

Numerical abstract domains are an important ingredient of modern static analyzers used for verifying critical program properties (e.g., absence of buffer overflows or memory safety). Among the many numerical domains introduced over the years, Polyhedra is the most expressive but also the most expensive: it has worst-case exponential space and time complexity. As a consequence, static analysis with the Polyhedra domain has been considered impractical for large-scale, real-world programs. In this paper, we present a new approach, and a complete implementation, for speeding up Polyhedra domain analysis. Our approach loses no precision and, for many practical cases, is orders of magnitude faster than state-of-the-art solutions. The key insight underlying our work is that polyhedra arising during analysis can usually be kept decomposed, considerably reducing the overall complexity. We first present the theory underlying our approach, which identifies the interaction between partitions of variables and the domain operators. Based on this theory, we develop new algorithms for these operators that work with decomposed polyhedra. We implemented these algorithms behind the same interface as existing libraries, enabling static analyzers to adopt our implementation with little effort. In our evaluation, we analyze large benchmarks from the popular software verification competition, including Linux device drivers with over 50K lines of code. Our experimental results demonstrate massive gains in both space and time: we show end-to-end speedups of two to five orders of magnitude compared to state-of-the-art Polyhedra implementations, as well as significant memory savings, on all larger benchmarks. In fact, in many cases our analysis terminates in seconds where prior implementations run out of memory or time out after 4 hours. We believe this work is an important step toward making the Polyhedra abstract domain both feasible and practically usable for large, real-world programs.
{"title":"Fast polyhedra abstract domain","authors":"Gagandeep Singh, Markus Püschel, Martin T. Vechev","doi":"10.1145/3009837.3009885","DOIUrl":"https://doi.org/10.1145/3009837.3009885","url":null,"abstract":"Numerical abstract domains are an important ingredient of modern static analyzers used for verifying critical program properties (e.g., absence of buffer overflow or memory safety). Among the many numerical domains introduced over the years, Polyhedra is the most expressive one, but also the most expensive: it has worst-case exponential space and time complexity. As a consequence, static analysis with the Polyhedra domain is thought to be impractical when applied to large scale, real world programs. In this paper, we present a new approach and a complete implementation for speeding up Polyhedra domain analysis. Our approach does not lose precision, and for many practical cases, is orders of magnitude faster than state-of-the-art solutions. The key insight underlying our work is that polyhedra arising during analysis can usually be kept decomposed, thus considerably reducing the overall complexity. We first present the theory underlying our approach, which identifies the interaction between partitions of variables and domain operators. Based on the theory we develop new algorithms for these operators that work with decomposed polyhedra. We implemented these algorithms using the same interface as existing libraries, thus enabling static analyzers to use our implementation with little effort. In our evaluation, we analyze large benchmarks from the popular software verification competition, including Linux device drivers with over 50K lines of code. Our experimental results demonstrate massive gains in both space and time: we show end-to-end speedups of two to five orders of magnitude compared to state-of-the-art Polyhedra implementations as well as significant memory gains, on all larger benchmarks. In fact, in many cases our analysis terminates in seconds where prior code runs out of memory or times out after 4 hours. We believe this work is an important step in making the Polyhedra abstract domain both feasible and practically usable for handling large, real-world programs.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81398703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monadic second-order logic on finite sequences
Loris D'antoni, Margus Veanes
doi: 10.1145/3009837.3009844

We extend the weak monadic second-order logic of one successor on finite strings (M2L-STR) to symbolic alphabets by allowing character predicates to range over decidable quantifier-free theories instead of finite alphabets. We call this logic, which can describe sequences over complex and potentially infinite domains, symbolic M2L-STR (S-M2L-STR). We then present a decision procedure for S-M2L-STR based on a reduction to symbolic finite automata, a decidable extension of finite automata whose transitions carry predicates and which can therefore model symbolic alphabets. The reduction constructs a symbolic automaton over an alphabet of pairs of symbols, where the first element of each pair is a symbol of the original formula's alphabet and the second is a bit-vector. To handle this modified alphabet, we show that the Cartesian product of two decidable Boolean algebras (e.g., the formula's algebra and the bit-vector algebra) also forms a decidable Boolean algebra. To make the decision procedure practical, we propose two efficient representations of the Cartesian product of two Boolean algebras, one based on algebraic decision diagrams and one on a variant of Shannon expansions. Finally, we implement our decision procedure and evaluate it on more than 10,000 formulas. Despite its generality, our implementation performs comparably to state-of-the-art M2L-STR solvers.
{"title":"Monadic second-order logic on finite sequences","authors":"Loris D'antoni, Margus Veanes","doi":"10.1145/3009837.3009844","DOIUrl":"https://doi.org/10.1145/3009837.3009844","url":null,"abstract":"We extend the weak monadic second-order logic of one successor on finite strings (M2L-STR) to symbolic alphabets by allowing character predicates to range over decidable quantifier free theories instead of finite alphabets. We call this logic, which is able to describe sequences over complex and potentially infinite domains, symbolic M2L-STR (S-M2L-STR). We then present a decision procedure for S-M2L-STR based on a reduction to symbolic finite automata, a decidable extension of finite automata that allows transitions to carry predicates and can therefore model symbolic alphabets. The reduction constructs a symbolic automaton over an alphabet consisting of pairs of symbols where the first element of the pair is a symbol in the original formula’s alphabet, while the second element is a bit-vector. To handle this modified alphabet we show that the Cartesian product of two decidable Boolean algebras (e.g., the formula’s one and the bit-vector’s one) also forms a decidable Boolean algebras. To make the decision procedure practical, we propose two efficient representations of the Cartesian product of two Boolean algebras, one based on algebraic decision diagrams and one on a variant of Shannon expansions. Finally, we implement our decision procedure and evaluate it on more than 10,000 formulas. Despite the generality, our implementation has comparable performance with the state-of-the-art M2L-STR solvers.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88979189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic-directed clumping of disjunctive abstract states
Huisong Li, F. Berenger, B. E. Chang, Xavier Rival
doi: 10.1145/3009837.3009881
To infer complex structural invariants, shape analyses rely on expressive families of logical properties. Many such analyses manipulate abstract memory states that consist of separating conjunctions of basic predicates describing atomic blocks or summaries. Moreover, they use finite disjunctions of abstract memory states to account for dissimilar shapes. Disjunctions should be kept small for the sake of scalability, though precision often requires keeping additional case splits. In this context, deciding when and how to merge case splits and replace them with summaries is critical for both precision and efficiency. Existing techniques use sets of syntactic rules, which are tedious to design and prone to failure. In this paper, we design a semantic criterion for clumping abstract states based on their silhouette, which applies not only to the conservative union of disjuncts but also to the weakening of separating conjunctions of memory predicates into inductive summaries. Our approach allows us to define union and widening operators that aim to preserve the case splits required for the analysis to succeed. We implement this approach in the MemCAD analyzer and evaluate it on real-world C code from existing libraries, including programs dealing with doubly linked lists, red-black trees, and AVL trees.
{"title":"Semantic-directed clumping of disjunctive abstract states","authors":"Huisong Li, F. Berenger, B. E. Chang, Xavier Rival","doi":"10.1145/3009837.3009881","DOIUrl":"https://doi.org/10.1145/3009837.3009881","url":null,"abstract":"To infer complex structural invariants, shape analyses rely on expressive families of logical properties. Many such analyses manipulate abstract memory states that consist of separating conjunctions of basic predicates describing atomic blocks or summaries. Moreover, they use finite disjunctions of abstract memory states in order to account for dissimilar shapes. Disjunctions should be kept small for the sake of scalability, though precision often requires to keep additional case splits. In this context, deciding when and how to merge case splits and to replace them with summaries is critical both for the precision and for the efficiency. Existing techniques use sets of syntactic rules, which are tedious to design and prone to failure. In this paper, we design a semantic criterion to clump abstract states based on their silhouette which applies not only to the conservative union of disjuncts, but also to the weakening of separating conjunction of memory predicates into inductive summaries. Our approach allows to define union and widening operators that aim at preserving the case splits that are required for the analysis to succeed. We implement this approach in the MemCAD analyzer, and evaluate it on real-world C codes from existing libraries, including programs dealing with doubly linked lists, red-black trees and AVL-trees.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78422964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intersection type calculi of bounded dimension
Andrej Dudenhefner, J. Rehof
doi: 10.1145/3009837.3009862

A notion of dimension in intersection typed λ-calculi is presented. The dimension of a typed λ-term is given by the minimal norm of an elaboration (a proof-theoretic decoration) necessary for typing the term at its type and, intuitively, measures intersection introduction as a resource. Bounded-dimensional intersection type calculi are shown to enjoy subject reduction, since terms can be elaborated with non-increasing norm under β-reduction. We prove that a multiset interpretation (corresponding to a non-idempotent and non-linear interpretation of intersection) of dimensionality corresponds to the number of simultaneous constraints required during search for inhabitants. As a consequence, the inhabitation problem is decidable in bounded multiset dimension, and we prove it to be EXPSPACE-complete. This result substantially generalizes inhabitation for the rank-2 fragment, yielding a calculus with decidable inhabitation that is independent of rank. Our results give rise to a new criterion (dimensional bound) for subclasses of intersection type calculi with a decidable inhabitation problem; it is orthogonal to previously known criteria and should have immediate applications in synthesis. Additionally, we give examples of dimensional analysis of fragments of the intersection type system, including conservativity over simple types, rank-2 types, and normal-form typings, and we provide some observations towards the dimensional analysis of other systems. We suggest, as future work, that our notion of dimension may have semantic interpretations in terms of reduction complexity.
{"title":"Intersection type calculi of bounded dimension","authors":"Andrej Dudenhefner, J. Rehof","doi":"10.1145/3009837.3009862","DOIUrl":"https://doi.org/10.1145/3009837.3009862","url":null,"abstract":"A notion of dimension in intersection typed λ-calculi is presented. The dimension of a typed λ-term is given by the minimal norm of an elaboration (a proof theoretic decoration) necessary for typing the term at its type, and, intuitively, measures intersection introduction as a resource. Bounded-dimensional intersection type calculi are shown to enjoy subject reduction, since terms can be elaborated in non-increasing norm under β-reduction. We prove that a multiset interpretation (corresponding to a non-idempotent and non-linear interpretation of intersection) of dimensionality corresponds to the number of simultaneous constraints required during search for inhabitants. As a consequence, the inhabitation problem is decidable in bounded multiset dimension, and it is proven to be EXPSPACE-complete. This result is a substantial generalization of inhabitation for the rank 2-fragment, yielding a calculus with decidable inhabitation which is independent of rank. Our results give rise to a new criterion (dimensional bound) for subclasses of intersection type calculi with a decidable inhabitation problem, which is orthogonal to previously known criteria, and which should have immediate applications in synthesis. Additionally, we give examples of dimensional analysis of fragments of the intersection type system, including conservativity over simple types, rank 2-types, and normal form typings, and we provide some observations towards dimensional analysis of other systems. It is suggested (for future work) that our notion of dimension may have semantic interpretations in terms of of reduction complexity.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75841963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gradual refinement types
Nico Lehmann, É. Tanter
doi: 10.1145/3009837.3009856

Refinement types are an effective language-based verification technique. However, as with any expressive typing discipline, their strength is also their weakness, sometimes imposing undesired rigidity. Guided by abstract interpretation, we extend the gradual typing agenda and develop the notion of gradual refinement types, allowing smooth evolution and interoperability between simple types and logically refined types. In doing so, we address two challenges unexplored in the gradual typing literature: dealing with imprecise logical information and with dependent function types. The first challenge leads to a crucial notion of locality for refinement formulas, and the second yields novel operators related to type- and term-level substitution, identifying new opportunities for runtime errors in gradual dependently typed languages. The gradual language we present is type safe, type sound, and satisfies the refined criteria for gradually typed languages of Siek et al. We also explain how to extend our approach to richer refinement logics, anticipating key challenges to consider.
{"title":"Gradual refinement types","authors":"Nico Lehmann, É. Tanter","doi":"10.1145/3009837.3009856","DOIUrl":"https://doi.org/10.1145/3009837.3009856","url":null,"abstract":"Refinement types are an effective language-based verification technique. However, as any expressive typing discipline, its strength is its weakness, imposing sometimes undesired rigidity. Guided by abstract interpretation, we extend the gradual typing agenda and develop the notion of gradual refinement types, allowing smooth evolution and interoperability between simple types and logically-refined types. In doing so, we address two challenges unexplored in the gradual typing literature: dealing with imprecise logical information, and with dependent function types. The first challenge leads to a crucial notion of locality for refinement formulas, and the second yields novel operators related to type- and term-level substitution, identifying new opportunity for runtime errors in gradual dependently-typed languages. The gradual language we present is type safe, type sound, and satisfies the refined criteria for gradually-typed languages of Siek et al. We also explain how to extend our approach to richer refinement logics, anticipating key challenges to consider.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72927844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages (POPL 2017)
Giuseppe Castagna, A. Gordon (Eds.)
doi: 10.1145/3009837
QWIRE: a core language for quantum circuits
Jennifer Paykin, Robert Rand, S. Zdancewic
doi: 10.1145/3009837.3009894

This paper introduces QWIRE (pronounced "choir"), a language for defining quantum circuits and an interface for manipulating them inside an arbitrary classical host language. QWIRE is minimal (it contains only a few primitives) and sound with respect to the physical properties entailed by quantum mechanics. At the same time, QWIRE is expressive and highly modular due to its relationship with the host language, mirroring the QRAM model of computation, which places a quantum computer (controlled by circuits) alongside a classical computer (controlled by the host language). We present QWIRE along with its type system and operational semantics, which we prove is safe and strongly normalizing whenever the host language is. We give circuits a denotational semantics in terms of density matrices. Throughout, we investigate examples that demonstrate the expressive power of QWIRE, including extensions to the host language that (1) expose a general analysis framework for circuits and (2) provide dependent types.
{"title":"QWIRE: a core language for quantum circuits","authors":"Jennifer Paykin, Robert Rand, S. Zdancewic","doi":"10.1145/3009837.3009894","DOIUrl":"https://doi.org/10.1145/3009837.3009894","url":null,"abstract":"This paper introduces QWIRE (``choir''), a language for defining quantum circuits and an interface for manipulating them inside of an arbitrary classical host language. QWIRE is minimal---it contains only a few primitives---and sound with respect to the physical properties entailed by quantum mechanics. At the same time, QWIRE is expressive and highly modular due to its relationship with the host language, mirroring the QRAM model of computation that places a quantum computer (controlled by circuits) alongside a classical computer (controlled by the host language). We present QWIRE along with its type system and operational semantics, which we prove is safe and strongly normalizing whenever the host language is. We give circuits a denotational semantics in terms of density matrices. Throughout, we investigate examples that demonstrate the expressive power of QWIRE, including extensions to the host language that (1) expose a general analysis framework for circuits, and (2) provide dependent types.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"68 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89965650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exact Bayesian inference by symbolic disintegration
Chung-chieh Shan, N. Ramsey
doi: 10.1145/3009837.3009852

Bayesian inference, deriving posterior knowledge from prior knowledge and observed evidence, is typically defined by Bayes's rule, which says that the posterior multiplied by the probability of an observation equals a joint probability. But the observation of a continuous quantity usually has probability zero, in which case Bayes's rule says only that the unknown times zero is zero. To infer a posterior distribution from a zero-probability observation, the statistical notion of disintegration tells us to specify the observation as an expression rather than a predicate, but it does not tell us how to compute the posterior. We present the first method for computing a disintegration from a probabilistic program and an expression for the quantity to be observed, even when the observation has probability zero. Because the method produces an exact posterior term and preserves a semantics in which monadic terms denote measures, it composes with other inference methods in a modular way, without sacrificing accuracy or performance.
{"title":"Exact Bayesian inference by symbolic disintegration","authors":"Chung-chieh Shan, N. Ramsey","doi":"10.1145/3009837.3009852","DOIUrl":"https://doi.org/10.1145/3009837.3009852","url":null,"abstract":"Bayesian inference, of posterior knowledge from prior knowledge and observed evidence, is typically defined by Bayes's rule, which says the posterior multiplied by the probability of an observation equals a joint probability. But the observation of a continuous quantity usually has probability zero, in which case Bayes's rule says only that the unknown times zero is zero. To infer a posterior distribution from a zero-probability observation, the statistical notion of disintegration tells us to specify the observation as an expression rather than a predicate, but does not tell us how to compute the posterior. We present the first method of computing a disintegration from a probabilistic program and an expression of a quantity to be observed, even when the observation has probability zero. Because the method produces an exact posterior term and preserves a semantics in which monadic terms denote measures, it composes with other inference methods in a modular way-without sacrificing accuracy or performance.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91106571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of dependent types (keynote)
Stephanie Weirich
doi: 10.1145/3093333.3009923

What has dependent type theory done for Haskell? In this talk, I will discuss the influence of dependent types on the design of programming languages and on the practice of functional programmers. Over the past ten years, the Glasgow Haskell Compiler has adopted several type system features inspired by dependent type theory. However, this process has not been a direct translation; working in the context of an existing language has led us to new designs in the semantics of dependent types. I will take a close look at what we have achieved in GHC and discuss what we have learned from this experiment: what works now, what doesn't work yet, and what has surprised us along the way.
{"title":"The influence of dependent types (keynote)","authors":"Stephanie Weirich","doi":"10.1145/3093333.3009923","DOIUrl":"https://doi.org/10.1145/3093333.3009923","url":null,"abstract":"What has dependent type theory done for Haskell? In this talk, I will discuss the influence of dependent types on the design of programming languages and on the practice of functional programmers. Over the past ten years, the Glasgow Haskell compiler has adopted several type system features inspired by dependent type theory. However, this process has not been a direct translation; working in the context of an existing language has lead us to new designs in the semantics of dependent types. I will take a close look at what we have achieved in GHC and discuss what we have learned from this experiment: what works now, what doesn't work yet, and what has surprised us along the way.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78965488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatically generating the dynamic semantics of gradually typed languages
M. Cimini, Jeremy G. Siek
doi: 10.1145/3009837.3009863

Many language designers have adopted gradual typing. However, open questions remain regarding how to gradualize languages. Cimini and Siek (2016) created a methodology and algorithm to automatically generate the type system of a gradually typed language from a fully static version of the language. In this paper, we address the next challenge: how to automatically generate the dynamic semantics of gradually typed languages. Such languages typically use an intermediate language with explicit casts. Our first result is a methodology for generating the syntax, type system, and dynamic semantics of the intermediate language with casts. Next, we present an algorithm that formalizes and automates the methodology, given a language definition as input. We show that our approach is general enough to automatically gradualize several languages, including features such as polymorphism, recursive types, and exceptions. We prove that our algorithm produces languages that satisfy the key correctness criteria of gradual typing. Finally, we implement the algorithm, generating complete specifications of gradually typed languages in lambda-Prolog, including executable interpreters.
{"title":"Automatically generating the dynamic semantics of gradually typed languages","authors":"M. Cimini, Jeremy G. Siek","doi":"10.1145/3009837.3009863","DOIUrl":"https://doi.org/10.1145/3009837.3009863","url":null,"abstract":"Many language designers have adopted gradual typing. However, there remains open questions regarding how to gradualize languages. Cimini and Siek (2016) created a methodology and algorithm to automatically generate the type system of a gradually typed language from a fully static version of the language. In this paper, we address the next challenge of how to automatically generate the dynamic semantics of gradually typed languages. Such languages typically use an intermediate language with explicit casts. Our first result is a methodology for generating the syntax, type system, and dynamic semantics of the intermediate language with casts. Next, we present an algorithm that formalizes and automates the methodology, given a language definition as input. We show that our approach is general enough to automatically gradualize several languages, including features such as polymorphism, recursive types and exceptions. We prove that our algorithm produces languages that satisfy the key correctness criteria of gradual typing. Finally, we implement the algorithm, generating complete specifications of gradually typed languages in lambda-Prolog, including executable interpreters.","PeriodicalId":20657,"journal":{"name":"Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88603325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}