Complexity verification using guided theorem enumeration. Akhilesh Srikanth, Burak Sahin, William R. Harris. POPL 2017. doi:10.1145/3009837.3009864

Determining whether a given program satisfies a given bound on the amount of resources that it may use is a fundamental problem with critical practical applications. Conventional automatic verifiers for safety properties cannot address this problem directly because they target properties expressed in decidable theories, whereas many practical bounds are expressed in nonlinear theories, which are undecidable. In this work, we introduce an automatic verification algorithm, CAMPY, that determines whether a given program P satisfies a given resource bound B, which may be expressed using polynomial, exponential, and logarithmic terms. The key technical contribution behind our verifier is an interpolating theorem prover for nonlinear theories that lazily learns a sufficiently accurate approximation of the nonlinear theory by selectively grounding those of its theorems that are relevant to proving that P satisfies B. To evaluate CAMPY, we implemented it to target Java Virtual Machine bytecode and applied it to over 20 solutions submitted for programming problems hosted on popular online coding platforms, checking whether each satisfies its expected complexity bound.
Modules, abstraction, and parametric polymorphism. Karl Crary. POPL 2017. doi:10.1145/3009837.3009892

Reynolds's Abstraction theorem forms the mathematical foundation for data abstraction. His setting was the polymorphic lambda calculus. Today, many modern languages, such as the ML family, employ rich module systems designed to give more expressive support for data abstraction than the polymorphic lambda calculus, but analogues of the Abstraction theorem for such module systems have lagged far behind. We give an account of the Abstraction theorem for a modern module calculus supporting generative and applicative functors, higher-order functors, sealing, and translucent signatures. The main issues to be overcome are: (1) the fact that modules combine both types and terms, so they must be treated as both simultaneously; (2) the effect discipline that models the distinction between transparent and opaque modules; and (3) a very rich language of type constructors supporting singleton kinds. We define logical equivalence for modules and show that it coincides with contextual equivalence. This substantiates the folk theorem that modules are good for data abstraction. All our proofs are formalized in Coq.
Type directed compilation of row-typed algebraic effects. Daan Leijen. POPL 2017. doi:10.1145/3009837.3009872

Algebraic effect handlers, introduced by Plotkin and Power in 2002, have recently been gaining popularity as a purely functional approach to modeling effects. In this article, we give a full overview of practical algebraic effects in the context of a compiled implementation in the Koka language. In particular, we show how algebraic effects generalize over common constructs like exception handling, state, iterators, and async-await. We give an effective type inference algorithm based on extensible effect rows using scoped labels, and a direct operational semantics. Finally, we show an efficient compilation scheme to common runtime platforms (such as JavaScript) using a type-directed selective CPS translation.
Dynamic race detection for C++11. Christopher Lidbury, A. Donaldson. POPL 2017. doi:10.1145/3009837.3009857

The intricate rules for memory ordering and synchronisation associated with the C/C++11 memory model mean that data races can be difficult to eliminate from concurrent programs. Dynamic data race analysis can pinpoint races in large and complex applications, but the state-of-the-art ThreadSanitizer (tsan) tool for C/C++ considers only sequentially consistent program executions, and does not correctly model synchronisation between C/C++11 atomic operations. We present a scalable dynamic data race analysis for C/C++11 that correctly captures C/C++11 synchronisation, and uses instrumentation to support exploration of a class of non-sequentially-consistent executions. We concisely define the memory model fragment captured by our instrumentation via a restricted axiomatic semantics, and show that the axiomatic semantics permits exactly those executions explored by our instrumentation. We have implemented our analysis in tsan and evaluate its effectiveness on benchmark programs, enabling a comparison with the CDSChecker tool, and on two large and highly concurrent applications: the Firefox and Chromium web browsers. Our results show that our method can detect races that are beyond the scope of the original tsan tool, and that the overhead associated with applying our enhanced instrumentation to large applications is tolerable.
A promising semantics for relaxed-memory concurrency. Jeehoon Kang, C. Hur, O. Lahav, Viktor Vafeiadis, Derek Dreyer. POPL 2017. doi:10.1145/3009837.3009850

Despite many years of research, it has proven very difficult to develop a memory model for concurrent programming languages that adequately balances the conflicting desiderata of programmers, compilers, and hardware. In this paper, we propose the first relaxed memory model that (1) accounts for a broad spectrum of features from the C++11 concurrency model, (2) is implementable, in the sense that it provably validates many standard compiler optimizations and reorderings, as well as standard compilation schemes to x86-TSO and Power, (3) justifies simple invariant-based reasoning, thus demonstrating the absence of bad "out-of-thin-air" behaviors, (4) supports "DRF" guarantees, ensuring that programmers who use sufficient synchronization need not understand the full complexities of relaxed-memory semantics, and (5) defines the semantics of racy programs without relying on undefined behaviors, which is a prerequisite for applicability to type-safe languages like Java. The key novel idea behind our model is the notion of *promises*: a thread may promise to execute a write in the future, thus enabling other threads to read from that write out of order. Crucially, to prevent out-of-thin-air behaviors, a promise step requires a thread-local certification that it will be possible to execute the promised write even in the absence of the promise. To establish confidence in our model, we have formalized most of our key results in Coq.
Relational cost analysis. Ezgi Çiçek, G. Barthe, Marco Gaboardi, Deepak Garg, Jan Hoffmann. POPL 2017. doi:10.1145/3009837.3009858

Establishing quantitative bounds on the execution cost of programs is essential in many areas of computer science such as complexity analysis, compiler optimizations, security, and privacy. Techniques based on program analysis, type systems, and abstract interpretation are well-studied, but methods for analyzing how the execution costs of two programs compare to each other have received little attention. Naively combining the worst- and best-case execution costs of the two programs does not work well in many cases because such an analysis forgets the similarities between the programs or the inputs. In this work, we propose a relational cost analysis technique that is capable of establishing precise bounds on the difference in the execution cost of two programs by making use of relational properties of programs and inputs. We develop RelCost, a refinement type and effect system for a higher-order functional language with recursion and subtyping. The key novelty of our technique is the combination of relational refinements with two modes of typing: relational typing for reasoning about similar computations and inputs, and unary typing for reasoning about unrelated computations and inputs. This combination allows us to analyze the execution cost difference of two programs more precisely than a naive non-relational approach. We prove our type system sound using a semantic model based on step-indexed unary and binary logical relations accounting for non-relational and relational reasoning principles with their respective costs. We demonstrate the precision and generality of our technique through examples.
Automatically comparing memory consistency models. John Wickerson, Mark Batty, Tyler Sorensen, G. Constantinides. POPL 2017. doi:10.1145/3009837.3009838

A memory consistency model (MCM) is the part of a programming language or computer architecture specification that defines which values can legally be read from shared memory locations. Because MCMs take into account various optimisations employed by architectures and compilers, they are often complex and counterintuitive, which makes them challenging to design and to understand. We identify four tasks involved in designing and understanding MCMs: generating conformance tests, distinguishing two MCMs, checking compiler optimisations, and checking compiler mappings. We show that all four tasks are instances of a general constraint-satisfaction problem to which the solution is either a program or a pair of programs. Although this problem is intractable for automatic solvers when phrased over programs directly, we show how to solve analogous constraints over program executions, and then construct programs that satisfy the original constraints. Our technique, which is implemented in the Alloy modelling framework, is illustrated on several software- and architecture-level MCMs, both axiomatically and operationally defined. We automatically recreate several known results, often in a simpler form, including: distinctions between variants of the C11 MCM; a failure of the "SC-DRF guarantee" in an early C11 draft; that x86 is "multi-copy atomic" and Power is not; bugs in common C11 compiler optimisations; and bugs in a compiler mapping from OpenCL to AMD-style GPUs. We also use our technique to develop and validate a new MCM for NVIDIA GPUs that supports a natural mapping from OpenCL.
LOIS: syntax and semantics. Eryk Kopczynski, Szymon Toruńczyk. POPL 2017. doi:10.1145/3009837.3009876

We present the semantics of an imperative programming language called LOIS (Looping Over Infinite Sets), which allows iterating through certain infinite sets in finite time. Our semantics intuitively corresponds to executing infinitely many threads in parallel, which makes it possible to merge the power of abstract mathematical constructions into imperative programming. Infinite sets are internally represented using first-order formulas over some underlying logical structure, and SMT solvers are employed to evaluate programs.
Invariants of quantum programs: characterisations and generation. M. Ying, Shenggang Ying, Xiaodi Wu. POPL 2017. doi:10.1145/3009837.3009840

Program invariants are a fundamental notion widely used in program verification and analysis. The aim of this paper is twofold: (i) to find an appropriate definition of invariants for quantum programs; and (ii) to develop an effective technique of invariant generation for the verification and analysis of quantum programs. Interestingly, the notion of invariant can be defined for quantum programs in two different ways, additive invariants and multiplicative invariants, corresponding to two interpretations of implication in a continuous-valued logic: the Łukasiewicz implication and the Gödel implication. It is shown that both of them can be used to establish partial correctness of quantum programs. The problem of generating additive invariants of quantum programs is addressed by reducing it to a semidefinite programming (SDP) problem. This approach is applied with an SDP solver to generate invariants of two important quantum algorithms: quantum walk and quantum Metropolis sampling. Our examples show that the generated invariants can be used to verify the correctness of these algorithms and are helpful in optimising quantum Metropolis sampling. To our knowledge, this paper is the first attempt to define the notion of invariant and to develop a method of invariant generation for quantum programs.
Big types in little runtime: open-world soundness and collaborative blame for gradual type systems. Michael M. Vitousek, Cameron Swords, Jeremy G. Siek. POPL 2017. doi:10.1145/3009837.3009849

Gradual typing combines static and dynamic typing in the same language, offering programmers the error detection and strong guarantees of static types and the rapid prototyping and flexible programming idioms of dynamic types. Many gradually typed languages are implemented by translation into an untyped target language (e.g., Typed Clojure, TypeScript, Gradualtalk, and Reticulated Python). For such languages, it is desirable to support arbitrary interaction between translated code and legacy code in the untyped language while maintaining the type soundness of the translated code. In this paper we formalize this goal in the form of the open-world soundness criterion. We discuss why it is challenging to achieve open-world soundness using the traditional proxy-based approach for higher-order casts. However, the transient design satisfies open-world soundness. Indeed, we present a formal semantics for the transient design and prove that our semantics satisfies open-world soundness. In this paper we also solve a challenging problem for the transient design: how to provide blame tracking without proxies. We define a semantics for blame and prove the Blame Theorem. We also prove that the Gradual Guarantee holds for this system, ensuring that programs can be evolved freely between static and dynamic typing. Finally, we demonstrate that the runtime overhead of the transient approach is low in the context of Reticulated Python, an implementation of gradual typing for Python.