
Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages: Latest Publications

A nonstandard standardization theorem
Beniamino Accattoli, E. Bonelli, D. Kesner, Carlos Lombardi
Standardization is a fundamental notion for connecting programming languages and rewriting calculi. Since both programming languages and calculi rely on substitution for defining their dynamics, explicit substitutions (ES) help further close the gap between theory and practice. This paper focuses on standardization for the linear substitution calculus, a calculus with ES capable of mimicking reduction in lambda-calculus and linear logic proof-nets. For the latter, proof-nets can be formalized by means of a simple equational theory over the linear substitution calculus. Contrary to other extant calculi with ES, our system can be equipped with a residual theory in the sense of Lévy, which is used to prove a left-to-right standardization theorem for the calculus with ES but without the equational theory. Such a theorem, however, does not lift from the calculus with ES to proof-nets, because the notion of left-to-right derivation is not preserved by the equational theory. We then relax the notion of left-to-right standard derivation, based on a total order on redexes, to a more liberal notion of standard derivation based on partial orders. Our proofs rely on Gonthier, Lévy, and Melliès' axiomatic theory for standardization. However, we go beyond merely applying their framework, revisiting some of its key concepts: we obtain uniqueness (modulo) of standard derivations in an abstract way and we provide a coinductive characterization of their key abstract notion of external redex. This last point is then used to give a simple proof that linear head reduction --a nondeterministic strategy having a central role in the theory of linear logic-- is standard.
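To make the objects concrete, here is a minimal Haskell sketch (an illustration only, not the authors' system) of a lambda-calculus with explicit substitutions and a left-to-right step function: beta creates a delayed substitution, an ES substitutes one variable occurrence at a time, and an unused ES is garbage collected. All names are invented; alpha-conversion, the paper's "at a distance" rules, and the equational theory for proof-nets are elided.

```haskell
import Control.Applicative ((<|>))

-- Terms with explicit substitutions: Sub t x u is the delayed t[x := u].
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Sub Term String Term
  deriving Show

fv :: Term -> [String]
fv (Var x)     = [x]
fv (Lam x t)   = filter (/= x) (fv t)
fv (App t u)   = fv t ++ fv u
fv (Sub t x u) = filter (/= x) (fv t) ++ fv u

-- Replace the leftmost free occurrence of x by u, if there is one.
substOne :: String -> Term -> Term -> Maybe Term
substOne x u (Var y)   = if y == x then Just u else Nothing
substOne x u (Lam y t)
  | y == x             = Nothing
  | otherwise          = Lam y <$> substOne x u t
substOne x u (App t s) =
  (\t' -> App t' s) <$> substOne x u t <|> (App t <$> substOne x u s)
substOne x u (Sub t y s)
  | y == x             = Sub t y <$> substOne x u s
  | otherwise          =
      (\t' -> Sub t' y s) <$> substOne x u t <|> (Sub t y <$> substOne x u s)

-- One left-to-right step: beta delays a substitution, an ES substitutes
-- one occurrence at a time, and an unused ES is garbage collected.
step :: Term -> Maybe Term
step (App (Lam x t) u)        = Just (Sub t x u)
step (Sub t x u)
  | x `notElem` fv t          = Just t
  | Just t' <- substOne x u t = Just (Sub t' x u)
step (App t u) = (\t' -> App t' u) <$> step t <|> (App t <$> step u)
step (Lam x t) = Lam x <$> step t
step _         = Nothing
```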
{"title":"A nonstandard standardization theorem","authors":"Beniamino Accattoli, E. Bonelli, D. Kesner, Carlos Lombardi","doi":"10.1145/2535838.2535886","DOIUrl":"https://doi.org/10.1145/2535838.2535886","url":null,"abstract":"Standardization is a fundamental notion for connecting programming languages and rewriting calculi. Since both programming languages and calculi rely on substitution for defining their dynamics, explicit substitutions (ES) help further close the gap between theory and practice. This paper focuses on standardization for the linear substitution calculus, a calculus with ES capable of mimicking reduction in lambda-calculus and linear logic proof-nets. For the latter, proof-nets can be formalized by means of a simple equational theory over the linear substitution calculus. Contrary to other extant calculi with ES, our system can be equipped with a residual theory in the sense of Lévy, which is used to prove a left-to-right standardization theorem for the calculus with ES but without the equational theory. Such a theorem, however, does not lift from the calculus with ES to proof-nets, because the notion of left-to-right derivation is not preserved by the equational theory. We then relax the notion of left-to-right standard derivation, based on a total order on redexes, to a more liberal notion of standard derivation based on partial orders. Our proofs rely on Gonthier, Lévy, and Melliès' axiomatic theory for standardization. However, we go beyond merely applying their framework, revisiting some of its key concepts: we obtain uniqueness (modulo) of standard derivations in an abstract way and we provide a coinductive characterization of their key abstract notion of external redex. This last point is then used to give a simple proof that linear head reduction --a nondeterministic strategy having a central role in the theory of linear logic-- is standard.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72891911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 58
Game semantics for interface middleweight Java
A. Murawski, N. Tzevelekos
We consider an object calculus in which open terms interact with the environment through interfaces. The calculus is intended to capture the essence of contextual interactions of Middleweight Java code. Using game semantics, we provide fully abstract models for the induced notions of contextual approximation and equivalence. These are the first denotational models of this kind.
{"title":"Game semantics for interface middleweight Java","authors":"A. Murawski, N. Tzevelekos","doi":"10.1145/2535838.2535880","DOIUrl":"https://doi.org/10.1145/2535838.2535880","url":null,"abstract":"We consider an object calculus in which open terms interact with the environment through interfaces. The calculus is intended to capture the essence of contextual interactions of Middleweight Java code. Using game semantics, we provide fully abstract models for the induced notions of contextual approximation and equivalence. These are the first denotational models of this kind.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77626236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Bias-variance tradeoffs in program analysis
Rahul Sharma, A. Nori, A. Aiken
It is often the case that increasing the precision of a program analysis leads to worse results. It is our thesis that this phenomenon is the result of fundamental limits on the ability to use precise abstract domains as the basis for inferring strong invariants of programs. We show that bias-variance tradeoffs, an idea from learning theory, can be used to explain why more precise abstractions do not necessarily lead to better results and also provides practical techniques for coping with such limitations. Learning theory captures precision using a combinatorial quantity called the VC dimension. We compute the VC dimension for different abstractions and report on its usefulness as a precision metric for program analyses. We evaluate cross validation, a technique for addressing bias-variance tradeoffs, on an industrial strength program verification tool called YOGI. The tool produced using cross validation has significantly better running time, finds new defects, and has fewer time-outs than the current production version. Finally, we make some recommendations for tackling bias-variance tradeoffs in program analysis.
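As a sketch of the cross-validation recipe the abstract refers to (generic machinery only, not YOGI's implementation), the following Haskell picks the candidate abstraction whose average held-out loss is smallest; the `loss` argument is an assumed stand-in for a measure such as verification time plus a penalty for wrong verdicts, and the benchmark list is assumed nonempty.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Split a benchmark set into k (train, heldOut) folds.
folds :: Int -> [a] -> [([a], [a])]
folds k xs =
  [ (concat (before ++ after), held)
  | i <- [0 .. length chunks - 1]
  , let (before, held : after) = splitAt i chunks ]
  where
    size = (length xs + k - 1) `div` k
    chunks = chunk xs
    chunk [] = []
    chunk ys = let (a, b) = splitAt size ys in a : chunk b

-- Pick the candidate (e.g. an abstract domain of a given precision)
-- whose average loss on the held-out folds is smallest.
crossValidate :: Int -> (c -> [a] -> [a] -> Double) -> [c] -> [a] -> c
crossValidate k loss candidates benchmarks =
  minimumBy (comparing avg) candidates
  where
    fs = folds k benchmarks
    avg c = sum [ loss c train held | (train, held) <- fs ]
          / fromIntegral (length fs)
```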
{"title":"Bias-variance tradeoffs in program analysis","authors":"Rahul Sharma, A. Nori, A. Aiken","doi":"10.1145/2535838.2535853","DOIUrl":"https://doi.org/10.1145/2535838.2535853","url":null,"abstract":"It is often the case that increasing the precision of a program analysis leads to worse results. It is our thesis that this phenomenon is the result of fundamental limits on the ability to use precise abstract domains as the basis for inferring strong invariants of programs. We show that bias-variance tradeoffs, an idea from learning theory, can be used to explain why more precise abstractions do not necessarily lead to better results and also provides practical techniques for coping with such limitations. Learning theory captures precision using a combinatorial quantity called the VC dimension. We compute the VC dimension for different abstractions and report on its usefulness as a precision metric for program analyses. We evaluate cross validation, a technique for addressing bias-variance tradeoffs, on an industrial strength program verification tool called YOGI. The tool produced using cross validation has significantly better running time, finds new defects, and has fewer time-outs than the current production version. Finally, we make some recommendations for tackling bias-variance tradeoffs in program analysis.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86933260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Combining proofs and programs in a dependently typed language
Chris Casinghino, Vilhelm Sjöberg, Stephanie Weirich
Most dependently-typed programming languages either require that all expressions terminate (e.g. Coq, Agda, and Epigram), or allow infinite loops but are inconsistent when viewed as logics (e.g. Haskell, ATS, Ωmega). Here, we combine these two approaches into a single dependently-typed core language. The language is composed of two fragments that share a common syntax and overlapping semantics: a logic that guarantees total correctness, and a call-by-value programming language that guarantees type safety but not termination. The two fragments may interact: logical expressions may be used as programs; the logic may soundly reason about potentially nonterminating programs; programs can require logical proofs as arguments; and "mobile" program values, including proofs computed at runtime, may be used as evidence by the logic. This language allows programmers to work with total and partial functions uniformly, providing a smooth path from functional programming to dependently-typed programming.
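The two-fragment idea can be suggested in a few lines of Haskell (a toy in spirit only, not the paper's core language): one shared syntax indexed by fragment, where general recursion inhabits only the programmatic fragment and every logical term embeds into it, so evaluation of the logical fragment is total by construction. Mobility of runtime values into the logic is omitted here.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Two fragments over one shared syntax, tracked in the type: L is the
-- logical (total) fragment, P the programmatic one.
data Frag = L | P

data Term (f :: Frag) where
  Lit   :: Int -> Term f
  Plus  :: Term f -> Term f -> Term f
  If0   :: Term f -> Term f -> Term f -> Term f
  Fix   :: (Term 'P -> Term 'P) -> Term 'P   -- recursion: P only
  Embed :: Term 'L -> Term 'P                -- logic used as a program

-- Total by construction: Fix and Embed cannot occur at index 'L, so
-- structural recursion terminates on every logical term.
evalL :: Term 'L -> Int
evalL (Lit n)     = n
evalL (Plus a b)  = evalL a + evalL b
evalL (If0 c t e) = if evalL c == 0 then evalL t else evalL e

-- May diverge: Fix unrolls indefinitely.
evalP :: Term 'P -> Int
evalP (Lit n)     = n
evalP (Plus a b)  = evalP a + evalP b
evalP (If0 c t e) = if evalP c == 0 then evalP t else evalP e
evalP (Fix f)     = evalP (f (Fix f))
evalP (Embed t)   = evalL t
```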
{"title":"Combining proofs and programs in a dependently typed language","authors":"Chris Casinghino, Vilhelm Sjöberg, Stephanie Weirich","doi":"10.1145/2535838.2535883","DOIUrl":"https://doi.org/10.1145/2535838.2535883","url":null,"abstract":"Most dependently-typed programming languages either require that all expressions terminate (e.g. Coq, Agda, and Epigram), or allow infinite loops but are inconsistent when viewed as logics (e.g. Haskell, ATS, Ωmega. Here, we combine these two approaches into a single dependently-typed core language. The language is composed of two fragments that share a common syntax and overlapping semantics: a logic that guarantees total correctness, and a call-by-value programming language that guarantees type safety but not termination. The two fragments may interact: logical expressions may be used as programs; the logic may soundly reason about potentially nonterminating programs; programs can require logical proofs as arguments; and \"mobile\" program values, including proofs computed at runtime, may be used as evidence by the logic. This language allows programmers to work with total and partial functions uniformly, providing a smooth path from functional programming to dependently-typed programming.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86032532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 71
A trusted mechanised JavaScript specification
Martin Bodin, A. Charguéraud, Daniele Filaretti, Philippa Gardner, S. Maffeis, Daiva Naudziuniene, Alan Schmitt, Gareth Smith
JavaScript is the most widely used web language for client-side applications. Whilst the development of JavaScript was initially just led by implementation, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of JavaScript, to clarify ambiguities in the ECMA standards, to serve as a trusted reference for high-level language compilation and JavaScript implementations, and to provide a platform for high-assurance proofs of language properties. We present JSCert, a formalisation of the current ECMA standard in the Coq proof assistant, and JSRef, a reference interpreter for JavaScript extracted from Coq to OCaml. We give a Coq proof that JSRef is correct with respect to JSCert and assess JSRef using test262, the ECMA conformance test suite. Our methodology ensures that JSCert is a comparatively accurate formulation of the English standard, which will only improve as time goes on. We have demonstrated that modern techniques of mechanised specification can handle the complexity of JavaScript.
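One spec-faithful idiom such an interpreter is built from can be sketched as follows (a toy Haskell fragment, not JSRef itself, which is OCaml extracted from Coq): ECMA-style completion records thread normal and abrupt (thrown) completions explicitly rather than through host-language exceptions, and a test262-style conformance check compares interpreter outcomes against prescribed ones. The syntax and test cases are invented.

```haskell
-- Completion records: every step yields a normal or an abrupt completion.
data Completion a = Normal a | Abrupt String deriving (Eq, Show)

data Expr
  = Num Double
  | Add Expr Expr
  | Throw String
  | TryCatch Expr Expr          -- try e1 catch { e2 }, thrown value ignored
  deriving Show

eval :: Expr -> Completion Double
eval (Num n) = Normal n
eval (Add a b) =
  case eval a of
    Abrupt e -> Abrupt e                  -- abrupt completions propagate
    Normal x -> case eval b of
      Abrupt e -> Abrupt e
      Normal y -> Normal (x + y)
eval (Throw e) = Abrupt e
eval (TryCatch t h) =
  case eval t of
    Abrupt _ -> eval h
    ok       -> ok

-- A test262-style conformance suite in miniature: programs paired with
-- the outcome the standard prescribes; the interpreter must agree on all.
conformance :: Bool
conformance = and
  [ eval (Add (Num 1) (Num 2))             == Normal 3
  , eval (Add (Num 1) (Throw "TypeError")) == Abrupt "TypeError"
  , eval (TryCatch (Throw "x") (Num 0))    == Normal 0
  ]
```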
{"title":"A trusted mechanised JavaScript specification","authors":"Martin Bodin, A. Charguéraud, Daniele Filaretti, Philippa Gardner, S. Maffeis, Daiva Naudziuniene, Alan Schmitt, Gareth Smith","doi":"10.1145/2535838.2535876","DOIUrl":"https://doi.org/10.1145/2535838.2535876","url":null,"abstract":"JavaScript is the most widely used web language for client-side applications. Whilst the development of JavaScript was initially just led by implementation, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of JavaScript, to clarify ambiguities in the ECMA standards, to serve as a trusted reference for high-level language compilation and JavaScript implementations, and to provide a platform for high-assurance proofs of language properties. We present JSCert, a formalisation of the current ECMA standard in the Coq proof assistant, and JSRef, a reference interpreter for JavaScript extracted from Coq to OCaml. We give a Coq proof that JSRef is correct with respect to JSCert and assess JSRef using test262, the ECMA conformance test suite. Our methodology ensures that JSCert is a comparatively accurate formulation of the English standard, which will only improve as time goes on. We have demonstrated that modern techniques of mechanised specification can handle the complexity of JavaScript.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84396998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 98
Counter-factual typing for debugging type errors
Sheng Chen, Martin Erwig
Changing a program in response to a type error plays an important part in modern software development. However, the generation of good type error messages remains a problem for highly expressive type systems. Existing approaches often suffer from a lack of precision in locating errors and proposing remedies. Specifically, they either fail to locate the source of the type error consistently, or they report too many potential error locations. Moreover, the change suggestions offered are often incorrect. This makes the debugging process tedious and ineffective. We present an approach to the problem of type debugging that is based on generating and filtering a comprehensive set of type-change suggestions. Specifically, we generate all (program-structure-preserving) type changes that can possibly fix the type error. These suggestions will be ranked and presented to the programmer in an iterative fashion. In some cases we also produce suggestions to change the program. In most situations, this strategy delivers the correct change suggestions quickly, and at the same time never misses any rare suggestions. The computation of the potentially huge set of type-change suggestions is efficient since it is based on a variational type inference algorithm that type checks a program with variations only once, efficiently reusing type information for shared parts. We have evaluated our method and compared it with previous approaches. Based on a large set of examples drawn from the literature, we have found that our method outperforms other approaches and provides a viable alternative.
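The generate-and-filter idea can be sketched in a few lines of Haskell (a naive toy, not the paper's variational inference, which shares typing work across all variants): enumerate every single-annotation change of an ill-typed term and keep those that typecheck. Candidate types are restricted to base types here to keep the enumeration finite, and the ranking step is left out.

```haskell
data Ty = TInt | TBool | TFun Ty Ty deriving (Eq, Show)

data Expr
  = Lit Int
  | BLit Bool
  | Var String
  | Lam String Ty Expr
  | App Expr Expr
  deriving Show

-- A tiny checker for the annotated lambda-calculus.
infer :: [(String, Ty)] -> Expr -> Maybe Ty
infer _   (Lit _)     = Just TInt
infer _   (BLit _)    = Just TBool
infer env (Var x)     = lookup x env
infer env (Lam x t e) = TFun t <$> infer ((x, t) : env) e
infer env (App f a)   = do
  TFun dom cod <- infer env f
  arg <- infer env a
  if dom == arg then Just cod else Nothing

-- Every term reachable by changing exactly one lambda annotation.
variants :: Expr -> [Expr]
variants (Lam x t e) =
  [ Lam x t' e | t' <- [TInt, TBool], t' /= t ]
    ++ [ Lam x t e' | e' <- variants e ]
variants (App f a) =
  [ App f' a | f' <- variants f ] ++ [ App f a' | a' <- variants a ]
variants _ = []

-- Keep the single changes that fix the error.
suggest :: Expr -> [Expr]
suggest e = [ e' | e' <- variants e, infer [] e' /= Nothing ]

-- e.g. suggest (App (Lam "x" TBool (Var "x")) (Lit 1))
--   ==> [App (Lam "x" TInt (Var "x")) (Lit 1)]
```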
{"title":"Counter-factual typing for debugging type errors","authors":"Sheng Chen, Martin Erwig","doi":"10.1145/2535838.2535863","DOIUrl":"https://doi.org/10.1145/2535838.2535863","url":null,"abstract":"Changing a program in response to a type error plays an important part in modern software development. However, the generation of good type error messages remains a problem for highly expressive type systems. Existing approaches often suffer from a lack of precision in locating errors and proposing remedies. Specifically, they either fail to locate the source of the type error consistently, or they report too many potential error locations. Moreover, the change suggestions offered are often incorrect. This makes the debugging process tedious and ineffective. We present an approach to the problem of type debugging that is based on generating and filtering a comprehensive set of type-change suggestions. Specifically, we generate all (program-structure-preserving) type changes that can possibly fix the type error. These suggestions will be ranked and presented to the programmer in an iterative fashion. In some cases we also produce suggestions to change the program. In most situations, this strategy delivers the correct change suggestions quickly, and at the same time never misses any rare suggestions. The computation of the potentially huge set of type-change suggestions is efficient since it is based on a variational type inference algorithm that type checks a program with variations only once, efficiently reusing type information for shared parts. We have evaluated our method and compared it with previous approaches. Based on a large set of examples drawn from the literature, we have found that our method outperforms other approaches and provides a viable alternative.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90193668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 65
A sound and complete abstraction for reasoning about parallel prefix sums
Nathan Chong, A. Donaldson, J. Ketema
Prefix sums are key building blocks in the implementation of many concurrent software applications, and recently much work has gone into efficiently implementing prefix sums to run on massively parallel graphics processing units (GPUs). Because they lie at the heart of many GPU-accelerated applications, the correctness of prefix sum implementations is of prime importance. We introduce a novel abstraction, the interval of summations, that allows scalable reasoning about implementations of prefix sums. We present this abstraction as a monoid, and prove a soundness and completeness result showing that a generic sequential prefix sum implementation is correct for an array of length n if and only if it computes the correct result for a specific test case when instantiated with the interval of summations monoid. This allows correctness to be established by running a single test where the input and result require O(n lg(n)) space. This improves upon an existing result by Sheeran where the input requires O(n lg(n)) space and the result O(n^2 lg(n)) space, and is more feasible for large n than a method by Voigtlaender that uses O(n) space for the input and result but requires running O(n^2) tests. We then extend our abstraction and results to the context of data-parallel programs, developing an automated verification method for GPU implementations of prefix sums. Our method uses static verification to prove that a generic prefix sum implementation is data race-free, after which functional correctness of the implementation can be determined by running a single test case under the interval of summations abstraction. We present an experimental evaluation using four different prefix sum algorithms, showing that our method is highly automatic, scales to large thread counts, and significantly outperforms Voigtlaender's method when applied to large arrays.
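The abstraction fits naturally as a Haskell Monoid; the sketch below uses invented names and paraphrases the paper's theorem in its comments rather than reproducing the formalization. An interval stands for the sum of a contiguous slice of the input, only adjacent intervals compose, and a single run of a generic prefix sum on the singleton intervals checks correctness for a given length.

```haskell
-- Iv i j stands for x_i <> ... <> x_(j-1); only adjacent intervals
-- compose, anything else collapses to Bad.
data Interval = Unit | Iv Int Int | Bad deriving (Eq, Show)

instance Semigroup Interval where
  Unit <> y = y
  x <> Unit = x
  Iv i j <> Iv j' k | j == j' = Iv i k
  _ <> _ = Bad

instance Monoid Interval where
  mempty = Unit

-- The implementation under test, written once for an arbitrary monoid,
-- as the paper requires of generic sequential prefix sums.
prefixSum :: Monoid a => [a] -> [a]
prefixSum = scanl1 (<>)

-- The single test: feed in the singleton intervals; the output must be
-- exactly the prefixes [0, i+1). Per the paper's soundness/completeness
-- result, passing this for length n certifies the generic implementation
-- for every monoid and every input of that length.
testOnce :: Int -> Bool
testOnce n =
  prefixSum [ Iv i (i + 1) | i <- [0 .. n - 1] ]
    == [ Iv 0 (i + 1) | i <- [0 .. n - 1] ]
```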
{"title":"A sound and complete abstraction for reasoning about parallel prefix sums","authors":"Nathan Chong, A. Donaldson, J. Ketema","doi":"10.1145/2535838.2535882","DOIUrl":"https://doi.org/10.1145/2535838.2535882","url":null,"abstract":"Prefix sums are key building blocks in the implementation of many concurrent software applications, and recently much work has gone into efficiently implementing prefix sums to run on massively parallel graphics processing units (GPUs). Because they lie at the heart of many GPU-accelerated applications, the correctness of prefix sum implementations is of prime importance. We introduce a novel abstraction, the interval of summations, that allows scalable reasoning about implementations of prefix sums. We present this abstraction as a monoid, and prove a soundness and completeness result showing that a generic sequential prefix sum implementation is correct for an array of length $n$ if and only if it computes the correct result for a specific test case when instantiated with the interval of summations monoid. This allows correctness to be established by running a single test where the input and result require O(n lg(n)) space. This improves upon an existing result by Sheeran where the input requires O(n lg(n)) space and the result O(n2 lg(n)) space, and is more feasible for large n than a method by Voigtlaender that uses O(n) space for the input and result but requires running O(n2) tests. We then extend our abstraction and results to the context of data-parallel programs, developing an automated verification method for GPU implementations of prefix sums. Our method uses static verification to prove that a generic prefix sum implementation is data race-free, after which functional correctness of the implementation can be determined by running a single test case under the interval of summations abstraction. We present an experimental evaluation using four different prefix sum algorithms, showing that our method is highly automatic, scales to large thread counts, and significantly outperforms Voigtlaender's method when applied to large arrays.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90374862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Consistency analysis of decision-making programs
Swarat Chaudhuri, Azadeh Farzan, Zachary Kincaid
Applications in many areas of computing make discrete decisions under uncertainty, for reasons such as limited numerical precision in calculations and errors in sensor-derived inputs. As a result, individual decisions made by such programs may be nondeterministic, and lead to contradictory decisions at different points of an execution. This means that an otherwise correct program may execute along paths, that it would not follow under its ideal semantics, violating essential program invariants on the way. A program is said to be consistent if it does not suffer from this problem despite uncertainty in decisions. In this paper, we present a sound, automatic program analysis for verifying that a program is consistent in this sense. Our analysis proves that each decision made along a program execution is consistent with the decisions made earlier in the execution. The proof is done by generating an invariant that abstracts the set of all decisions made along executions that end at a program location l, then verifying, using a fixpoint constraint-solver, that no contradiction can be derived when these decisions are combined with new decisions made at l. We evaluate our analysis on a collection of programs implementing algorithms in computational geometry. Consistency is known to be a critical, frequently-violated, and thoroughly studied correctness property in geometry, but ours is the first attempt at automated verification of consistency of geometric algorithms. Our benchmark suite consists of implementations of convex hull computation, triangulation, and point location algorithms. On almost all examples that are not consistent (with two exceptions), our analysis is able to verify consistency within a few minutes.
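For a concrete instance of the kind of decision the analysis reasons about, consider the floating-point orientation predicate at the core of convex-hull code (a Haskell sketch; the benchmarks' actual implementations may differ). Exact arithmetic guarantees the antisymmetry invariant below; Double arithmetic can violate it on nearly collinear points, which is exactly a pair of contradictory decisions.

```haskell
type Pt = (Double, Double)

-- Sign of the 2x2 determinant: is c left of, right of, or on the
-- directed line from a to b?
orient :: Pt -> Pt -> Pt -> Ordering
orient (ax, ay) (bx, by) (cx, cy) =
  compare ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax)) 0

-- A consistency invariant hull code relies on: swapping the first two
-- arguments must flip the verdict. It holds in exact arithmetic but can
-- fail with Doubles on nearly collinear inputs, the kind of
-- contradictory decision pair the analysis rules out.
antisymmetric :: Pt -> Pt -> Pt -> Bool
antisymmetric a b c = orient a b c == flipOrd (orient b a c)
  where
    flipOrd LT = GT
    flipOrd GT = LT
    flipOrd EQ = EQ
```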
{"title":"Consistency analysis of decision-making programs","authors":"Swarat Chaudhuri, Azadeh Farzan, Zachary Kincaid","doi":"10.1145/2535838.2535858","DOIUrl":"https://doi.org/10.1145/2535838.2535858","url":null,"abstract":"Applications in many areas of computing make discrete decisions under uncertainty, for reasons such as limited numerical precision in calculations and errors in sensor-derived inputs. As a result, individual decisions made by such programs may be nondeterministic, and lead to contradictory decisions at different points of an execution. This means that an otherwise correct program may execute along paths, that it would not follow under its ideal semantics, violating essential program invariants on the way. A program is said to be consistent if it does not suffer from this problem despite uncertainty in decisions. In this paper, we present a sound, automatic program analysis for verifying that a program is consistent in this sense. Our analysis proves that each decision made along a program execution is consistent with the decisions made earlier in the execution. The proof is done by generating an invariant that abstracts the set of all decisions made along executions that end at a program location l, then verifying, using a fixpoint constraint-solver, that no contradiction can be derived when these decisions are combined with new decisions made at l. We evaluate our analysis on a collection of programs implementing algorithms in computational geometry. Consistency is known to be a critical, frequently-violated, and thoroughly studied correctness property in geometry, but ours is the first attempt at automated verification of consistency of geometric algorithms. Our benchmark suite consists of implementations of convex hull computation, triangulation, and point location algorithms. On almost all examples that are not consistent (with two exceptions), our analysis is able to verify consistency within a few minutes.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91392154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Toward general diagnosis of static errors
Danfeng Zhang, A. Myers
We introduce a general way to locate programmer mistakes that are detected by static analyses such as type checking. The program analysis is expressed in a constraint language in which mistakes result in unsatisfiable constraints. Given an unsatisfiable system of constraints, both satisfiable and unsatisfiable constraints are analyzed, to identify the program expressions most likely to be the cause of unsatisfiability. The likelihood of different error explanations is evaluated under the assumption that the programmer's code is mostly correct, so the simplest explanations are chosen, following Bayesian principles. For analyses that rely on programmer-stated assumptions, the diagnosis also identifies assumptions likely to have been omitted. The new error diagnosis approach has been implemented for two very different program analyses: type inference in OCaml and information flow checking in Jif. The effectiveness of the approach is evaluated using previously collected programs containing errors. The results show that when compared to existing compilers and other tools, the general technique identifies the location of programmer errors significantly more accurately.
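A drastically simplified Haskell sketch of the diagnosis loop follows, with invented toy constraints, a deliberately naive satisfiability check (no var-to-var propagation, no constant clashes), and "fewest blamed expressions" as a crude stand-in for the paper's Bayesian ranking.

```haskell
import Data.List (nub, sortOn, subsequences)

-- Constraints are equalities between type terms, each tagged with the
-- source expression that induced it.
data TTerm = V String | C String deriving (Eq, Show)
type Constraint = (String, TTerm, TTerm)   -- (source expr, lhs, rhs)

-- Naive satisfiability: each variable may be equated to at most one
-- constant.
satisfiable :: [Constraint] -> Bool
satisfiable cs = all ok (nub [ v | (_, a, b) <- cs, V v <- [a, b] ])
  where
    ok v = length (nub [ c | (_, a, b) <- cs
                           , (V v', C c) <- [(a, b), (b, a)]
                           , v' == v ]) <= 1

-- Smallest sets of blamed expressions whose removal restores
-- satisfiability, smallest first (exponential search, fine for a toy).
diagnose :: [Constraint] -> [[String]]
diagnose cs =
  take 3 [ blamed
         | blamed <- sortOn length (subsequences exprs)
         , satisfiable [ c | c@(e, _, _) <- cs, e `notElem` blamed ] ]
  where exprs = nub [ e | (e, _, _) <- cs ]

-- e.g. diagnose [("cond", V "x", C "Bool"), ("plus", V "x", C "Int")]
--   ==> [["cond"], ["plus"], ["cond","plus"]]
```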
{"title":"Toward general diagnosis of static errors","authors":"Danfeng Zhang, A. Myers","doi":"10.1145/2535838.2535870","DOIUrl":"https://doi.org/10.1145/2535838.2535870","url":null,"abstract":"We introduce a general way to locate programmer mistakes that are detected by static analyses such as type checking. The program analysis is expressed in a constraint language in which mistakes result in unsatisfiable constraints. Given an unsatisfiable system of constraints, both satisfiable and unsatisfiable constraints are analyzed, to identify the program expressions most likely to be the cause of unsatisfiability. The likelihood of different error explanations is evaluated under the assumption that the programmer's code is mostly correct, so the simplest explanations are chosen, following Bayesian principles. For analyses that rely on programmer-stated assumptions, the diagnosis also identifies assumptions likely to have been omitted. The new error diagnosis approach has been implemented for two very different program analyses: type inference in OCaml and information flow checking in Jif. The effectiveness of the approach is evaluated using previously collected programs containing errors. The results show that when compared to existing compilers and other tools, the general technique identifies the location of programmer errors significantly more accurately.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78973918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 53
Profiling for laziness
Stephen Chang, M. Felleisen
While many programmers appreciate the benefits of lazy programming at an abstract level, determining which parts of a concrete program to evaluate lazily poses a significant challenge for most of them. Over the past thirty years, experts have published numerous papers on the problem, but developing this level of expertise requires a significant amount of experience. We present a profiling-based technique that captures and automates this expertise for the insertion of laziness annotations into strict programs. To make this idea precise, we show how to equip a formal semantics with a metric that measures waste in an evaluation. Then we explain how to implement this metric as a dynamic profiling tool that suggests where to insert laziness into a program. Finally, we present evidence that our profiler's suggestions either match or improve on an expert's use of laziness in a range of real-world applications.
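The flavor of the waste metric can be suggested with a toy instrumented evaluator (a Haskell sketch, not the paper's semantics or profiler): a strict interpreter that counts argument evaluations whose result is never used, approximated syntactically here by the bound variable not occurring free in the function body. Call sites accumulating high counts are the candidates for laziness annotations.

```haskell
data Expr
  = Lit Int
  | Var String
  | Lam String Expr
  | App Expr Expr
  | Add Expr Expr
  deriving Show

data Val = VInt Int | VClo String Expr [(String, Val)]

free :: Expr -> [String]
free (Lit _)   = []
free (Var x)   = [x]
free (Lam x b) = filter (/= x) (free b)
free (App f a) = free f ++ free a
free (Add a b) = free a ++ free b

-- Strict evaluation, returning the value together with the number of
-- wasted argument evaluations: an argument counts as wasted when the
-- bound variable does not occur free in the function body (a syntactic
-- approximation of "never demanded").
eval :: [(String, Val)] -> Expr -> (Val, Int)
eval _   (Lit n)   = (VInt n, 0)
eval env (Var x)   = (maybe (error "unbound variable") id (lookup x env), 0)
eval env (Lam x b) = (VClo x b env, 0)
eval env (Add a b) =
  let (VInt m, wa) = eval env a
      (VInt n, wb) = eval env b
  in (VInt (m + n), wa + wb)
eval env (App f a) =
  let (VClo x body cenv, wf) = eval env f
      (va, wa) = eval env a            -- strict: the argument always runs
      wasted = if x `elem` free body then 0 else 1
      (r, wb) = eval ((x, va) : cenv) body
  in (r, wf + wa + wasted + wb)

-- e.g. snd (eval [] (App (Lam "x" (Lit 42)) (Add (Lit 1) (Lit 2)))) ==> 1
```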
{"title":"Profiling for laziness","authors":"Stephen Chang, M. Felleisen","doi":"10.1145/2535838.2535887","DOIUrl":"https://doi.org/10.1145/2535838.2535887","url":null,"abstract":"While many programmers appreciate the benefits of lazy programming at an abstract level, determining which parts of a concrete program to evaluate lazily poses a significant challenge for most of them. Over the past thirty years, experts have published numerous papers on the problem, but developing this level of expertise requires a significant amount of experience. We present a profiling-based technique that captures and automates this expertise for the insertion of laziness annotations into strict programs. To make this idea precise, we show how to equip a formal semantics with a metric that measures waste in an evaluation. Then we explain how to implement this metric as a dynamic profiling tool that suggests where to insert laziness into a program. Finally, we present evidence that our profiler's suggestions either match or improve on an expert's use of laziness in a range of real-world applications.","PeriodicalId":20683,"journal":{"name":"Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86235769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8