
Latest publications: Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages

Game semantics for interface middleweight Java
A. Murawski, N. Tzevelekos
We consider an object calculus in which open terms interact with the environment through interfaces. The calculus is intended to capture the essence of contextual interactions of Middleweight Java code. Using game semantics, we provide fully abstract models for the induced notions of contextual approximation and equivalence. These are the first denotational models of this kind.
DOI: 10.1145/2535838.2535880 · Published: 2014-01-08
Citations: 11
A trusted mechanised JavaScript specification
Martin Bodin, A. Charguéraud, Daniele Filaretti, Philippa Gardner, S. Maffeis, Daiva Naudziuniene, Alan Schmitt, Gareth Smith
JavaScript is the most widely used web language for client-side applications. Whilst the development of JavaScript was initially just led by implementation, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of JavaScript, to clarify ambiguities in the ECMA standards, to serve as a trusted reference for high-level language compilation and JavaScript implementations, and to provide a platform for high-assurance proofs of language properties. We present JSCert, a formalisation of the current ECMA standard in the Coq proof assistant, and JSRef, a reference interpreter for JavaScript extracted from Coq to OCaml. We give a Coq proof that JSRef is correct with respect to JSCert and assess JSRef using test262, the ECMA conformance test suite. Our methodology ensures that JSCert is a comparatively accurate formulation of the English standard, which will only improve as time goes on. We have demonstrated that modern techniques of mechanised specification can handle the complexity of JavaScript.
DOI: 10.1145/2535838.2535876 · Published: 2014-01-08
Citations: 98
Backpack: retrofitting Haskell with interfaces
S. Kilpatrick, Derek Dreyer, S. Jones, S. Marlow
Module systems like that of Haskell permit only a weak form of modularity in which module implementations depend directly on other implementations and must be processed in dependency order. Module systems like that of ML, on the other hand, permit a stronger form of modularity in which explicit interfaces express assumptions about dependencies, and each module can be typechecked and reasoned about independently. In this paper, we present Backpack, a new language for building separately-typecheckable *packages* on top of a weak module system like Haskell's. The design of Backpack is inspired by the MixML module calculus of Rossberg and Dreyer, but differs significantly in detail. Like MixML, Backpack supports explicit interfaces and recursive linking. Unlike MixML, Backpack supports a more flexible applicative semantics of instantiation. Moreover, its design is motivated less by foundational concerns and more by the practical concern of integration into Haskell, which has led us to advocate simplicity---in both the syntax and semantics of Backpack---over raw expressive power. The semantics of Backpack packages is defined by elaboration to sets of Haskell modules and binary interface files, thus showing how Backpack maintains interoperability with Haskell while extending it with separate typechecking. Lastly, although Backpack is geared toward integration into Haskell, its design and semantics are largely agnostic with respect to the details of the underlying core language.
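The core idea, typechecking client code against an explicit interface so that implementations can be linked in later, can be illustrated with a loose Python analogy using structural protocols (all names here are invented for illustration; Backpack itself operates on Haskell packages and module signatures, not classes):

```python
from typing import Protocol

# Hypothetical interface ("signature") that client code is checked against,
# independently of any concrete implementation.
class Store(Protocol):
    def put(self, key: str, value: int) -> None: ...
    def get(self, key: str) -> int: ...

def client(s: Store) -> int:
    # Typechecks against the interface alone; no implementation is in scope.
    s.put("x", 41)
    return s.get("x") + 1

class DictStore:
    # One possible implementation, "linked" against the interface at call time.
    def __init__(self) -> None:
        self.d: dict[str, int] = {}
    def put(self, key: str, value: int) -> None:
        self.d[key] = value
    def get(self, key: str) -> int:
        return self.d[key]

print(client(DictStore()))  # -> 42
```

The analogy is deliberately weak: Backpack additionally supports recursive linking and an applicative semantics of instantiation, neither of which this sketch captures.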
DOI: 10.1145/2535838.2535884 · Published: 2014-01-08
Citations: 12
A type-directed abstraction refinement approach to higher-order model checking
S. Ramsay, R. Neatherway, C. Ong
The trivial-automaton model checking problem for higher-order recursion schemes has become a widely studied object in connection with the automatic verification of higher-order programs. The problem is formidably hard: despite considerable progress in recent years, no decision procedures have been demonstrated to scale robustly beyond recursion schemes that comprise more than a few hundred rewrite rules. We present a new, fixed-parameter polynomial time algorithm, based on a novel, type directed form of abstraction refinement in which behaviours of a scheme are distinguished by the abstraction according to the intersection types that they inhabit (the properties that they satisfy). Unlike other intersection type approaches, our algorithm reasons both about acceptance by the property automaton and acceptance by its dual, simultaneously, in order to minimize the amount of work done by converging on the solution to a problem instance from both sides. We have constructed Preface, a prototype implementation of the algorithm, and assembled an extensive body of evidence to demonstrate empirically that the algorithm readily scales to recursion schemes of several thousand rules, well beyond the capabilities of current state-of-the-art higher-order model checkers.
DOI: 10.1145/2535838.2535873 · Published: 2014-01-08
Citations: 53
Counter-factual typing for debugging type errors
Sheng Chen, Martin Erwig
Changing a program in response to a type error plays an important part in modern software development. However, the generation of good type error messages remains a problem for highly expressive type systems. Existing approaches often suffer from a lack of precision in locating errors and proposing remedies. Specifically, they either fail to locate the source of the type error consistently, or they report too many potential error locations. Moreover, the change suggestions offered are often incorrect. This makes the debugging process tedious and ineffective. We present an approach to the problem of type debugging that is based on generating and filtering a comprehensive set of type-change suggestions. Specifically, we generate all (program-structure-preserving) type changes that can possibly fix the type error. These suggestions will be ranked and presented to the programmer in an iterative fashion. In some cases we also produce suggestions to change the program. In most situations, this strategy delivers the correct change suggestions quickly, and at the same time never misses any rare suggestions. The computation of the potentially huge set of type-change suggestions is efficient since it is based on a variational type inference algorithm that type checks a program with variations only once, efficiently reusing type information for shared parts. We have evaluated our method and compared it with previous approaches. Based on a large set of examples drawn from the literature, we have found that our method outperforms other approaches and provides a viable alternative.
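A greatly simplified sketch of the "generate, filter, rank" idea (toy types and names invented; the paper's variational type inference is far more general): for the ill-typed application `f x`, enumerate every assignment of types to `f` and `x`, keep the assignments under which the program typechecks, and rank suggestions by how far they stray from the original types.

```python
from itertools import product

BASE = ["Int", "Bool"]
# Type universe: base types plus first-order functions (arg, result).
TYPES = BASE + [(a, b) for a in BASE for b in BASE]

def app_typechecks(f_ty, x_ty):
    """`f x` typechecks iff f is a function whose argument type matches x."""
    return isinstance(f_ty, tuple) and f_ty[0] == x_ty

original = {"f": ("Int", "Int"), "x": "Bool"}  # ill-typed: f expects an Int

suggestions = []
for f_ty, x_ty in product(TYPES, TYPES):
    if app_typechecks(f_ty, x_ty):
        changed = (f_ty != original["f"]) + (x_ty != original["x"])
        suggestions.append((changed, f_ty, x_ty))
suggestions.sort(key=lambda s: s[0])  # fewest type changes first

print(suggestions[0])  # a fix requiring only one type change
```

The brute-force enumeration here is exponential in general; the paper's contribution is making this tractable by typechecking the program with all variations at once and sharing type information between them.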
DOI: 10.1145/2535838.2535863 · Published: 2014-01-08
Citations: 65
Consistency analysis of decision-making programs
Swarat Chaudhuri, Azadeh Farzan, Zachary Kincaid
Applications in many areas of computing make discrete decisions under uncertainty, for reasons such as limited numerical precision in calculations and errors in sensor-derived inputs. As a result, individual decisions made by such programs may be nondeterministic, and lead to contradictory decisions at different points of an execution. This means that an otherwise correct program may execute along paths that it would not follow under its ideal semantics, violating essential program invariants on the way. A program is said to be consistent if it does not suffer from this problem despite uncertainty in decisions. In this paper, we present a sound, automatic program analysis for verifying that a program is consistent in this sense. Our analysis proves that each decision made along a program execution is consistent with the decisions made earlier in the execution. The proof is done by generating an invariant that abstracts the set of all decisions made along executions that end at a program location l, then verifying, using a fixpoint constraint-solver, that no contradiction can be derived when these decisions are combined with new decisions made at l. We evaluate our analysis on a collection of programs implementing algorithms in computational geometry. Consistency is known to be a critical, frequently-violated, and thoroughly studied correctness property in geometry, but ours is the first attempt at automated verification of consistency of geometric algorithms. Our benchmark suite consists of implementations of convex hull computation, triangulation, and point location algorithms.
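A minimal illustration of the underlying phenomenon (not the paper's analysis): the same branch condition, evaluated under IEEE floating point and under ideal exact arithmetic, can decide differently, which is exactly the kind of decision uncertainty that can steer a program down a path its ideal semantics forbids.

```python
from fractions import Fraction

def below_threshold(a, b, c):
    # Toy decision: "is a + b at most c?" -- e.g. a point-vs-line test
    # reduced to a single comparison.
    return a + b <= c

# IEEE doubles: 0.1 + 0.2 rounds to 0.30000000000000004, so the test fails.
float_decision = below_threshold(0.1, 0.2, 0.3)

# Exact rationals (the "ideal semantics"): 1/10 + 2/10 == 3/10, so it holds.
exact_decision = below_threshold(Fraction(1, 10), Fraction(2, 10),
                                 Fraction(3, 10))

print(float_decision, exact_decision)  # False True
```

A program that makes this decision twice, computed two algebraically equivalent but numerically different ways, can contradict itself within a single execution; the paper's analysis proves such contradictions impossible.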
DOI: 10.1145/2535838.2535858 · Published: 2014-01-08
Citations: 5
A sound and complete abstraction for reasoning about parallel prefix sums
Nathan Chong, A. Donaldson, J. Ketema
Prefix sums are key building blocks in the implementation of many concurrent software applications, and recently much work has gone into efficiently implementing prefix sums to run on massively parallel graphics processing units (GPUs). Because they lie at the heart of many GPU-accelerated applications, the correctness of prefix sum implementations is of prime importance. We introduce a novel abstraction, the interval of summations, that allows scalable reasoning about implementations of prefix sums. We present this abstraction as a monoid, and prove a soundness and completeness result showing that a generic sequential prefix sum implementation is correct for an array of length n if and only if it computes the correct result for a specific test case when instantiated with the interval of summations monoid. This allows correctness to be established by running a single test where the input and result require O(n lg(n)) space. This improves upon an existing result by Sheeran where the input requires O(n lg(n)) space and the result O(n^2 lg(n)) space, and is more feasible for large n than a method by Voigtlaender that uses O(n) space for the input and result but requires running O(n^2) tests. We then extend our abstraction and results to the context of data-parallel programs, developing an automated verification method for GPU implementations of prefix sums. Our method uses static verification to prove that a generic prefix sum implementation is data race-free, after which functional correctness of the implementation can be determined by running a single test case under the interval of summations abstraction.
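The flavour of the interval-of-summations monoid can be sketched as follows (a simplified reconstruction; representation details are invented): abstract input element i as the symbolic interval (i, i+1), let the monoid operation join only adjacent intervals, and check that a generic scan, run once on this symbolic input, produces (0, k+1) at every output position k.

```python
BOT = ("bot", "bot")  # result of joining non-adjacent intervals

def join(x, y):
    """Interval monoid: (a,b) * (b,c) = (a,c); anything else is bottom."""
    if BOT not in (x, y) and x[1] == y[0]:
        return (x[0], y[1])
    return BOT

def sequential_prefix_sum(xs, op):
    """A generic left-to-right inclusive scan, parametric in the operation."""
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out

n = 8
inputs = [(i, i + 1) for i in range(n)]  # element i abstracted as (i, i+1)
result = sequential_prefix_sum(inputs, join)

# Output k must be the interval (0, k+1), i.e. the sum of elements 0..k.
assert result == [(0, k + 1) for k in range(n)]
print(result[:3])  # [(0, 1), (0, 2), (0, 3)]
```

The point of the paper's theorem is that passing this single symbolic test implies correctness for every input array of that length and every underlying associative operation, not just for this one run.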
DOI: 10.1145/2535838.2535882 · Published: 2014-01-08
Citations: 29
Bias-variance tradeoffs in program analysis
Rahul Sharma, A. Nori, A. Aiken
It is often the case that increasing the precision of a program analysis leads to worse results. It is our thesis that this phenomenon is the result of fundamental limits on the ability to use precise abstract domains as the basis for inferring strong invariants of programs. We show that bias-variance tradeoffs, an idea from learning theory, can be used to explain why more precise abstractions do not necessarily lead to better results and also provides practical techniques for coping with such limitations. Learning theory captures precision using a combinatorial quantity called the VC dimension. We compute the VC dimension for different abstractions and report on its usefulness as a precision metric for program analyses. We evaluate cross validation, a technique for addressing bias-variance tradeoffs, on an industrial strength program verification tool called YOGI. The tool produced using cross validation has significantly better running time, finds new defects, and has fewer time-outs than the current production version. Finally, we make some recommendations for tackling bias-variance tradeoffs in program analysis.
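The bias-variance phenomenon the paper borrows from learning theory can be seen in a toy cross-validation experiment (data and models invented for illustration, not taken from the paper): leave-one-out cross validation can prefer a coarse, high-bias predictor over a more "precise", high-variance one.

```python
# Noisy observations of a roughly constant quantity.
data = [(0, 4.8), (1, 5.3), (2, 4.9), (3, 5.2), (4, 5.0), (5, 4.7), (6, 5.1)]

def mean_model(train, x):
    # Coarse, high-bias model: ignore x entirely, predict the training mean.
    return sum(y for _, y in train) / len(train)

def nn_model(train, x):
    # "Precise", high-variance model: copy the nearest neighbour's value,
    # which also copies its noise.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def loocv_mse(model):
    # Leave-one-out cross validation: hold out each point in turn.
    errs = []
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        errs.append((model(train, x) - y) ** 2)
    return sum(errs) / len(errs)

# On this data the coarse model generalises better.
assert loocv_mse(mean_model) < loocv_mse(nn_model)
```

In the paper's setting the "models" are abstract domains of differing precision and the held-out data are verification problem instances, with YOGI's configuration chosen by exactly this kind of validation.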
DOI: 10.1145/2535838.2535853 · Published: 2014-01-08
Citations: 29
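The bias-variance tradeoff the abstract appeals to can be seen in a toy setting. The sketch below is purely illustrative — it is not code from the paper or from YOGI — and compares a coarse model (a mean predictor, high bias) against a precise one (a nearest-neighbour memorizer, high variance) using leave-one-out cross validation, the same model-selection technique the authors evaluate:

```python
def loo_cv_error(fit, data):
    """Leave-one-out cross validation: mean squared error on held-out points."""
    total = 0.0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        predict = fit(train)
        x, y = data[i]
        total += (predict(x) - y) ** 2
    return total / len(data)

def fit_mean(train):
    # Coarse model: always predict the training mean (high bias, low variance).
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_nearest(train):
    # Precise model: memorize the data and answer with the nearest training
    # point (low bias, high variance).
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

# A noisy constant signal: alternating +/-0.5 deviations around 1.0.
data = [(x, 1.0 + (0.5 if x % 2 == 0 else -0.5)) for x in range(20)]

errors = {"mean": loo_cv_error(fit_mean, data),
          "nearest": loo_cv_error(fit_nearest, data)}
best = min(errors, key=errors.get)  # cross validation prefers the coarser model
```

On this data the precise model fits the noise and generalizes worse, so cross validation selects the coarser abstraction — the phenomenon the paper's thesis describes for program analyses.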
Profiling for laziness
Stephen Chang, M. Felleisen
While many programmers appreciate the benefits of lazy programming at an abstract level, determining which parts of a concrete program to evaluate lazily poses a significant challenge for most of them. Over the past thirty years, experts have published numerous papers on the problem, but developing this level of expertise requires a significant amount of experience. We present a profiling-based technique that captures and automates this expertise for the insertion of laziness annotations into strict programs. To make this idea precise, we show how to equip a formal semantics with a metric that measures waste in an evaluation. Then we explain how to implement this metric as a dynamic profiling tool that suggests where to insert laziness into a program. Finally, we present evidence that our profiler's suggestions either match or improve on an expert's use of laziness in a range of real-world applications.
DOI: 10.1145/2535838.2535887
Citations: 8
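The notion of waste that the abstract's metric measures can be sketched by analogy in Python (this is an illustration of the idea, not the paper's formal semantics or its profiler): a strict pipeline computes every element up front, a lazy one — here a generator — computes only what is demanded, and the gap in evaluation counts is the waste.

```python
calls = {"strict": 0, "lazy": 0}

def expensive(x, mode):
    calls[mode] += 1          # count every evaluation of an element
    return x * x

def strict_pipeline(xs):
    return [expensive(x, "strict") for x in xs]   # evaluates everything now

def lazy_pipeline(xs):
    return (expensive(x, "lazy") for x in xs)     # evaluates on demand

def take(seq, n):
    # Consume only the first n elements of the pipeline.
    out = []
    for v in seq:
        out.append(v)
        if len(out) == n:
            return out
    return out

take(strict_pipeline(range(1000)), 5)
take(lazy_pipeline(range(1000)), 5)
waste = calls["strict"] - calls["lazy"]  # evaluations the strict version wasted
```

When only 5 of 1000 results are consumed, the strict pipeline performs 995 wasted evaluations — exactly the kind of site a laziness profiler would flag as a candidate for a laziness annotation.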
Toward general diagnosis of static errors
Danfeng Zhang, A. Myers
We introduce a general way to locate programmer mistakes that are detected by static analyses such as type checking. The program analysis is expressed in a constraint language in which mistakes result in unsatisfiable constraints. Given an unsatisfiable system of constraints, both satisfiable and unsatisfiable constraints are analyzed, to identify the program expressions most likely to be the cause of unsatisfiability. The likelihood of different error explanations is evaluated under the assumption that the programmer's code is mostly correct, so the simplest explanations are chosen, following Bayesian principles. For analyses that rely on programmer-stated assumptions, the diagnosis also identifies assumptions likely to have been omitted. The new error diagnosis approach has been implemented for two very different program analyses: type inference in OCaml and information flow checking in Jif. The effectiveness of the approach is evaluated using previously collected programs containing errors. The results show that when compared to existing compilers and other tools, the general technique identifies the location of programmer errors significantly more accurately.
DOI: 10.1145/2535838.2535870
Citations: 53
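The "simplest explanation" principle in the abstract can be illustrated with a toy constraint system (hypothetical code, not the paper's algorithm or its OCaml/Jif implementations): each constraint assigns a type to an expression, the system is unsatisfiable when one expression is forced to two types, and the diagnosis is the smallest set of constraints whose removal restores satisfiability.

```python
from itertools import combinations

# Toy constraints: each requires an expression to have a given type.
# "f" is constrained to both int and str, so the system is unsatisfiable.
constraints = [("x", "int"), ("y", "str"), ("f", "int"), ("f", "str")]

def satisfiable(cs):
    seen = {}
    for expr, ty in cs:
        if seen.setdefault(expr, ty) != ty:
            return False
    return True

def diagnose(cs):
    """Return a smallest set of constraints whose removal makes the rest
    satisfiable -- the simplest explanation of the error."""
    for k in range(1, len(cs) + 1):
        for drop in combinations(range(len(cs)), k):
            rest = [c for i, c in enumerate(cs) if i not in drop]
            if satisfiable(rest):
                return [cs[i] for i in drop]
    return []

blamed = diagnose(constraints)  # blames one of the conflicting "f" constraints
```

Searching smallest explanations first mirrors the Bayesian assumption that the programmer's code is mostly correct; the paper's approach additionally weights explanations by likelihood rather than enumerating subsets exhaustively.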
Journal: Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages