
Latest Publications: Proceedings of the ACM on Programming Languages

Efficient Matching of Regular Expressions with Lookaround Assertions
IF 1.8 Q1 Engineering Pub Date: 2024-01-05 DOI: 10.1145/3632934
Konstantinos Mamouras, A. Chattopadhyay
Regular expressions have been extended with lookaround assertions, which are subdivided into lookahead and lookbehind assertions. These constructs are used to refine when a match for a pattern occurs in the input text based on the surrounding context. Current implementation techniques for lookaround involve backtracking search, which can give rise to running time that is super-linear in the length of input text. In this paper, we first consider a formal mathematical semantics for lookaround, which complements the commonly used operational understanding of lookaround in terms of a backtracking implementation. Our formal semantics allows us to establish several equational properties for simplifying lookaround assertions. Additionally, we propose a new algorithm for matching regular expressions with lookaround that has time complexity O(m · n), where m is the size of the regular expression and n is the length of the input text. The algorithm works by evaluating lookaround assertions in a bottom-up manner. Our algorithm makes use of a new notion of nondeterministic finite automata (NFAs), which we call oracle-NFAs. These automata are augmented with epsilon-transitions that are guarded by oracle queries that provide the truth values of lookaround assertions at every position in the text. We provide an implementation of our algorithm that incorporates three performance optimizations for reducing the work performed and memory used. We present an experimental comparison against PCRE and Java’s regex library, which are state-of-the-art regex engines that support lookaround assertions. Our experimental results show that, in contrast to PCRE and Java, our implementation does not suffer from super-linear running time and is several times faster.
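Lookahead `(?=...)` and lookbehind `(?<=...)` assertions can be tried directly in any backtracking engine of the kind the paper benchmarks against. A minimal sketch using Python's `re` module (the text and patterns are our own illustration, not taken from the paper):

```python
import re

# Lookaround assertions refine where a match may occur based on the
# surrounding context, without consuming input themselves.
text = "price: 100USD, cost: 250EUR"

# Lookahead: digits only if immediately followed by "USD"
usd = re.findall(r"\d+(?=USD)", text)      # ['100']

# Lookbehind: digits only if immediately preceded by "cost: "
eur = re.findall(r"(?<=cost: )\d+", text)  # ['250']

# Backtracking engines like this one can exhibit super-linear running
# time on adversarial pattern/input pairs; the paper's oracle-NFA
# algorithm guarantees O(m * n) instead.
```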
Citations: 0
EasyBC: A Cryptography-Specific Language for Security Analysis of Block Ciphers against Differential Cryptanalysis
IF 1.8 Q1 Engineering Pub Date: 2024-01-05 DOI: 10.1145/3632871
P. Sun, Fu Song, Yuqi Chen, Taolue Chen
Differential cryptanalysis is a powerful algorithmic-level attack, playing a central role in evaluating the security of symmetric cryptographic primitives. In general, the resistance against differential cryptanalysis can be characterized by the maximum expected differential characteristic probability. In this paper, we present generic and extensible approaches based on mixed integer linear programming (MILP) to bound such probability. We design a high-level cryptography-specific language EasyBC tailored for block ciphers and provide various rigorous procedures, as differential denotational semantics, to automate the generation of MILP from block ciphers written in EasyBC. We implement an open-sourced tool that provides support for fully automated resistance evaluation of block ciphers against differential cryptanalysis. The tool is extensively evaluated on 23 real-life cryptographic primitives including all the 10 finalists of the NIST lightweight cryptography standardization process. The experiments confirm the expressivity of EasyBC and show that the tool can effectively prove the resistance against differential cryptanalysis for all block ciphers under consideration. EasyBC makes resistance evaluation against differential cryptanalysis easily accessible to cryptographers.
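The MILP pipeline itself is beyond a short sketch, but its basic building block, the differential probability of a single S-box read off the difference distribution table (DDT), is easy to compute. A sketch using the PRESENT S-box (our choice of example cipher, not necessarily one of the paper's 23 benchmarks):

```python
# Build the difference distribution table (DDT) of a 4-bit S-box and
# take the best nontrivial differential probability. MILP-based tools
# bound the product of such probabilities along full cipher trails.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

table = ddt(SBOX)
# Maximum over nonzero input differences dx (row 0 is the trivial case)
best = max(max(row) for row in table[1:]) / len(SBOX)
print(best)  # 0.25: PRESENT's S-box has differential uniformity 4/16
```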
Citations: 0
Optimal Program Synthesis via Abstract Interpretation
IF 1.8 Q1 Engineering Pub Date: 2024-01-05 DOI: 10.1145/3632858
Stephen Mell, S. Zdancewic, O. Bastani
We consider the problem of synthesizing programs with numerical constants that optimize a quantitative objective, such as accuracy, over a set of input-output examples. We propose a general framework for optimal synthesis of such programs in a given domain specific language (DSL), with provable optimality guarantees. Our framework enumerates programs in a general search graph, where nodes represent subsets of concrete programs. To improve scalability, it uses A* search in conjunction with a search heuristic based on abstract interpretation; intuitively, this heuristic establishes upper bounds on the value of subtrees in the search graph, enabling the synthesizer to identify and prune subtrees that are provably suboptimal. In addition, we propose a natural strategy for constructing abstract transformers for monotonic semantics, which is a common property for components in DSLs for data classification. Finally, we implement our approach in the context of two such existing DSLs, demonstrating that our algorithm is more scalable than existing optimal synthesizers.
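The pruning idea can be illustrated on a one-constant DSL: programs are thresholds `x > c`, a search node is an interval of candidate constants, and the heuristic is an optimistic accuracy bound for the interval. A toy best-first sketch (the example set and all names are ours, not the paper's framework):

```python
import heapq

# Labelled input-output examples for the program "classify(x) = (x > c)"
examples = [(1, False), (3, False), (5, True), (7, True), (4, True)]

def upper_bound(lo, hi):
    # Abstract-interpretation-style heuristic: an example counts as
    # satisfiable if SOME c in [lo, hi] classifies it correctly, so
    # this over-approximates the accuracy of every program in the node.
    ok = 0
    for x, label in examples:
        ok += (label and x > lo) or (not label and x <= hi)
    return ok

def accuracy(c):
    return sum((x > c) == label for x, label in examples)

def synthesize(lo=0, hi=10):
    best_c, best_acc = None, -1
    heap = [(-upper_bound(lo, hi), lo, hi)]  # max-heap via negation
    while heap:
        neg_ub, lo, hi = heapq.heappop(heap)
        if -neg_ub <= best_acc:
            break  # remaining nodes are provably suboptimal: prune all
        if lo == hi:
            acc = accuracy(lo)
            if acc > best_acc:
                best_c, best_acc = lo, acc
        else:
            mid = (lo + hi) // 2
            for a, b in ((lo, mid), (mid + 1, hi)):
                heapq.heappush(heap, (-upper_bound(a, b), a, b))
    return best_c, best_acc

c, acc = synthesize()  # c = 3 achieves 5/5 on these examples
```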
Citations: 0
Mostly Automated Verification of Liveness Properties for Distributed Protocols with Ranking Functions
IF 1.8 Q1 Engineering Pub Date: 2024-01-05 DOI: 10.1145/3632877
Jianan Yao, Runzhou Tao, Ronghui Gu, Jason Nieh
Distributed protocols have long been formulated in terms of their safety and liveness properties. Much recent work has focused on automatically verifying the safety properties of distributed protocols, but doing so for liveness properties has remained a challenging, unsolved problem. We present LVR, the first framework that can mostly automatically verify liveness properties for distributed protocols. Our key insight is that most liveness properties for distributed protocols can be reduced to a set of safety properties with the help of ranking functions. Such ranking functions for practical distributed protocols have certain properties that make them straightforward to synthesize, contrary to conventional wisdom. We prove that verifying a liveness property can then be reduced to a simpler problem of verifying a set of safety properties, namely that the ranking function is strictly decreasing and nonnegative for any protocol state transition, and there is no deadlock. LVR automatically synthesizes ranking functions by formulating a parameterized function of integer protocol variables, statically analyzing the lower and upper bounds of the variables as well as how much they can change on each state transition, then feeding the constraints to an SMT solver to determine the coefficients of the ranking function. It then uses an off-the-shelf verification tool to find inductive invariants to verify safety properties for both ranking functions and deadlock freedom. We show that LVR can mostly automatically verify the liveness properties of several distributed protocols, including various versions of Paxos, with limited user guidance.
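The reduction can be replayed on a toy protocol by exhaustive checking: pick a candidate ranking function and verify, over every state and transition, that it is nonnegative and strictly decreasing, and that no non-final state is stuck. A sketch for a 3-node acknowledgement protocol (our example, not one of the paper's benchmarks):

```python
from itertools import combinations

# State = set of nodes that have not yet acknowledged; the final
# state is the empty set. Liveness ("eventually everyone acks")
# reduces to three safety checks on the ranking function f.
NODES = {0, 1, 2}

def transitions(pending):
    # Any pending node may acknowledge at any time.
    return [pending - {n} for n in pending]

def f(pending):
    # Candidate ranking function: number of outstanding acks.
    return len(pending)

states = [frozenset(c) for r in range(len(NODES) + 1)
          for c in combinations(sorted(NODES), r)]

# Safety check 1: no non-final state is deadlocked.
deadlock_free = all(transitions(s) for s in states if s)
# Safety checks 2 and 3: f stays nonnegative and strictly decreases
# on every transition, so no infinite execution avoids the final state.
ranking_ok = all(f(t) >= 0 and f(t) < f(s)
                 for s in states for t in transitions(s))
```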
Citations: 0
Generating Well-Typed Terms That Are Not “Useless”
IF 1.8 Q1 Engineering Pub Date: 2024-01-05 DOI: 10.1145/3632919
Justin Frank, Benjamin Quiring, Leonidas Lampropoulos
Random generation of well-typed terms lies at the core of effective random testing of compilers for functional languages. Existing techniques have had success following a top-down type-oriented approach to generation that makes choices locally, which suffers from an inherent limitation: the type of an expression is often generated independently from the expression itself. Such generation frequently yields functions with argument types that cannot be used to produce a result in a meaningful way, leaving those arguments unused. Such "use-less" functions can hinder both performance, as the argument generation code is dead but still needs to be compiled, and effectiveness, as a lot of interesting optimizations are tested less frequently. In this paper, we introduce a novel algorithm that is significantly more effective at generating functions that use their arguments. We formalize both the "local" and the "nonlocal" algorithms as step-relations in an extension of the simply-typed lambda calculus with type and arguments holes, showing how delaying the generation of types for subexpressions by allowing nonlocal generation steps leads to "useful" functions.
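The "local" generation style the paper critiques can be reproduced in a few lines: when filling the body of a lambda, the generator is under no obligation to mention the bound variable, so a sizable fraction of generated functions ignore their argument. A toy sketch over a single base type (a hypothetical mini-DSL of ours, not the paper's generator):

```python
import random

def gen_int_term(env, depth):
    # Local top-down generation of an Int-typed term: each choice is
    # made independently, with no global requirement to use variables.
    choices = ["lit"]
    if env:
        choices.append("var")
    if depth > 0:
        choices.append("add")
    c = random.choice(choices)
    if c == "lit":
        return str(random.randint(0, 9))
    if c == "var":
        return random.choice(env)
    left = gen_int_term(env, depth - 1)
    right = gen_int_term(env, depth - 1)
    return f"({left} + {right})"

def gen_function(depth=3):
    # \x:Int. body -- the body may well never mention x.
    return "\\x:Int. " + gen_int_term(["x"], depth)

random.seed(0)
sample = [gen_function() for _ in range(1000)]
# Count the "useless" functions whose body never uses the argument.
useless = sum("x" not in t.split(". ", 1)[1] for t in sample)
```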
Citations: 0
Type-Based Gradual Typing Performance Optimization
IF 1.8 Q1 Engineering Pub Date: 2024-01-05 DOI: 10.1145/3632931
J. P. Campora, Mohammad Wahiduzzaman Khan, Sheng Chen
Gradual typing has emerged as a popular design point in programming languages, attracting significant interest from both academia and industry. Programmers in gradually typed languages are free to utilize static and dynamic typing as needed. To make such languages sound, runtime checks mediate the boundary of typed and untyped code. Unfortunately, such checks can incur significant runtime overhead on programs that heavily mix static and dynamic typing. To combat this overhead without necessitating changes to the underlying implementations of languages, we present discriminative typing. Discriminative typing works by optimistically inferring types for functions and implementing an optimized version of the function based on this type. To preserve safety, it also implements an un-optimized version of the function based purely on the provided annotations. With two versions of each function in hand, discriminative typing translates programs so that the optimized functions are called as frequently as possible while also preserving program behaviors. We have implemented discriminative typing in Reticulated Python and have evaluated its performance compared to guarded Reticulated Python. Our results show that discriminative typing improves the performance across 95% of tested programs, when compared to Reticulated, and achieves more than 4× speedup in more than 56% of these programs. We also compare its performance against a previous optimization approach and find that discriminative typing improved performance across 93% of tested programs, with 30% of these programs receiving speedups between 4 to 25 times. Finally, our evaluation shows that discriminative typing remarkably reduces the overhead of gradual typing on many mixed type configurations of programs. In addition, we have implemented discriminative typing in Grift and evaluated its performance. Our evaluation demonstrates that DT significantly improves the performance of Grift.
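The two-versions translation can be mimicked by hand in plain Python: an optimized body that assumes the optimistically inferred type, a guarded body that enforces the annotation, and call sites that pick the fast path whenever the argument provably fits. A sketch (function names are ours; the actual systems are Reticulated Python and Grift):

```python
def square_fast(x):
    # Optimized version: x is optimistically assumed to be int,
    # so the body runs with no runtime checks at all.
    return x * x

def square_guarded(x):
    # Guarded version: mediates the typed/untyped boundary by
    # checking the annotation before running the body.
    if not isinstance(x, int):
        raise TypeError(f"square: expected int, got {type(x).__name__}")
    return x * x

def caller(values):
    # The translation routes each call to the fast version whenever
    # the argument is known to match the inferred type, preserving
    # the program's behavior either way.
    total = 0
    for v in values:
        if type(v) is int:
            total += square_fast(v)
        else:
            total += square_guarded(v)
    return total
```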
Citations: 0
Algebraic Effects Meet Hoare Logic in Cubical Agda
IF 1.8 Q1 Engineering Pub Date: 2024-01-05 DOI: 10.1145/3632898
Donnacha Oisín Kidney, Zhixuan Yang, Nicolas Wu
This paper presents a novel formalisation of algebraic effects with equations in Cubical Agda. Unlike previous work in the literature that employed setoids to deal with equations, the library presented here uses quotient types to faithfully encode the type of terms quotiented by laws. Apart from tools for equational reasoning, the library also provides an effect-generic Hoare logic for algebraic effects, which enables reasoning about effectful programs in terms of their pre- and post- conditions. A particularly novel aspect is that equational reasoning and Hoare-style reasoning are related by an elimination principle of Hoare logic.
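The ingredients can be imitated, far less rigorously, outside a proof assistant: represent effectful programs as syntax trees, check an equational law by running a state handler, and check a Hoare triple over a range of initial states. A Python sketch (the paper's development is in Cubical Agda with quotient types; this is only a loose analogy):

```python
# Terms of a state effect as trees:
#   ("ret", v)          -- pure result
#   ("get", cont)       -- read the state, continue with cont(state)
#   ("put", s, rest)    -- overwrite the state, continue with rest

def run(term, state):
    # A state handler: interprets a term, returning (result, final state).
    tag = term[0]
    if tag == "ret":
        return term[1], state
    if tag == "get":
        return run(term[1](state), state)
    return run(term[2], term[1])  # "put"

# Equational law: put s >> put t == put t (the first write is lost).
lhs = ("put", 1, ("put", 2, ("ret", None)))
rhs = ("put", 2, ("ret", None))
law_holds = all(run(lhs, s0) == run(rhs, s0) for s0 in range(5))

# Hoare triple {s0 > 0} incr {result = s0 and s' = s0 + 1},
# checked by running the program from every precondition-satisfying state.
incr = ("get", lambda s: ("put", s + 1, ("ret", s)))

def triple(pre, prog, post, init_states):
    return all(post(*run(prog, s0), s0) for s0 in init_states if pre(s0))

ok = triple(lambda s: s > 0, incr,
            lambda r, s1, s0: r == s0 and s1 == s0 + 1,
            range(-3, 10))
```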
Citations: 0
An Iris Instance for Verifying CompCert C Programs
IF 1.8 Q1 Engineering Pub Date: 2024-01-05 DOI: 10.1145/3632848
William Mansky, Ke Du
Iris is a generic separation logic framework that has been instantiated to reason about a wide range of programming languages and language features. Most Iris instances are defined on simple core calculi, but by connecting Iris to new or existing formal semantics for practical languages, we can also use it to reason about real programs. In this paper we develop an Iris instance based on CompCert, the verified C compiler, allowing us to prove correctness of C programs under the same semantics we use to compile and run them. We take inspiration from the Verified Software Toolchain (VST), a prior separation logic for CompCert C, and reimplement the program logic of VST in Iris. Unlike most Iris instances, this involves both a new model of resources for CompCert memories, and a new definition of weakest preconditions/Hoare triples, as the Iris defaults for both of these cannot be applied to CompCert as is. Ultimately, we obtain a complete program logic for CompCert C within Iris, and we reconstruct enough of VST's top-level automation to prove correctness of simple C programs.
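The core connective such logics are built on, the separating conjunction P * Q, asserts that the heap splits into two disjoint parts, one satisfying P and one satisfying Q. A tiny executable model (heaps as Python dicts; nothing CompCert- or Iris-specific, just the base notion):

```python
# An assertion is a predicate on heaps (finite maps from locations
# to values). points_to models the "l |-> v" singleton-heap assertion.
def points_to(loc, val):
    return lambda h: h == {loc: val}

def sep(p, q):
    # P * Q holds on h iff h splits into disjoint h1, h2 with
    # P(h1) and Q(h2). Here we brute-force all 2^|h| splits.
    def holds(h):
        items = list(h.items())
        for mask in range(2 ** len(items)):
            h1 = dict(it for i, it in enumerate(items) if mask >> i & 1)
            h2 = dict(it for i, it in enumerate(items) if not mask >> i & 1)
            if p(h1) and q(h2):
                return True
        return False
    return holds

# Holds on the two-cell heap; fails if a cell is missing -- and
# "x |-> 1 * x |-> 2" is unsatisfiable, since the parts must be disjoint.
P = sep(points_to("x", 1), points_to("y", 2))
```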
Citations: 0
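As a concrete illustration of the kind of specification such a program logic supports, here is a textbook separation-logic Hoare triple for a two-pointer swap in C. The triple and the `swap` function are standard illustrative examples, not taken from the paper.

```latex
% A textbook separation-logic triple for a C swap function
% (illustrative; not from the paper). p \mapsto a asserts that
% pointer p owns one memory cell holding the value a, and * is
% the separating conjunction, so p and q denote distinct cells.
\[
  \{\, p \mapsto a \;*\; q \mapsto b \,\}\;
  \texttt{swap(p, q)}\;
  \{\, p \mapsto b \;*\; q \mapsto a \,\}
\]
```

The separating conjunction is what makes such triples local: the precondition already guarantees that `p` and `q` do not alias, with no extra side conditions.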
DisLog: A Separation Logic for Disentanglement
IF 1.8 Q1 Engineering Pub Date : 2024-01-05 DOI: 10.1145/3632853
Alexandre Moine, Sam Westrick, Stephanie Balzer
Disentanglement is a run-time property of parallel programs that facilitates task-local reasoning about the memory footprint of parallel tasks. In particular, it ensures that a task does not access any memory locations allocated by another concurrently executing task. Disentanglement can be exploited, for example, to implement a high-performance parallel memory manager, such as in the MPL (MaPLe) compiler for Parallel ML. Prior research on disentanglement has focused on the design of optimizations, either trusting the programmer to provide a disentangled program or relying on runtime instrumentation for detecting and managing entanglement. This paper provides the first static approach to verify that a program is disentangled: it contributes DisLog, a concurrent separation logic for disentanglement. DisLog enriches concurrent separation logic with the notions necessary for reasoning about the fork-join structure of parallel programs, allowing the verification that memory accesses are effectively disentangled. A large class of programs, including race-free programs, exhibit memory access patterns that are disentangled "by construction". To reason about these patterns, the paper distills from DisLog an almost standard concurrent separation logic, called DisLog+. In this high-level logic, no specific reasoning about memory accesses is needed: functional correctness proofs entail disentanglement. The paper illustrates the use of DisLog and DisLog+ on a range of case studies, including two different implementations of parallel deduplication via concurrent hashing. All our results are mechanized in the Coq proof assistant using Iris.
Citations: 0
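The disentanglement property described in the abstract can be made concrete with a small sketch. The following Python stand-in (Python is used only for illustration; the paper's setting is Parallel ML and the MPL compiler, with proofs in Coq) shows a fork-join computation that is disentangled "by construction": each task allocates and reads only its own data, and sibling results meet only at the join point.

```python
# Illustrative sketch (not the paper's DisLog logic): a fork-join
# computation whose tasks are disentangled by construction -- each
# task touches only memory it allocated itself, so a task-local
# memory manager could reclaim that memory independently.
from concurrent.futures import ThreadPoolExecutor

def task(chunk):
    # Every allocation below is local to this task: the list is
    # created here and is never read by a concurrently running
    # sibling task, only by the parent after the join.
    local = [x * x for x in chunk]   # task-local allocation
    return sum(local)

def parallel_sum_of_squares(data, workers=4):
    chunks = [list(data)[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Sibling tasks run concurrently but never observe each
        # other's allocations; partial results are combined only
        # at the join point below.
        partials = pool.map(task, chunks)
    return sum(partials)

print(parallel_sum_of_squares(range(10)))  # 285
```

An entangled variant would, for example, have one task read a list freshly allocated by a still-running sibling; that is exactly the access pattern DisLog rules out.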
Indexed Types for a Statically Safe WebAssembly
IF 1.8 Q1 Engineering Pub Date : 2024-01-05 DOI: 10.1145/3632922
Adam T. Geller, Justin Frank, William J. Bowman
We present Wasm-prechk, a superset of WebAssembly (Wasm) that uses indexed types to express and check simple constraints over program values. This additional static reasoning enables safely removing dynamic safety checks from Wasm, such as memory bounds checks. We implement Wasm-prechk as an extension of the Wasmtime compiler and runtime, evaluate the run-time and compile-time performance of Wasm-prechk vs WebAssembly configurations with explicit dynamic checks, and find an average run-time performance gain of 1.71x faster in the widely used PolyBenchC benchmark suite, for a small overhead in binary size (7.18% larger) and type-checking time (1.4% slower). We also prove type and memory safety of Wasm-prechk, prove Wasm safely embeds into Wasm-prechk ensuring backwards compatibility, prove Wasm-prechk type-erases to Wasm, and discuss design and implementation trade-offs.
Citations: 0
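The benefit of indexed types described in the abstract can be sketched as follows. This hypothetical Python mini-model (not Wasm-prechk itself; the names `checked_load` and `prechecked_load` are invented for illustration) contrasts the per-access dynamic bounds check that plain Wasm must perform with an access whose in-bounds obligation has already been discharged by a static constraint.

```python
# Minimal sketch of the idea behind indexed-type bounds-check
# elimination (hypothetical model, not Wasm-prechk's type system).
MEM_SIZE = 64 * 1024  # one 64 KiB Wasm-style linear-memory page

def checked_load(mem, i):
    # What plain Wasm must do: a dynamic bounds check on every access.
    if not (0 <= i < len(mem)):
        raise IndexError("out-of-bounds access trapped at run time")
    return mem[i]

def prechecked_load(mem, i, static_bound):
    # What an indexed type enables: the constraint i < static_bound
    # was verified once by the type checker, so the access itself
    # carries no per-access dynamic check. The assert stands in for
    # that one-time, type-checking-time proof obligation.
    assert 0 <= i < static_bound <= len(mem)
    return mem[i]  # unchecked access

mem = bytearray(MEM_SIZE)
mem[100] = 42
print(checked_load(mem, 100), prechecked_load(mem, 100, 101))  # 42 42
```

In the real system the obligation is discharged statically during type checking, so the compiled access is a bare load; this is what yields the reported run-time speedup for a small cost in type-checking time.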