
ACM Transactions on Programming Languages and Systems: Latest Publications

Passport: Improving Automated Formal Verification Using Identifiers
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-06-26 | DOI: https://dl.acm.org/doi/10.1145/3593374
Alex Sanchez-Stern, Emily First, Timothy Zhou, Zhanna Kaufman, Yuriy Brun, Talia Ringer

Formally verifying system properties is one of the most effective ways of improving system quality, but its high manual effort requirements often render it prohibitively expensive. Tools that automate formal verification by learning from proof corpora to synthesize proofs have just begun to show their promise. These tools are effective because of the richness of the data the proof corpora contain. This richness comes from the stylistic conventions followed by communities of proof developers, together with the powerful logical systems beneath proof assistants. However, this richness remains underexploited, with most work thus far focusing on architecture rather than on how to make the most of the proof data. This article systematically explores how to most effectively exploit one aspect of that proof data: identifiers.

We develop the Passport approach, a method for enriching the predictive Coq model used by an existing proof-synthesis tool with three new encoding mechanisms for identifiers: category vocabulary indexing, subword sequence modeling, and path elaboration. We evaluate our approach’s enrichment effect on three existing base tools: ASTactic, Tac, and Tok. In head-to-head comparisons, Passport automatically proves 29% more theorems than the best-performing of these base tools. Combining the three tools enhanced by the Passport approach automatically proves 38% more theorems than combining the three base tools. Finally, together, these base tools and their enhanced versions prove 45% more theorems than the combined base tools. Overall, our findings suggest that modeling identifiers can play a significant role in improving proof synthesis, leading to higher-quality software.
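
As a rough illustration of the subword idea, here is a minimal Python sketch that splits identifiers into subword units; it is not Passport's actual encoder, which also covers category vocabulary indexing and path elaboration.

```python
import re

def subword_tokens(identifier: str) -> list[str]:
    """Split an identifier such as 'rev_app_distr' or 'NatMap' into
    lowercase subword units, a rough analogue of subword sequence
    modeling over proof-corpus identifiers."""
    parts = []
    for chunk in identifier.split("_"):
        # Break CamelCase chunks at lowercase/uppercase boundaries.
        parts.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", chunk))
    return [p.lower() for p in parts if p]

print(subword_tokens("rev_app_distr"))  # ['rev', 'app', 'distr']
print(subword_tokens("NatMap"))         # ['nat', 'map']
```

Sharing subword units across names lets a model relate identifiers it has never seen in full to ones it has.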

Citations: 0
Side-channel Elimination via Partial Control-flow Linearization
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-06-26 | DOI: https://dl.acm.org/doi/10.1145/3594736
Luigi Soares, Michael Canesche, Fernando Magno Quintão Pereira

Partial control-flow linearization is a code transformation conceived to maximize work performed in vectorized programs. In this article, we find a new service for it. We show that partial control-flow linearization protects programs against timing attacks. This transformation is sound: Given an instance of its public inputs, the partially linearized program always runs the same sequence of instructions, regardless of secret inputs. Incidentally, if the original program is publicly safe, then accesses to the data cache will be data oblivious in the transformed code. The transformation is optimal: Every branch that depends on some secret data is linearized; no branch that depends on only public data is linearized. Therefore, the transformation preserves loops that depend exclusively on public information. If every branch that leaves a loop depends on secret data, then the transformed program will not terminate. Our transformation extends previous work in non-trivial ways. It handles C constructs such as “goto,” “break,” “switch,” and “continue,” which are absent in the FaCT domain-specific language (2018). Like Constantine (2021), our transformation ensures operation invariance but without requiring profiling information. Additionally, in contrast to SC-Eliminator (2018) and Lif (2021), it handles programs containing loops whose trip count is not known at compilation time.
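
To give a feel for what linearizing a secret-dependent branch means, compare the two functions in the hand-written Python sketch below; this is not the article's LLVM-level transformation, and Python itself offers no constant-time guarantees. The branchy version executes different instructions depending on the secrets, while the linearized version always runs the same straight-line sequence.

```python
def leaky_max(secret_a: int, secret_b: int) -> int:
    # Secret-dependent branch: which instructions execute depends on the
    # secrets, so execution time can reveal their relative order.
    if secret_a > secret_b:
        return secret_a
    return secret_b

def linearized_max(secret_a: int, secret_b: int) -> int:
    # Branch-free version: the same instruction sequence runs for every
    # input; the secrets only feed a data-dependent select.
    mask = -(secret_a > secret_b)            # -1 if the test holds, else 0
    return (secret_a & mask) | (secret_b & ~mask)
```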

Citations: 0
Optimization-Aware Compiler-Level Event Profiling
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-06-26 | DOI: https://dl.acm.org/doi/10.1145/3591473
Matteo Basso, Aleksandar Prokopec, Andrea Rosà, Walter Binder

Tracking specific events in a program’s execution, such as object allocation or lock acquisition, is at the heart of dynamic analysis. Despite the apparent simplicity of this task, quantifying these events is challenging due to the presence of compiler optimizations. Profiling perturbs the optimizations that the compiler would normally do—a profiled program usually behaves differently than the original one.

In this article, we propose a novel technique for quantifying compiler-internal events in the optimized code, reducing the profiling perturbation on compiler optimizations. Our technique achieves this by instrumenting the program from within the compiler, and by delaying the instrumentation until the point in the compilation pipeline after which no subsequent optimizations can remove the events. We propose two different implementation strategies of our technique based on path-profiling, and a modification to the standard path-profiling algorithm that facilitates the use of the proposed strategies in a modern just-in-time (JIT) compiler. We use our technique to analyze the behaviour of the optimizations in Graal, a state-of-the-art compiler for the Java Virtual Machine, identifying the reasons behind a performance improvement of a specific optimization, and the causes behind an unexpected slowdown of another. Finally, our evaluation results show that the two proposed implementations result in a significantly lower execution-time overhead w.r.t. a naive implementation.
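
The path-profiling ingredient is the classic Ball-Larus numbering; a minimal sketch on an acyclic CFG (the article's JIT-oriented modification is not reproduced here) assigns edge increments so that each entry-to-exit path sums to a unique id.

```python
def ball_larus(cfg, entry):
    """Assign Ball-Larus edge increments to an acyclic CFG so that summing
    the increments along any entry-to-exit path yields a unique path id
    in the range [0, NumPaths(entry))."""
    num_paths, increments = {}, {}

    def visit(v):
        if v in num_paths:
            return num_paths[v]
        succs = cfg.get(v, [])
        if not succs:                    # exit block: exactly one path
            num_paths[v] = 1
            return 1
        total = 0
        for w in succs:
            increments[(v, w)] = total   # number of paths numbered so far
            total += visit(w)
        num_paths[v] = total
        return total

    visit(entry)
    return num_paths, increments

# A diamond CFG: its two entry-to-exit paths get ids 0 and 1.
cfg = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"], "exit": []}
print(ball_larus(cfg, "entry"))
```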

Citations: 0
Contextual Linear Types for Differential Privacy
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-05-17 | DOI: https://dl.acm.org/doi/10.1145/3589207
Matías Toro, David Darais, Chike Abuah, Joseph P. Near, Damián Árquez, Federico Olmedo, Éric Tanter

Language support for differentially private programming is both crucial and delicate. While elaborate program logics can be very expressive, type-system-based approaches using linear types tend to be more lightweight and amenable to automatic checking and inference, and in particular in the presence of higher-order programming. Since the seminal design of Fuzz, which is restricted to ϵ-differential privacy in its original design, significant progress has been made to support more advanced variants of differential privacy, like (ϵ, δ)-differential privacy. However, supporting these advanced privacy variants while also supporting higher-order programming in full has proven to be challenging. We present Jazz, a language and type system that uses linear types and latent contextual effects to support both advanced variants of differential privacy and higher-order programming. Latent contextual effects allow delaying the payment of effects for connectives such as products, sums, and functions, yielding advantages in terms of precision of the analysis and annotation burden upon elimination, as well as modularity. We formalize the core of Jazz, prove it sound for privacy via a logical relation for metric preservation, and illustrate its expressive power through a number of case studies drawn from the recent differential privacy literature.
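
The bookkeeping that such a type system checks statically is the sensitivity/ϵ accounting that programmers otherwise maintain by hand; a minimal runtime sketch of that accounting (the standard Laplace mechanism, not Jazz or its syntax) follows.

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise; the difference of two iid
    exponentials with rate 1/scale is distributed as Laplace(0, scale)."""
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# A counting query has sensitivity 1: one person changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
```

A Fuzz-style linear type system aims to reject, at compile time, programs whose declared ϵ does not cover the sensitivity of the data they actually touch.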

Citations: 0
A First-order Logic with Frames
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-05-15 | DOI: https://dl.acm.org/doi/10.1145/3583057
Adithya Murali, Lucas Peña, Christof Löding, P. Madhusudan

We propose a novel logic, Frame Logic (FL), that extends first-order logic and recursive definitions with a construct Sp(·) that captures the implicit supports of formulas—the precise subset of the universe upon which their meaning depends. Using such supports, we formulate proof rules that facilitate frame reasoning elegantly when the underlying model undergoes change. We show that the logic is expressive by capturing several data-structures and also exhibit a translation from a precise fragment of separation logic to frame logic. Finally, we design a program logic based on frame logic for reasoning with programs that dynamically update heaps that facilitates local specifications and frame reasoning. This program logic consists of both localized proof rules as well as rules that derive the weakest tightest preconditions in frame logic.
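
For context, frame reasoning in separation logic is captured by the classical frame rule below; FL formulates analogous rules using explicit supports Sp(·) in place of the separating conjunction (the exact FL rules are given in the article).

```latex
\frac{\{P\}\; C\; \{Q\}}
     {\{P \ast R\}\; C\; \{Q \ast R\}}
\qquad \text{provided } \mathrm{mod}(C) \cap \mathrm{fv}(R) = \emptyset
```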

Citations: 0
A Derivative-based Parser Generator for Visibly Pushdown Grammars
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-05-15 | DOI: https://dl.acm.org/doi/10.1145/3591472
Xiaodong Jia, Ashish Kumar, Gang Tan

In this article, we present a derivative-based, functional recognizer and parser generator for visibly pushdown grammars. The generated parser accepts ambiguous grammars and produces a parse forest containing all valid parse trees for an input string in linear time. Each parse tree in the forest can then be extracted also in linear time. Besides the parser generator, to allow more flexible forms of the visibly pushdown grammars, we also present a translator that converts a tagged CFG to a visibly pushdown grammar in a sound way, and the parse trees of the tagged CFG are further produced by running the semantic actions embedded in the parse trees of the translated visibly pushdown grammar. The performance of the parser is compared with popular parsing tools, including ANTLR, GNU Bison, and other popular hand-crafted parsers. The correctness and the time complexity of the core parsing algorithm are formally verified in the proof assistant Coq.
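
The "derivative-based" qualifier refers to Brzozowski-style derivatives: D_c(r) accepts exactly the strings w such that r accepts c·w. A minimal Python recognizer for regular expressions shows the style of definition the article lifts to visibly pushdown grammars; this sketch covers only the regular case and builds no parse forests.

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Empty: pass          # matches no string

@dataclass(frozen=True)
class Eps: pass            # matches only the empty string

@dataclass(frozen=True)
class Chr:
    c: str

@dataclass(frozen=True)
class Alt:
    left: Any
    right: Any

@dataclass(frozen=True)
class Seq:
    left: Any
    right: Any

@dataclass(frozen=True)
class Star:
    body: Any

def nullable(r) -> bool:
    """Does r accept the empty string?"""
    if isinstance(r, (Eps, Star)):
        return True
    if isinstance(r, Alt):
        return nullable(r.left) or nullable(r.right)
    if isinstance(r, Seq):
        return nullable(r.left) and nullable(r.right)
    return False           # Empty, Chr

def deriv(r, c):
    """The derivative of r with respect to character c."""
    if isinstance(r, (Empty, Eps)):
        return Empty()
    if isinstance(r, Chr):
        return Eps() if r.c == c else Empty()
    if isinstance(r, Alt):
        return Alt(deriv(r.left, c), deriv(r.right, c))
    if isinstance(r, Seq):
        d = Seq(deriv(r.left, c), r.right)
        return Alt(d, deriv(r.right, c)) if nullable(r.left) else d
    return Seq(deriv(r.body, c), r)   # Star

def matches(r, s: str) -> bool:
    for ch in s:
        r = deriv(r, ch)
    return nullable(r)

ab_star = Star(Seq(Chr("a"), Chr("b")))
print(matches(ab_star, "abab"), matches(ab_star, "aba"))  # True False
```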

Citations: 0
Side-channel Elimination via Partial Control-flow Linearization
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-05-03 | DOI: 10.1145/3594736
Luigi Soares, Michael Canesche, Fernando Magno Quintão Pereira
Partial control-flow linearization is a code transformation conceived to maximize work performed in vectorized programs. In this article, we find a new service for it. We show that partial control-flow linearization protects programs against timing attacks. This transformation is sound: Given an instance of its public inputs, the partially linearized program always runs the same sequence of instructions, regardless of secret inputs. Incidentally, if the original program is publicly safe, then accesses to the data cache will be data oblivious in the transformed code. The transformation is optimal: Every branch that depends on some secret data is linearized; no branch that depends on only public data is linearized. Therefore, the transformation preserves loops that depend exclusively on public information. If every branch that leaves a loop depends on secret data, then the transformed program will not terminate. Our transformation extends previous work in non-trivial ways. It handles C constructs such as “goto,” “break,” “switch,” and “continue,” which are absent in the FaCT domain-specific language (2018). Like Constantine (2021), our transformation ensures operation invariance but without requiring profiling information. Additionally, in contrast to SC-Eliminator (2018) and Lif (2021), it handles programs containing loops whose trip count is not known at compilation time.
Citations: 1
Multiple Input Parsing and Lexical Analysis
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-05-03 | DOI: 10.1145/3594734
E. Scott, A. Johnstone, R. Walsh
This article introduces two new approaches in the areas of lexical analysis and context-free parsing. We present an extension, MGLL, of generalised parsing which allows multiple input strings to be parsed together efficiently, and we present an enhanced approach to lexical analysis which exploits this multiple parsing capability. The work provides new power to formal language specification and disambiguation, and brings new techniques into the historically well-studied areas of lexical and syntax analysis. It encompasses character-level parsing at one extreme and the classical LEX/YACC style division at the other, allowing the advantages of both approaches.
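
One way to picture why parsing several inputs together can pay off (a sketch of the shared-prefix intuition only, not MGLL's GLL-style machinery): merge the inputs into a prefix trie and drive a single recognizer over it, so common prefixes are processed once while each input still receives its own verdict.

```python
def build_trie(inputs):
    """Merge the input strings into a prefix trie; '$' marks where a string ends."""
    root = {}
    for s in inputs:
        node = root
        for ch in s:
            node = node.setdefault(ch, {})
        node["$"] = s
    return root

def strings_below(node):
    """All input strings whose remaining suffix lies under this trie node."""
    for edge, child in node.items():
        if edge == "$":
            yield child
        else:
            yield from strings_below(child)

def run_dfa_on_trie(delta, start, accepting, trie):
    """Run one DFA over the whole trie: each shared prefix is scanned once,
    yet every input string still gets its own accept/reject verdict."""
    verdicts = {}
    stack = [(trie, start)]
    while stack:
        node, state = stack.pop()
        for edge, child in node.items():
            if edge == "$":                        # an input string ends here
                verdicts[child] = state in accepting
            elif (state, edge) in delta:
                stack.append((child, delta[(state, edge)]))
            else:                                  # dead transition: reject all below
                for s in strings_below(child):
                    verdicts[s] = False
    return verdicts

# Recognizer for the language a*b, run over four inputs at once.
delta = {(0, "a"): 0, (0, "b"): 1}
print(run_dfa_on_trie(delta, 0, {1}, build_trie(["aab", "ab", "aba", "b"])))
# {'b': True, 'ab': True, 'aba': False, 'aab': True}
```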
Citations: 1
Optimization-Aware Compiler-Level Event Profiling
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-04-10 | DOI: 10.1145/3591473
Matteo Basso, Aleksandar Prokopec, Andrea Rosà, Walter Binder
Tracking specific events in a program’s execution, such as object allocation or lock acquisition, is at the heart of dynamic analysis. Despite the apparent simplicity of this task, quantifying these events is challenging due to the presence of compiler optimizations. Profiling perturbs the optimizations that the compiler would normally do—a profiled program usually behaves differently than the original one. In this article, we propose a novel technique for quantifying compiler-internal events in the optimized code, reducing the profiling perturbation on compiler optimizations. Our technique achieves this by instrumenting the program from within the compiler, and by delaying the instrumentation until the point in the compilation pipeline after which no subsequent optimizations can remove the events. We propose two different implementation strategies of our technique based on path-profiling, and a modification to the standard path-profiling algorithm that facilitates the use of the proposed strategies in a modern just-in-time (JIT) compiler. We use our technique to analyze the behaviour of the optimizations in Graal, a state-of-the-art compiler for the Java Virtual Machine, identifying the reasons behind a performance improvement of a specific optimization, and the causes behind an unexpected slowdown of another. Finally, our evaluation results show that the two proposed implementations result in a significantly lower execution-time overhead w.r.t. a naive implementation.
Citations: 0
A Derivative-based Parser Generator for Visibly Pushdown Grammars
IF 1.3 | CAS Tier 2, Computer Science | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-04-08 | DOI: 10.1145/3591472
Xiaodong Jia, Ashish Kumar, Gang Tan
In this article, we present a derivative-based, functional recognizer and parser generator for visibly pushdown grammars. The generated parser accepts ambiguous grammars and produces a parse forest containing all valid parse trees for an input string in linear time. Each parse tree in the forest can then be extracted also in linear time. Besides the parser generator, to allow more flexible forms of the visibly pushdown grammars, we also present a translator that converts a tagged CFG to a visibly pushdown grammar in a sound way, and the parse trees of the tagged CFG are further produced by running the semantic actions embedded in the parse trees of the translated visibly pushdown grammar. The performance of the parser is compared with popular parsing tools, including ANTLR, GNU Bison, and other popular hand-crafted parsers. The correctness and the time complexity of the core parsing algorithm are formally verified in the proof assistant Coq.
Citations: 1