
Proceedings of the ACM on Programming Languages: Latest Publications

Formally Verifying Optimizations with Block Simulations
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622799
Léo Gourdin, Benjamin Bonneau, Sylvain Boulmé, David Monniaux, Alexandre Bérard
CompCert (ACM Software System Award 2021) is the first industrial-strength compiler with a mechanically checked proof of correctness. Yet, CompCert remains a moderately optimizing C compiler. Indeed, some optimizations of “gcc -O1” such as Lazy Code Motion (LCM) or Strength Reduction (SR) were still missing: developing these efficient optimizations together with their formal proofs remained a challenge. Cyril Six et al. have developed efficient formally verified translation validators for certifying the results of superblock schedulers and peephole optimizations. We revisit and generalize their approach into a framework (integrated into CompCert) able to validate many more optimizations: an enhanced superblock scheduler, but also Dead Code Elimination (DCE), Constant Propagation (CP), and more noticeably, LCM and SR. In contrast to other approaches to translation validation, we co-design our untrusted optimizations and their validators. Our optimizations provide hints, in the form of invariants or CFG morphisms, that help keep the formally verified validators both simple and efficient. Such designs seem applicable beyond CompCert.
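To make the hint-driven validation idea concrete, here is a small Python sketch of our own (not CompCert's Coq-verified validator): an untrusted constant-propagation pass rewrites a straight-line program and also emits hints, invariants recording which constants it relied on at each instruction; a deliberately simple checker then accepts the rewrite only if every hint is re-derivable and every changed instruction is justified by the hints. All function and variable names are hypothetical.

    def check_constprop(original, optimized, hints):
        """original/optimized: lists of (dst, op, args) with op in {'const', 'add'}.
        hints[i]: dict of variable -> constant claimed to hold *before* instruction i."""
        if len(original) != len(optimized):
            return False
        facts = {}                                   # constants the checker has verified
        for i, (orig, opt) in enumerate(zip(original, optimized)):
            hint = hints.get(i, {})
            # 1. Every hinted invariant must already be a verified fact.
            if any(facts.get(v) != c for v, c in hint.items()):
                return False
            # 2. The rewritten instruction must be justified by the hints.
            if not justified(orig, opt, hint):
                return False
            # 3. Extend the verified facts using the original instruction.
            dst, op, args = orig
            if op == 'const':
                facts[dst] = args[0]
            elif op == 'add' and all(a in facts for a in args):
                facts[dst] = facts[args[0]] + facts[args[1]]
            else:
                facts.pop(dst, None)
        return True

    def justified(orig, opt, hint):
        """Does the optimized instruction compute the same value as the original,
        assuming the hinted constants hold?"""
        if orig == opt:
            return True
        (dst, op, args), (dst2, op2, args2) = orig, opt
        if dst != dst2:
            return False
        # Folding an addition of two hinted constants into a constant is allowed.
        if op == 'add' and op2 == 'const' and all(a in hint for a in args):
            return args2[0] == hint[args[0]] + hint[args[1]]
        return False

    # The untrusted pass folds x + y and emits the invariant it relied on.
    original  = [('x', 'const', [1]), ('y', 'const', [2]), ('z', 'add', ['x', 'y'])]
    optimized = [('x', 'const', [1]), ('y', 'const', [2]), ('z', 'const', [3])]
    assert check_constprop(original, optimized, hints={2: {'x': 1, 'y': 2}})

The point of the hint is visible in the checker's shape: it never has to re-discover the optimizer's analysis, it only re-checks it, which is what keeps a verified validator simple.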
Citations: 0
Hardware-Aware Static Optimization of Hyperdimensional Computations
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622797
Pu (Luke) Yi, Sara Achour
Binary spatter code (BSC)-based hyperdimensional computing (HDC) is a highly error-resilient approximate computational paradigm suited for error-prone, emerging hardware platforms. In BSC HDC, the basic datatype is a hypervector, a typically large binary vector, where the size of the hypervector has a significant impact on the fidelity and resource usage of the computation. Typically, the hypervector size is dynamically tuned to deliver the desired accuracy; this process is time-consuming and often produces hypervector sizes that lack accuracy guarantees and produce poor results when reused for very similar workloads. We present Heim, a hardware-aware static analysis and optimization framework for BSC HD computations. Heim analytically derives the minimum hypervector size that minimizes resource usage and meets the target accuracy requirement. Heim guarantees the optimized computation converges to the user-provided accuracy target on expectation, even in the presence of hardware error. Heim deploys a novel static analysis procedure that unifies theoretical results from the neuroscience community to systematically optimize HD computations. We evaluate Heim against dynamic tuning-based optimization on 25 benchmark data structures. Given a 99% accuracy requirement, Heim-optimized computations achieve a 99.2%-100.0% median accuracy, up to 49.5% higher than dynamic tuning-based optimization, while achieving 1.15x-7.14x reductions in hypervector size compared to HD computations that achieve comparable query accuracy and finding parametrizations 30.0x-100167.4x faster than dynamic tuning-based approaches. We also use Heim to systematically evaluate the performance benefits of using analog CAMs and multiple-bit-per-cell ReRAM over conventional hardware, while maintaining iso-accuracy – for both emerging technologies, we find usages where the emerging hardware imparts significant benefits.
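For readers unfamiliar with the BSC datatype, the Python sketch below shows the standard hypervector operations (bind as XOR, bundle as bitwise majority, similarity as normalized Hamming distance) and how fidelity depends on the dimension N. This is general HDC background under our own assumptions; Heim's contribution, deriving the minimum N statically with accuracy guarantees, is not reproduced here, and N is simply picked by hand.

    import random

    def rand_hv(n, seed):
        rng = random.Random(seed)
        return [rng.randint(0, 1) for _ in range(n)]

    def bind(a, b):                 # XOR: associate two hypervectors (e.g. key/value)
        return [x ^ y for x, y in zip(a, b)]

    def bundle(*hvs):               # bitwise majority vote (ties broken toward 0)
        return [1 if sum(bits) * 2 > len(hvs) else 0 for bits in zip(*hvs)]

    def similarity(a, b):           # 1.0 = identical, ~0.5 = unrelated random vectors
        return 1 - sum(x ^ y for x, y in zip(a, b)) / len(a)

    N = 10_000                      # the dimension Heim would derive statically
    key, val, noise = rand_hv(N, 1), rand_hv(N, 2), rand_hv(N, 3)
    memory = bundle(bind(key, val), noise)
    print(similarity(bind(memory, key), val))   # well above 0.5: val is recoverable
    print(similarity(rand_hv(N, 4), val))       # about 0.5: chance level

Shrinking N makes the "well above 0.5" gap narrower and the computation cheaper, which is exactly the accuracy/resource trade-off the abstract describes.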
Citations: 0
Leaf: Modularity for Temporary Sharing in Separation Logic
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622798
Travis Hance, Jon Howell, Oded Padon, Bryan Parno
In concurrent verification, separation logic provides a strong story for handling both resources that are owned exclusively and resources that are shared persistently (i.e., forever). However, the situation is more complicated for temporarily shared state, where state might be shared and then later reclaimed as exclusive. We believe that a framework for temporarily-shared state should meet two key goals not adequately met by existing techniques. One, it should allow and encourage users to verify new sharing strategies. Two, it should provide an abstraction where users manipulate shared state in a way agnostic to the means with which it is shared. We present Leaf, a library in the Iris separation logic which accomplishes both of these goals by introducing a novel operator, which we call guarding, that allows one proposition to represent a shared version of another. We demonstrate that Leaf meets these two goals through a modular case study: we verify a reader-writer lock that supports shared state, and a hash table built on top of it that uses shared state.
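The discipline Leaf reasons about can be pictured operationally with a small Python sketch of our own: a resource is handed out as temporarily shared, and only once every shared view has been returned can it be reclaimed for exclusive use. The run-time counter below merely stands in for what Leaf tracks purely in the logic via its guarding operator; none of this code or its names come from the paper.

    import threading

    class TemporarilyShared:
        """A value that may be shared by many readers and later reclaimed
        for exclusive use once every shared view has been returned."""
        def __init__(self, value):
            self.value = value
            self._shares = 0
            self._cond = threading.Condition()

        def share(self):                     # hand out one (read-only) view
            with self._cond:
                self._shares += 1
            return self.value

        def unshare(self):                   # return one view
            with self._cond:
                self._shares -= 1
                self._cond.notify_all()

        def reclaim_exclusive(self):         # block until nothing is shared
            with self._cond:
                self._cond.wait_for(lambda: self._shares == 0)
                return self.value

    box = TemporarilyShared([1, 2, 3])
    view = box.share()                        # shared: mutation is off limits
    box.unshare()
    owned = box.reclaim_exclusive()           # exclusive again: safe to mutate
    owned.append(4)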
Citations: 0
A Pretty Expressive Printer
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622837
Sorawee Porncharoenwase, Justin Pombrio, Emina Torlak
Pretty printers make trade-offs between the expressiveness of their pretty printing language, the optimality objective that they minimize when choosing between different ways to lay out a document, and the performance of their algorithm. This paper presents a new pretty printer, Π_e, that is strictly more expressive than all pretty printers in the literature and provably minimizes an optimality objective. Furthermore, the time complexity of Π_e is better than many existing pretty printers. When choosing among different ways to lay out a document, Π_e consults a user-supplied cost factory, which determines the optimality objective, giving Π_e a unique degree of flexibility. We use the Lean theorem prover to verify the correctness (validity and optimality) of Π_e, and implement Π_e concretely as a pretty printer that we call PrettyExpressive. To evaluate our pretty printer against others, we develop a formal framework for reasoning about the expressiveness of pretty printing languages, and survey pretty printers in the literature, comparing their expressiveness, optimality, worst-case time complexity, and practical running time. Our evaluation shows that PrettyExpressive is efficient and effective at producing optimal layouts. PrettyExpressive has also seen real-world adoption: it serves as a foundation of a code formatter for Racket.
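The role of the cost factory can be illustrated with a deliberately naive Python sketch: enumerate a couple of candidate layouts and return the one minimizing a pluggable cost function. Π_e itself is far more expressive and does not work by brute-force enumeration; the layouts, names, and example cost below are our own assumptions.

    def layouts(head, args):
        """Two candidate layouts for a call: all on one line, or one argument per line."""
        yield [head + "(" + ", ".join(args) + ")"]
        yield [head + "("] + ["    " + a + "," for a in args] + [")"]

    def badness(lines, width):
        """An example cost factory: penalize squared overflow past the page width,
        then prefer fewer lines."""
        overflow = sum(max(0, len(line) - width) ** 2 for line in lines)
        return (overflow, len(lines))

    def pretty(head, args, width, cost=badness):
        return min(layouts(head, args), key=lambda ls: cost(ls, width))

    print("\n".join(pretty("frobnicate", ["alpha", "beta", "gamma"], width=80)))
    print("\n".join(pretty("frobnicate", ["alpha", "beta", "gamma"], width=20)))

Swapping in a different cost function (say, one that also penalizes line count quadratically) changes which layout wins without touching the printer, which is the flexibility the abstract attributes to the cost factory.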
Citations: 0
Rhombus: A New Spin on Macros without All the Parentheses
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622818
Matthew Flatt, Taylor Allred, Nia Angle, Stephen De Gabrielle, Robert Bruce Findler, Jack Firth, Kiran Gopinathan, Ben Greenman, Siddhartha Kasivajhula, Alex Knauth, Jay McCarthy, Sam Phillips, Sorawee Porncharoenwase, Jens Axel Søgaard, Sam Tobin-Hochstadt
Rhombus is a new language that is built on Racket. It offers the same kind of language extensibility as Racket itself, but using traditional (infix) notation. Although Rhombus is far from the first language to support Lisp-style macros without Lisp-style parentheses, Rhombus offers a novel synthesis of macro technology that is practical and expressive. A key element is the use of multiple binding spaces for context-specific sublanguages. For example, expressions and pattern-matching forms can use the same operators with different meanings and without creating conflicts. Context-sensitive bindings, in turn, facilitate a language design that reduces the notational distance between the core language and macro facilities. For example, repetitions can be defined and used in binding and expression contexts generally, which enables a smoother transition from programming to metaprogramming. Finally, since handling static information (such as types) is also a necessary part of growing macros beyond Lisp, Rhombus includes support in its expansion protocol for communicating static information among bindings and expressions. The Rhombus implementation demonstrates that all of these pieces can work together in a coherent and user-friendly language.
Citations: 0
Quantifying and Mitigating Cache Side Channel Leakage with Differential Set
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622850
Cong Ma, Dinghao Wu, Gang Tan, Mahmut Taylan Kandemir, Danfeng Zhang
Cache side-channel attacks leverage secret-dependent footprints in CPU cache to steal confidential information, such as encryption keys. Due to the lack of a proper abstraction for reasoning about cache side channels, existing static program analysis tools that can quantify or mitigate cache side channels are built on very different kinds of abstractions. As a consequence, it is hard to bridge advances in quantification and mitigation research. Moreover, existing abstractions lead to imprecise results. In this paper, we present a novel abstraction, called differential set, for analyzing cache side channels at compile time. A distinguishing feature of differential sets is that they allow compositional and precise reasoning about cache side channels. Moreover, it is the first abstraction that carries sufficient information for both side channel quantification and mitigation. Based on this new abstraction, we develop a static analysis tool DSA that automatically quantifies and mitigates cache side channel leakage at the same time. Experimental evaluation on a set of commonly used benchmarks shows that DSA can produce more precise leakage bounds as well as mitigated code with a smaller memory footprint, when compared with state-of-the-art tools that only quantify or mitigate cache side channel leakage.
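As a concrete picture of the signal being quantified (not of the differential-set abstraction itself), the Python sketch below maps a toy access trace to cache sets and checks whether the touched sets depend on the secret. DSA performs this kind of reasoning statically, at compile time, and over all executions; this dynamic toy, with a made-up cache geometry and trace, only illustrates what a secret-dependent footprint is.

    LINE_SIZE, NUM_SETS = 64, 64               # a toy cache geometry

    def touched_sets(addresses):
        return frozenset((addr // LINE_SIZE) % NUM_SETS for addr in addresses)

    def trace(secret, table_base=0x4000):
        # the classic leaky pattern: a table lookup indexed by a secret value
        return [table_base + 64 * secret]

    footprints = {s: touched_sets(trace(s)) for s in range(4)}
    leaks = len(set(footprints.values())) > 1
    print("cache footprint depends on the secret:", leaks)   # True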
Citations: 0
How Profilers Can Help Navigate Type Migration
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622817
Ben Greenman, Matthias Felleisen, Christos Dimoulas
Sound migratory typing envisions a safe and smooth refactoring of untyped code bases to typed ones. However, the cost of enforcing safety with run-time checks is often prohibitively high, thus performance regressions are a likely occurrence. Additional types can often recover performance, but choosing the right components to type is difficult because of the exponential size of the migratory typing lattice. In principle, though, migration could be guided by off-the-shelf profiling tools. To examine this hypothesis, this paper follows the rational programmer method and reports on the results of an experiment on tens of thousands of performance-debugging scenarios via seventeen strategies for turning profiler output into an actionable next step. The most effective strategy is the use of deep types to eliminate the most costly boundaries between typed and untyped components; this strategy succeeds in more than 50% of scenarios if two performance degradations are tolerable along the way.
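The flavour of such strategies can be sketched in a few lines of Python: attribute profiled boundary-crossing cost to the untyped components involved and propose the most expensive one as the next component to annotate. This mirrors the general shape of a profiler-guided, boundary-cost strategy only; it is not one of the seventeen strategies evaluated in the paper, and all names and numbers are hypothetical.

    from collections import Counter

    def next_component_to_type(boundary_profile, already_typed):
        """boundary_profile: {(component_a, component_b): seconds spent on run-time
        checks at that typed/untyped boundary}.  Returns the untyped component
        involved in the most expensive boundaries, or None if nothing is left."""
        cost = Counter()
        for (a, b), seconds in boundary_profile.items():
            for component in (a, b):
                if component not in already_typed:
                    cost[component] += seconds
        return max(cost, key=cost.get) if cost else None

    profile = {("parser", "lexer"): 4.2, ("parser", "eval"): 1.1, ("eval", "gc"): 0.3}
    print(next_component_to_type(profile, already_typed={"lexer"}))   # parser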
Citations: 0
Message Chains for Distributed System Verification
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622876
Federico Mora, Ankush Desai, Elizabeth Polgreen, Sanjit A. Seshia
Verification of asynchronous distributed programs is challenging due to the need to reason about numerous control paths resulting from the myriad interleaving of messages and failures. In this paper, we propose an automated bookkeeping method based on message chains. Message chains reveal structure in asynchronous distributed system executions and can help programmers verify their systems at the message passing level of abstraction. To evaluate our contributions empirically we build a verification prototype for the P programming language that integrates message chains. We use it to verify 16 benchmarks from related work, one new benchmark that exemplifies the kinds of systems our method focuses on, and two industrial benchmarks. We find that message chains are able to simplify existing proofs and our prototype performs comparably to existing work in terms of runtime. We extend our work with support for specification mining and find that message chains provide enough structure to allow existing learning and program synthesis tools to automatically infer meaningful specifications using only execution examples.
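One plausible way to picture a message chain is as the causal path through an execution trace in which each message records the message that triggered it. The toy Python reconstruction below is our own reading for intuition only, not the paper's formal definition, and the trace format and labels are invented.

    def message_chains(trace):
        """trace: list of (msg_id, parent_id or None, label); a chain is the path of
        labels from a root message (no parent) to a message that triggers nothing."""
        children, labels = {}, {}
        for msg, parent, label in trace:
            labels[msg] = label
            children.setdefault(parent, []).append(msg)

        def walk(msg, prefix):
            path = prefix + [labels[msg]]
            kids = children.get(msg, [])
            if not kids:
                yield path
            for kid in kids:
                yield from walk(kid, path)

        for root in children.get(None, []):
            yield from walk(root, [])

    trace = [(1, None, "client->server: Put"),
             (2, 1,    "server->replica: Replicate"),
             (3, 2,    "replica->server: Ack"),
             (4, 3,    "server->client: PutOk")]
    for chain in message_chains(trace):
        print(" => ".join(chain))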
Citations: 0
Validating IoT Devices with Rate-Based Session Types
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622854
Grant Iraci, Cheng-En Chuang, Raymond Hu, Lukasz Ziarek
We develop a session types based framework for implementing and validating rate-based message passing systems in Internet of Things (IoT) domains. To model the indefinite repetition present in many embedded and IoT systems, we introduce a timed process calculus with a periodic recursion primitive. This allows us to model rate-based computations and communications inherent to these application domains. We introduce a definition of rate-based session types in a binary session types setting and a new compatibility relationship, which we call rate compatibility. Programs that type check enjoy the standard session type guarantees as well as rate error freedom, meaning that processes exchanging messages do so at the same rate. Rate compatibility is defined through a new notion of type expansion, a relation that allows communication between processes of differing periods by synthesizing and checking a common superperiod type. We prove type preservation and rate error freedom for our system, and show a decidable method for type checking based on computing superperiods for a collection of processes. We implement a prototype of our type system including rate compatibility via an embedding into the native type system of Rust. We apply this framework to a range of examples from our target domain such as Android software sensors, wearable devices, and sound processing.
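The superperiod idea can be illustrated with a small Python check of our own: two periodic endpoints are, in this toy sense, rate-compatible if they exchange the same number of messages over a common superperiod, here taken to be the least common multiple of their periods. The real system checks this at the level of session types rather than on numbers supplied by hand, and the periods below are invented.

    from math import lcm                     # Python 3.9+

    def rate_compatible(send_period_ms, sends_per_period,
                        recv_period_ms, recvs_per_period):
        super_ms = lcm(send_period_ms, recv_period_ms)      # common superperiod
        sent     = sends_per_period * (super_ms // send_period_ms)
        received = recvs_per_period * (super_ms // recv_period_ms)
        return sent == received

    # A sensor emitting 2 samples every 50 ms matches a consumer reading
    # 8 samples every 200 ms (both amount to 40 messages per second)...
    assert rate_compatible(50, 2, 200, 8)
    # ...but not one expecting 10 samples every 200 ms.
    assert not rate_compatible(50, 2, 200, 10)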
Citations: 0
Explainable Program Synthesis by Localizing Specifications
Q1 Engineering | Pub Date: 2023-10-16 | DOI: 10.1145/3622874
Amirmohammad Nazari, Yifei Huang, Roopsha Samanta, Arjun Radhakrishna, Mukund Raghothaman
The traditional formulation of the program synthesis problem is to find a program that meets a logical correctness specification. When synthesis is successful, there is a guarantee that the implementation satisfies the specification. Unfortunately, synthesis engines are typically monolithic algorithms, and obscure the correspondence between the specification, implementation and user intent. In contrast, humans often include comments in their code to guide future developers towards the purpose and design of different parts of the codebase. In this paper, we introduce subspecifications as a mechanism to augment the synthesized implementation with explanatory notes of this form. In this model, the user may ask for explanations of different parts of the implementation; the subspecification generated in response is a logical formula that describes the constraints induced on that subexpression by the global specification and surrounding implementation. We develop algorithms to construct and verify subspecifications and investigate their theoretical properties. We perform an experimental evaluation of the subspecification generation procedure, and measure its effectiveness and running time. Finally, we conduct a user study to determine whether subspecifications are useful: we find that subspecifications greatly aid in understanding the global specification, in identifying alternative implementations, and in debugging faulty implementations.
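As a toy rendering of the idea, with the global specification given as input/output examples rather than the logical formulas the paper produces, the Python snippet below computes the constraint that a surrounding implementation f(x) = h(x) + 3 induces on its subexpression h. The implementation shape and every name here are our own hypothetical choices.

    # Global specification, as input/output examples: f(x) should equal 2*x + 3.
    global_spec = [(0, 3), (1, 5), (2, 7)]

    def subspec_for_h(spec):
        """Constraint induced on the hole h by the surrounding implementation
        f(x) = h(x) + 3: on every example, h(x) must equal the expected output - 3."""
        return [(x, expected - 3) for x, expected in spec]

    print(subspec_for_h(global_spec))   # [(0, 0), (1, 2), (2, 4)], i.e. h(x) == 2*x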
Citations: 1