
arXiv - CS - Programming Languages: Latest Publications

From Program Logics to Language Logics
Pub Date : 2024-08-02 DOI: arxiv-2408.01515
Matteo Cimini
Program logics are a powerful formal method in the context of program verification. Can we develop a counterpart of program logics in the context of language verification? This paper proposes language logics, which allow for statements of the form $\{P\}\ \mathcal{X}\ \{Q\}$, where $\mathcal{X}$, the subject of analysis, can be a language component such as a piece of grammar, a typing rule, a reduction rule, or other parts of a language definition. To demonstrate our approach, we develop $\mathbb{L}$, a language logic that can be used to analyze language definitions on various aspects of language design. We illustrate $\mathbb{L}$ through the analysis of some selected aspects of a programming language. We have also implemented an automated prover for $\mathbb{L}$, and we confirm that the tool repeats these analyses. Ultimately, $\mathbb{L}$ cannot verify languages. Nonetheless, we believe that this paper provides a strong first step towards adopting the methods of program logics for the analysis of languages.
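To make the shape of such statements concrete, here is a hedged toy sketch of a $\{P\}\ \mathcal{X}\ \{Q\}$ judgment where the subject is a typing rule modeled as plain data. All names (`holds`, `typing_rule`, the predicates) are illustrative inventions, not the paper's actual formalism.

```python
# Hypothetical sketch: a triple {P} X {Q} over a language component.
# If the component satisfies the precondition P, it must satisfy Q.

def holds(pre, component, post):
    return (not pre(component)) or post(component)

# The "subject of analysis" here is a typing rule, modeled as plain data.
typing_rule = {
    "name": "T-Add",
    "premises": ["e1 : int", "e2 : int"],
    "conclusion": "e1 + e2 : int",
}

# P: the rule concerns the + operator; Q: its conclusion types it as int.
pre = lambda r: "+" in r["conclusion"]
post = lambda r: r["conclusion"].endswith(": int")

assert holds(pre, typing_rule, post)
```

The point of the shape, as in program logics, is that the subject between the braces is a language artifact rather than a program.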
Cited by: 0
Regrading Policies for Flexible Information Flow Control in Session-Typed Concurrency
Pub Date : 2024-07-29 DOI: arxiv-2407.20410
Farzaneh Derakhshan, Stephanie Balzer, Yue Yao
Noninterference guarantees that an attacker cannot infer secrets by interacting with a program. Information flow control (IFC) type systems assert noninterference by tracking the level of information learned (pc) and disallowing communication to entities of a lesser or unrelated level than the pc. Control flow constructs such as loops are at odds with this pattern because they necessitate downgrading the pc upon recursion to be practical. In a concurrent setting, however, downgrading is not generally safe. This paper utilizes session types to track the flow of information and contributes an IFC type system for message-passing concurrent processes that allows downgrading the pc upon recursion. To make downgrading safe, the paper introduces regrading policies. Regrading policies are expressed in terms of integrity labels, which are also key to safe composition of entities with different regrading policies. The paper develops the type system and proves progress-sensitive noninterference for well-typed processes, ruling out timing attacks that exploit the relative order of messages. The type system has been implemented in a type checker, which supports security-polymorphic processes using local security theories.
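The pc-based check that IFC type systems enforce can be sketched minimally as follows. The two-point lattice and the `can_send` rule are generic IFC background for illustration, not the paper's session-typed system.

```python
# Minimal sketch of pc tracking: a process may only send on channels
# whose security level is at least the current pc.

LEVELS = {"low": 0, "high": 1}

def can_send(pc, channel_level):
    # Communication to entities of a lesser level than the pc is disallowed.
    return LEVELS[channel_level] >= LEVELS[pc]

assert can_send("low", "high")      # a low pc may write to a high channel
assert not can_send("high", "low")  # a high pc must not leak to a low channel
```

The paper's contribution is what happens when this check must be relaxed (the pc downgraded) at recursion in a concurrent setting, which this static sketch does not model.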
Cited by: 0
Formal Foundations for Translational Separation Logic Verifiers (extended version)
Pub Date : 2024-07-29 DOI: arxiv-2407.20002
Thibault DardinierETH Zurich, Michael SammlerETH Zurich, Gaurav ParthasarathyETH Zurich, Alexander J. SummersUniversity of British Columbia, Peter MüllerETH Zurich
Program verification tools are often implemented as front-end translations of an input program into an intermediate verification language (IVL) such as Boogie, GIL, Viper, or Why3. The resulting IVL program is then verified using an existing back-end verifier. A soundness proof for such a translational verifier needs to relate the input program and verification logic to the semantics of the IVL, which in turn needs to be connected with the verification logic implemented in the back-end verifiers. Performing such proofs is challenging due to the large semantic gap between the input and output programs and logics, especially for complex verification logics such as separation logic. This paper presents a formal framework for reasoning about translational separation logic verifiers. At its center is a generic core IVL that captures the essence of different separation logics. We define its operational semantics and formally connect it to two different back-end verifiers, which use symbolic execution and verification condition generation, respectively. Crucially, this semantics uses angelic non-determinism to enable the application of different proof search algorithms and heuristics in the back-end verifiers. An axiomatic semantics for the core IVL simplifies reasoning about the front-end translation by performing essential proof steps once and for all in the equivalence proof with the operational semantics, rather than for each concrete front-end translation. We illustrate the usefulness of our formal framework by instantiating our core IVL with elements of Viper and connecting it to two Viper back-ends as well as a front-end for concurrent separation logic. All our technical results have been formalized in Isabelle/HOL, including the core IVL and its semantics, the semantics of two back-ends for a subset of Viper, and all proofs.
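For intuition about what a front-end translation into an IVL looks like, here is a hedged toy sketch: a source command is compiled to a list of IVL-style statements (`assert`/`assume`/`havoc`). The command and statement forms are invented for illustration and are not Viper or Boogie syntax.

```python
# Toy front-end translation into IVL-style statements, in the spirit of
# translational verifiers. "havoc x; assume x == e" models assignment;
# a division additionally emits a proof obligation that the divisor
# is non-zero.

def translate(cmd):
    op = cmd[0]
    if op == "assign":                 # x := e
        _, x, e = cmd
        return [("havoc", x), ("assume", f"{x} == {e}")]
    if op == "div":                    # x := a / b, with an obligation
        _, x, a, b = cmd
        return [("assert", f"{b} != 0"),
                ("havoc", x), ("assume", f"{x} == {a} / {b}")]
    raise ValueError(f"unknown command: {op}")

ivl = translate(("div", "x", "a", "b"))
assert ivl[0] == ("assert", "b != 0")
```

A soundness proof for such a translation must connect the source semantics with the IVL semantics, which is exactly the gap the paper's framework addresses.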
Cited by: 0
Detecting and explaining (in)equivalence of context-free grammars
Pub Date : 2024-07-25 DOI: arxiv-2407.18220
Marko Schmellenkamp, Thomas Zeume, Sven Argo, Sandra Kiefer, Cedric Siems, Fynn Stebel
We propose a scalable framework for deciding, proving, and explaining (in)equivalence of context-free grammars. We present an implementation of the framework and evaluate it on large data sets collected within educational support systems. Even though the equivalence problem for context-free languages is undecidable in general, the framework is able to handle a large portion of these datasets. It introduces and combines techniques from several areas, such as an abstract grammar transformation language to identify equivalent grammars as well as sufficiently similar inequivalent grammars, theory-based comparison algorithms for a large class of context-free languages, and a graph-theory-inspired grammar canonization that allows isomorphic grammars to be identified efficiently.
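The simplest ingredient of any such framework can be sketched directly: search for a short witness string that proves two grammars inequivalent by bounded enumeration of both languages. This is a hedged illustration with toy dict-encoded grammars, not the paper's (much richer) method.

```python
from collections import deque

def language_upto(grammar, start, max_len):
    """Terminal strings of length <= max_len derivable from `start`."""
    seen, words = set(), set()
    queue = deque([(start,)])
    while queue:
        form = queue.popleft()
        # Crude pruning: forms much longer than max_len are abandoned
        # (fine for these toy grammars, unsound with heavy epsilon use).
        if form in seen or len(form) > max_len + 2:
            continue
        seen.add(form)
        nt = next((i for i, s in enumerate(form) if s in grammar), None)
        if nt is None:                       # all terminals: a word
            if len(form) <= max_len:
                words.add("".join(form))
            continue
        for rhs in grammar[form[nt]]:        # expand leftmost nonterminal
            queue.append(form[:nt] + tuple(rhs) + form[nt + 1:])
    return words

def witness(g1, g2, start="S", max_len=4):
    """A shortest string in exactly one of the two bounded languages."""
    diff = language_upto(g1, start, max_len) ^ language_upto(g2, start, max_len)
    return min(diff, key=len) if diff else None

G1 = {"S": [["a", "S", "b"], []]}                      # { a^n b^n }
G2 = {"S": [["A", "B"]],
      "A": [["a", "A"], []],
      "B": [["b", "B"], []]}                           # { a^m b^n }
w = witness(G1, G2)
assert w is not None and len(w) == 1                   # e.g. "a" or "b"
```

Because full equivalence is undecidable, such bounded search can only refute equivalence; proving it needs the transformation and canonization machinery the abstract describes.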
Cited by: 0
SPLAT: A framework for optimised GPU code-generation for SParse reguLar ATtention
Pub Date : 2024-07-23 DOI: arxiv-2407.16847
Ahan Gupta, Yueming Yuan, Devansh Jain, Yuhao Ge, David Aponte, Yanqi Zhou, Charith Mendis
Multi-head self-attention (MHSA) mechanisms achieve state-of-the-art (SOTA) performance across natural language processing and vision tasks. However, their quadratic dependence on sequence lengths has bottlenecked inference speeds. To circumvent this bottleneck, researchers have proposed various sparse-MHSA models, where a subset of full attention is computed. Despite their promise, current sparse libraries and compilers do not support high-performance implementations for diverse sparse-MHSA patterns due to the underlying sparse formats they operate on. These formats, which are typically designed for high-performance and scientific computing applications, are either curated for extreme amounts of random sparsity (<1% non-zero values) or for specific sparsity patterns. However, the sparsity patterns in sparse-MHSA are moderately sparse (10-50% non-zero values) and varied, so existing sparse formats trade off generality for performance. We bridge this gap, achieving both generality and performance, by proposing a novel sparse format, affine-compressed-sparse-row (ACSR), and a supporting code-generation scheme, SPLAT, which generates high-performance implementations for diverse sparse-MHSA patterns on GPUs. Core to our proposed format and code-generation algorithm is the observation that common sparse-MHSA patterns have uniquely regular geometric properties. These properties, which can be analyzed just-in-time, expose novel optimizations and tiling strategies that SPLAT exploits to generate high-performance implementations for diverse patterns. To demonstrate SPLAT's efficacy, we use it to generate code for various sparse-MHSA models, achieving geomean speedups of 2.05x and 4.05x over hand-written kernels written in Triton and TVM, respectively, on A100 GPUs. Moreover, its interfaces are intuitive and easy to use with existing implementations of MHSA in JAX.
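The observation the abstract builds on can be illustrated without the format's internals (which are not spelled out here): a regular sparse-MHSA mask such as a sliding window has per-row structure describable by an affine rule, so no per-element index storage is needed. The `affine_row` encoding below is a hypothetical stand-in for that idea, not the actual ACSR layout.

```python
# A sliding-window attention mask is "regular": row r attends to columns
# [max(0, r - w), r]. The non-zeros of each row are therefore a single
# run whose start is an affine function of the row index.

def window_row(row, n, w):
    """Dense 0/1 mask row for sliding-window attention of width w."""
    return [1 if max(0, row - w) <= col <= row else 0 for col in range(n)]

def affine_row(row, w):
    """Same row, compressed to (start, length), start affine in row."""
    start = max(0, row - w)
    return start, row - start + 1

n, w = 6, 2
for r in range(n):
    start, length = affine_row(r, w)
    dense = window_row(r, n, w)
    assert [c for c, v in enumerate(dense) if v] == list(range(start, start + length))
```

A code generator can exploit such a closed-form description to pick tiling strategies at compile time instead of traversing index arrays at run time.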
Cited by: 0
Language-Based Security for Low-Level MPC
Pub Date : 2024-07-23 DOI: arxiv-2407.16504
Christian Skalka, Joseph P. Near
Secure Multi-Party Computation (MPC) is an important enabling technology for data privacy in modern distributed applications. Currently, proof methods for low-level MPC protocols are primarily manual and thus tedious and error-prone, and are also non-standardized and unfamiliar to most PL theorists. As a step towards better language support and language-based enforcement, we develop a new staged PL for defining a variety of low-level probabilistic MPC protocols. We also formulate a collection of confidentiality and integrity hyperproperties for our language model that are familiar from information flow, including conditional noninterference, gradual release, and robust declassification. We demonstrate their relation to standard MPC threat models of passive and malicious security, and how they can be leveraged in security verification of protocols. To prove these properties we develop automated tactics in $\mathbb{F}_2$ that can be integrated with separation logic-style reasoning.
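For readers unfamiliar with the kind of low-level protocol such a language targets, here is standard MPC background (not the paper's calculus): two-party additive secret sharing over GF(2), where a secret bit is split into XOR shares and linear gates are computed locally on shares.

```python
import secrets

# 2-party additive secret sharing over GF(2): each share alone is a
# uniformly random bit, so neither party learns the secret.

def share(bit):
    r = secrets.randbits(1)
    return r, bit ^ r

def reconstruct(s0, s1):
    return s0 ^ s1

def xor_gate(a, b):
    # XOR is linear, so it is computed share-wise with no communication.
    return (a[0] ^ b[0], a[1] ^ b[1])

a, b = share(1), share(0)
assert reconstruct(*xor_gate(a, b)) == 1 ^ 0
```

Proving that such a protocol leaks nothing beyond its output is exactly the kind of probabilistic noninterference argument the paper aims to support with language-level tooling.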
Cited by: 0
Preventing Out-of-Gas Exceptions by Typing
Pub Date : 2024-07-22 DOI: arxiv-2407.15676
Luca Aceto, Daniele Gorla, Stian Lybech, Mohammad Hamdaqa
We continue the development of TinySol, a minimal object-oriented language based on Solidity, the standard smart-contract language used for the Ethereum platform. We first extend TinySol with exceptions and a gas mechanism, and equip it with a small-step operational semantics. Introducing the gas mechanism is fundamental for modelling real-life smart contracts in TinySol, since this is the way in which termination of Ethereum smart contracts is usually ensured. We then devise a type system for smart contracts guaranteeing that such programs never run out of gas at runtime. This is a desirable property for smart contracts, since a transaction that runs out of gas is aborted, but the price paid to run the code is not returned to the invoker.
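The gas mechanism itself is easy to picture: each evaluation step consumes one unit from a budget, and exhausting the budget raises an exception. The toy evaluator below is only an illustration of that dynamics; TinySol's actual semantics is a small-step relation over contracts, and the paper's contribution is a static type system that rules the exception out.

```python
# Toy gas-metered evaluator: every step charges one unit of gas;
# an empty budget raises OutOfGas (the exception the type system
# is designed to prevent).

class OutOfGas(Exception):
    pass

def eval_expr(expr, gas):
    if gas <= 0:
        raise OutOfGas()
    if isinstance(expr, int):          # a literal costs one step
        return expr, gas - 1
    op, lhs, rhs = expr                # only "+" in this sketch
    lv, gas = eval_expr(lhs, gas - 1)  # one step for the operator node
    rv, gas = eval_expr(rhs, gas)
    return lv + rv, gas

value, remaining = eval_expr(("+", 1, ("+", 2, 3)), gas=10)
assert value == 6
try:
    eval_expr(("+", 1, ("+", 2, 3)), gas=2)
    assert False, "should have run out of gas"
except OutOfGas:
    pass
```

A type system with the paper's guarantee would assign this expression a gas bound at compile time, so the second call would be rejected statically rather than aborted at run time.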
Cited by: 0
SNIP: Speculative Execution and Non-Interference Preservation for Compiler Transformations
Pub Date : 2024-07-21 DOI: arxiv-2407.15080
Sören van der Wall, Roland Meyer
We address the problem of preserving non-interference across compiler transformations under speculative semantics. We develop a proof method that ensures the preservation uniformly across all source programs. The basis of our proof method is a new form of simulation relation. It operates over directives that model the attacker's control over the micro-architectural state, and it accounts for the fact that the compiler transformation may change the influence of the micro-architectural state on the execution (and hence the directives). Using our proof method, we show the correctness of dead code elimination. When we tried to prove register allocation correct, we identified a previously unknown weakness that introduces violations of non-interference. We have confirmed the weakness for a mainstream compiler on code from the libsodium cryptographic library. To reclaim security, we develop a novel static analysis that operates on a product of the source program and the register-allocated program. Using the analysis, we present an automated fix for existing register allocation implementations. We prove the correctness of the fixed register allocations with our proof method.
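Dead code elimination, the transformation the paper proves correct, has a simple classical form on straight-line code: walk the program backwards and drop assignments to variables that are never read afterwards. This sketch shows only that syntactic transformation; the paper's point is that its correctness must also hold under speculative semantics, which a purely syntactic view does not capture.

```python
# Backwards liveness-based dead code elimination on a straight-line
# program. Each instruction is (destination, [source variables]).

def dce(prog, live_out):
    live, kept = set(live_out), []
    for dst, srcs in reversed(prog):
        if dst in live:                # the result is observed later: keep
            kept.append((dst, srcs))
            live.discard(dst)          # this definition kills dst ...
            live.update(srcs)          # ... and makes its sources live
        # otherwise: dead assignment, dropped
    return list(reversed(kept))

prog = [("t", ["a", "b"]), ("dead", ["t"]), ("r", ["t"])]
assert dce(prog, live_out={"r"}) == [("t", ["a", "b"]), ("r", ["t"])]
```

Under speculative execution, even an assignment that is dead architecturally may influence the micro-architectural state, which is why the paper's simulation relation tracks attacker directives rather than only program values.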
Cited by: 0
Describe Data to get Science-Data-Ready Tooling: Awkward as a Target for Kaitai Struct YAML
Pub Date : 2024-07-19 DOI: arxiv-2407.14461
Manasvi Goyal, Andrea Zonca, Amy Roberts, Jim Pivarski, Ianna Osborne
In some fields, scientific data formats differ across experiments due to specialized hardware and data acquisition systems. Researchers need to develop, document, and maintain experiment-specific analysis software to interact with these data formats. Such software is often tightly coupled with a particular data format. This proliferation of custom data formats has been a prominent challenge for small to mid-scale experiments. The widespread adoption of ROOT has largely mitigated this problem for the Large Hadron Collider experiments. However, many smaller experiments continue to use custom data formats to meet specific research needs. Therefore, simplifying the process of accessing a unique data format for analysis holds immense value for scientific communities within HEP. We have added Awkward Arrays as a target language for Kaitai Struct for this purpose. Researchers can describe their custom data format in the Kaitai Struct YAML (KSY) language. The Kaitai Struct Compiler generates C++ code to fill the LayoutBuilder buffers using the KSY format. In a few steps, the Kaitai Struct Awkward Runtime API can convert the generated C++ code into a compiled Python module. Finally, the raw data can be passed to the module to produce Awkward Arrays. This paper introduces the Awkward Target for the Kaitai Struct Compiler and the Kaitai Struct Awkward Runtime API. It also demonstrates the conversion of a given KSY for a specific custom file format to Awkward Arrays.
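To see what such a pipeline automates, consider a hypothetical custom binary record layout (a little-endian u4 count followed by that many f8 samples), parsed here by hand with Python's `struct` module. In the KSY workflow, researchers would instead declare this layout once and get the parser and Awkward Arrays generated for them; the format below is invented for illustration.

```python
import struct

# Hand-written parser for a hypothetical custom format:
# a little-endian u4 count, then `count` little-endian f8 samples.
# This is the per-experiment boilerplate the Kaitai Struct + Awkward
# pipeline is meant to replace with a declarative KSY description.

def parse_records(buf):
    (count,) = struct.unpack_from("<I", buf, 0)
    return list(struct.unpack_from(f"<{count}d", buf, 4))

payload = struct.pack("<I3d", 3, 1.0, 2.5, -4.0)
assert parse_records(payload) == [1.0, 2.5, -4.0]
```

Every experiment-specific format needs code like this, plus documentation and maintenance, which is the burden the abstract describes.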
在某些领域,由于硬件和数据采集系统的特殊性,不同实验的科学数据格式各不相同。研究人员需要开发、记录和维护特定于实验的分析软件,以便与这些数据格式进行交互。这些软件通常与特定的数据格式紧密结合。定制数据格式的激增一直是中小型实验面临的一个突出挑战。ROOT 的广泛采用在很大程度上缓解了大型强子对撞机实验的这一问题。然而,许多小型实验仍在继续使用自定义数据格式,以满足特定的研究需求。因此,简化访问独特数据格式进行分析的过程对于 HEP 内的科学界具有巨大价值。为此,我们为 Kaitai Struct 增加了 Awkward Arrays 作为目标语言。研究人员可以用 Kaitai Struct YAML(KSY)语言描述他们的自定义数据格式。Kaitai Struct 编译器会生成 C++ 代码,使用 KSY 格式填充 LayoutBuilder 缓冲区。只需几步,Kaitai Struct Awkward Runtime API 就能将生成的 C++ 代码转换为编译后的 Python 模块。最后,原始数据可以传递给模块,生成 Awkward 数组。本文介绍了用于 Kaitai Struct 编译器的 Awkward Target 和 Kaitai Struct Awkward Runtime API。本文还演示了将特定自定义文件格式的给定 KSY 转换为 Awkward Arrays 的过程。
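The pipeline described above replaces exactly the kind of hand-written, format-specific parser sketched below. This is not the Kaitai Struct or Awkward API itself — the record layout (an `EXP0` magic, a counted list of events, each with counted float32 hits) is hypothetical — but it shows the ragged event/hit structure, parsed from raw bytes with only the Python standard library, that such custom-format tooling must produce:

```python
import struct

def parse_records(buf: bytes):
    """Parse a hypothetical detector dump: b'EXP0' magic, a uint16 event
    count, then per event a uint16 hit count followed by that many
    float32 energies (all little-endian). Returns a ragged list of lists,
    the shape of data Awkward Arrays are designed to hold."""
    if buf[:4] != b"EXP0":
        raise ValueError("bad magic")
    off = 4
    (n_events,) = struct.unpack_from("<H", buf, off)
    off += 2
    events = []
    for _ in range(n_events):
        (n_hits,) = struct.unpack_from("<H", buf, off)
        off += 2
        hits = list(struct.unpack_from(f"<{n_hits}f", buf, off))
        off += 4 * n_hits
        events.append(hits)
    return events

# Build a tiny two-event buffer and round-trip it.
payload = b"EXP0" + struct.pack("<H", 2)
payload += struct.pack("<H3f", 3, 1.5, 2.5, 3.5)
payload += struct.pack("<H1f", 1, 9.0)
print(parse_records(payload))  # -> [[1.5, 2.5, 3.5], [9.0]]
```

With a KSY description, the parsing loop above would instead be generated by the Kaitai Struct Compiler, and the nested lists would arrive as an Awkward Array rather than Python lists.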
Citations: 0
Approximate Relational Reasoning for Higher-Order Probabilistic Programs 高阶概率程序的近似关系推理
Pub Date : 2024-07-19 DOI: arxiv-2407.14107
Philipp G. Haselwarter, Kwing Hei Li, Alejandro Aguirre, Simon Oddershede Gregersen, Joseph Tassarotti, Lars Birkedal
Properties such as provable security and correctness for randomized programs are naturally expressed relationally as approximate equivalences. As a result, a number of relational program logics have been developed to reason about such approximate equivalences of probabilistic programs. However, existing approximate relational logics are mostly restricted to first-order programs without general state. In this paper we develop Approxis, a higher-order approximate relational separation logic for reasoning about approximate equivalence of programs written in an expressive ML-like language with discrete probabilistic sampling, higher-order functions, and higher-order state. The Approxis logic recasts the concept of error credits in the relational setting to reason about relational approximation, which allows for expressive notions of modularity and composition, a range of new approximate relational rules, and an internalization of a standard limiting argument for showing exact probabilistic equivalences by approximation. We also use Approxis to develop a logical relation model that quantifies over error credits, which can be used to prove exact contextual equivalence. We demonstrate the flexibility of our approach on a range of examples, including the PRP/PRF switching lemma, IND$-CPA security of an encryption scheme, and a collection of rejection samplers. All of the results have been mechanized in the Coq proof assistant and the Iris separation logic framework.
随机化程序的可证明安全性和正确性等属性,可以自然地通过关系表达为近似等价。因此,人们开发了许多关系程序逻辑来推理概率程序的近似等价性。然而,现有的近似关系逻辑大多局限于没有一般状态的一阶程序。在本文中,我们开发了一种高阶近似关系分离逻辑 Approxis,用于推理用具有离散概率采样、高阶函数和高阶状态的表达式 ML 样语言编写的程序的近似等价性。Approxis 逻辑重构了关系设置中的误差信用概念,以推理关系近似,它允许模块化和组合的表达式概念、一系列新的近似关系规则,以及标准限制论证的内部化,从而通过近似来显示精确的概率等价性。我们还利用 Approxis 开发了一个逻辑关联模型,该模型可量化错误信用,并可用于证明精确的上下文等价性。我们在一系列示例中展示了我们方法的灵活性,包括 PRP/PRF 切换引理、加密方案的 IND$-CPA 安全性以及一系列拒绝采样器。所有结果都已在 Coq 证明助手和 Iris 分离逻辑框架中实现了机械化。
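Error credits quantify how far apart two probabilistic programs are allowed to drift. As a toy illustration in Python — not Approxis itself, which is a separation logic mechanized in Coq/Iris — consider the rejection-sampler family the abstract mentions: an exact die sampler versus one truncated after a finite retry budget. The truncated version is approximately equivalent to the exact one up to a statistical distance of at most (1/4)^budget, which is the kind of concrete error bound an error-credit judgment carries, and which vanishes in the limiting argument:

```python
import random

def die_exact(rng: random.Random) -> int:
    """Uniform on {1..6}: draw 3 random bits -> {0..7}, retry on 6 or 7."""
    while True:
        x = rng.getrandbits(3)
        if x < 6:
            return x + 1

def die_truncated(rng: random.Random, budget: int) -> int:
    """Same rejection loop, but give up after `budget` attempts and fall
    back to 1. Each attempt rejects with probability 2/8 = 1/4, so this
    sampler is within statistical distance (1/4)**budget of die_exact."""
    for _ in range(budget):
        x = rng.getrandbits(3)
        if x < 6:
            return x + 1
    return 1

rng = random.Random(2024)
draws = [die_truncated(rng, 8) for _ in range(1000)]
print(sorted(set(draws)))  # every draw lands in {1..6} by construction
```

Letting the budget grow recovers exact equivalence — the internalized limiting argument the abstract refers to, here visible as (1/4)^budget tending to zero.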
Citations: 0