
arXiv - CS - Programming Languages: Latest Publications

typedKanren: Statically Typed Relational Programming with Exhaustive Matching in Haskell
Pub Date: 2024-08-06 | arXiv:2408.03170
Nikolai Kudasov, Artem Starikov
We present a statically typed embedding of relational programming (specifically a dialect of miniKanren with disequality constraints) in Haskell. Apart from handling types, our dialect extends the standard relational combinator repertoire with a variation of relational matching that supports static exhaustiveness checks. To hide the boilerplate definitions and support comfortable logic programming with user-defined data types we use generic programming via GHC.Generics as well as metaprogramming via Template Haskell. We demonstrate our dialect on several examples and compare its performance against some other known implementations of miniKanren.
{"title":"typedKanren: Statically Typed Relational Programming with Exhaustive Matching in Haskell","authors":"Nikolai Kudasov, Artem Starikov","doi":"arxiv-2408.03170","DOIUrl":"https://doi.org/arxiv-2408.03170","url":null,"abstract":"We present a statically typed embedding of relational programming\u0000(specifically a dialect of miniKanren with disequality constraints) in Haskell.\u0000Apart from handling types, our dialect extends standard relational combinator\u0000repertoire with a variation of relational matching that supports static\u0000exhaustiveness checks. To hide the boilerplate definitions and support\u0000comfortable logic programming with user-defined data types we use generic\u0000programming via GHC.Generics as well as metaprogramming via Template Haskell.\u0000We demonstrate our dialect on several examples and compare its performance\u0000against some other known implementations of miniKanren.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141941251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
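The relational core that typedKanren embeds in Haskell can be pictured with a minimal, hypothetical miniKanren-style sketch in Python: substitution-based unification with `eq`, `conj`, and `disj` goals. The paper's static typing, exhaustive matching, and disequality constraints are not modelled here; all names are illustrative.

```python
# A minimal miniKanren-style relational core (illustrative sketch only).

class Var:
    """Logic variable, identified by an integer index."""
    def __init__(self, idx):
        self.idx = idx
    def __repr__(self):
        return f"_{self.idx}"

def walk(term, subst):
    # Follow variable bindings until a non-variable or unbound variable.
    while isinstance(term, Var) and term.idx in subst:
        term = subst[term.idx]
    return term

def unify(a, b, subst):
    # Return an extended substitution, or None if the terms do not unify.
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, Var):
        return {**subst, a.idx: b}
    if isinstance(b, Var):
        return {**subst, b.idx: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

def eq(a, b):
    # The == goal: succeeds with every substitution that unifies a and b.
    def goal(subst):
        s = unify(a, b, subst)
        return [s] if s is not None else []
    return goal

def conj(g1, g2):
    # Conjunction: run g2 in every answer substitution of g1.
    return lambda subst: [s2 for s1 in g1(subst) for s2 in g2(s1)]

def disj(g1, g2):
    # Disjunction (fair interleaving omitted for brevity).
    return lambda subst: g1(subst) + g2(subst)

def run(goal, var):
    # Collect the value bound to `var` in every answer.
    return [walk(var, s) for s in goal({})]
```

For example, `run(disj(eq(q, 1), eq(q, 2)), q)` enumerates both answers for a fresh `q = Var(0)`.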
Inferring Accumulative Effects of Higher Order Programs
Pub Date: 2024-08-05 | arXiv:2408.02791
Mihai Nicola, Chaitanya Agarwal, Eric Koskinen, Thomas Wies
Many temporal safety properties of higher-order programs go beyond simple event sequencing and require an automaton register (or "accumulator") to express, such as input-dependency, event summation, resource usage, ensuring equal event magnitude, computation cost, etc. Some steps have been made towards verifying more basic temporal event sequences via reductions to fair termination [Murase et al. 2016] or some input-dependent properties through deductive proof systems [Nanjo et al. 2018]. However, there are currently no automated techniques to verify the more general class of register-automaton safety properties of higher-order programs. We introduce an abstract interpretation-based analysis to compute dependent, register-automata effects of recursive, higher-order programs. We capture properties of a program's effects in terms of automata that summarize the history of observed effects using an accumulator register. The key novelty is a new abstract domain for context-dependent effects, capable of abstracting relations between the program environment, the automaton control state, and the accumulator value. The upshot is a dataflow type and effect system that computes context-sensitive effect summaries. We demonstrate our work via a prototype implementation that computes dependent effect summaries (and validates assertions) for OCaml-like recursive higher-order programs. As a basis of comparison, we describe reductions to assertion checking for effect-free programs, and demonstrate that our approach outperforms the prior tools Drift and RCaml/PCSat. Overall, across a set of 21 new benchmarks, RCaml/PCSat could not verify any, Drift verified 9 benchmarks, and evDrift verified 19; evDrift also achieved a 30.5x speedup over Drift on those benchmarks that both tools could solve.
{"title":"Inferring Accumulative Effects of Higher Order Programs","authors":"Mihai Nicola, Chaitanya Agarwal, Eric Koskinen, Thomas Wies","doi":"arxiv-2408.02791","DOIUrl":"https://doi.org/arxiv-2408.02791","url":null,"abstract":"Many temporal safety properties of higher-order programs go beyond simple\u0000event sequencing and require an automaton register (or \"accumulator\") to\u0000express, such as input-dependency, event summation, resource usage, ensuring\u0000equal event magnitude, computation cost, etc. Some steps have been made towards\u0000verifying more basic temporal event sequences via reductions to fair\u0000termination [Murase et al. 2016] or some input-dependent properties through\u0000deductive proof systems [Nanjo et al. 2018]. However, there are currently no\u0000automated techniques to verify the more general class of register-automaton\u0000safety properties of higher-order programs. We introduce an abstract interpretation-based analysis to compute dependent,\u0000register-automata effects of recursive, higher-order programs. We capture\u0000properties of a program's effects in terms of automata that summarizes the\u0000history of observed effects using an accumulator register. The key novelty is a\u0000new abstract domain for context-dependent effects, capable of abstracting\u0000relations between the program environment, the automaton control state, and the\u0000accumulator value. The upshot is a dataflow type and effect system that\u0000computes context-sensitive effect summaries. We demonstrate our work via a\u0000prototype implementation that computes dependent effect summaries (and\u0000validates assertions) for OCaml-like recursive higher order programs. As a\u0000basis of comparison, we describe reductions to assertion checking for\u0000effect-free programs, and demonstrate that our approach outperforms prior tools\u0000Drift and RCaml/PCSat. 
Overall, across a set of 21 new benchmarks, RCaml/PCSat\u0000could not verify any, Drift verified 9 benchmarks, and evDrift verified 19;\u0000evDrift also had a 30.5x over Drift on those benchmarks that both tools could\u0000solve.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141941153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
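The register-automaton properties the abstract describes can be pictured with a small, hypothetical example: an automaton whose single accumulator register sums observed event magnitudes, expressing a property ("the running total stays within a budget") that plain event ordering cannot capture. The names and the property are illustrative, not the paper's benchmarks.

```python
# A toy accumulator (register) automaton over an event trace.

def check_trace(events, budget):
    """Run the register automaton over a trace of integer events.

    State: one accumulator register `acc` summing event magnitudes.
    The automaton rejects as soon as the accumulated value exceeds
    the budget (the safety property being monitored).
    """
    acc = 0
    for e in events:
        acc += e
        if acc > budget:
            return False  # safety property violated
    return True
```

A static analysis in the paper's style would prove such a property for all traces a program can emit, rather than checking one concrete trace as this monitor does.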
From Program Logics to Language Logics
Pub Date: 2024-08-02 | arXiv:2408.01515
Matteo Cimini
Program logics are a powerful formal method in the context of program verification. Can we develop a counterpart of program logics in the context of language verification? This paper proposes language logics, which allow for statements of the form $\{P\}\ \mathcal{X}\ \{Q\}$ where $\mathcal{X}$, the subject of analysis, can be a language component such as a piece of grammar, a typing rule, a reduction rule or other parts of a language definition. To demonstrate our approach, we develop $\mathbb{L}$, a language logic that can be used to analyze language definitions on various aspects of language design. We illustrate $\mathbb{L}$ on the analysis of some selected aspects of a programming language. We have also implemented an automated prover for $\mathbb{L}$, and we confirm that the tool repeats these analyses. Ultimately, $\mathbb{L}$ cannot verify languages. Nonetheless, we believe that this paper provides a strong first step towards adopting the methods of program logics for the analysis of languages.
{"title":"From Program Logics to Language Logics","authors":"Matteo Cimini","doi":"arxiv-2408.01515","DOIUrl":"https://doi.org/arxiv-2408.01515","url":null,"abstract":"Program logics are a powerful formal method in the context of program\u0000verification. Can we develop a counterpart of program logics in the context of\u0000language verification? This paper proposes language logics, which allow for\u0000statements of the form ${P} mathcal{X} {Q}$ where $mathcal{X}$, the\u0000subject of analysis, can be a language component such as a piece of grammar, a\u0000typing rule, a reduction rule or other parts of a language definition. To\u0000demonstrate our approach, we develop $mathbb{L}$, a language logic that can be\u0000used to analyze language definitions on various aspects of language design. We\u0000illustrate $mathbb{L}$ to the analysis of some selected aspects of a\u0000programming language. We have also implemented an automated prover for\u0000$mathbb{L}$, and we confirm that the tool repeats these analyses. Ultimately,\u0000$mathbb{L}$ cannot verify languages. Nonetheless, we believe that this paper\u0000provides a strong first step towards adopting the methods of program logics for\u0000the analysis of languages.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141941152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Regrading Policies for Flexible Information Flow Control in Session-Typed Concurrency
Pub Date: 2024-07-29 | arXiv:2407.20410
Farzaneh Derakhshan, Stephanie Balzer, Yue Yao
Noninterference guarantees that an attacker cannot infer secrets by interacting with a program. Information flow control (IFC) type systems assert noninterference by tracking the level of information learned (pc) and disallowing communication to entities of lesser or unrelated level than the pc. Control flow constructs such as loops are at odds with this pattern because they necessitate downgrading the pc upon recursion to be practical. In a concurrent setting, however, downgrading is not generally safe. This paper utilizes session types to track the flow of information and contributes an IFC type system for message-passing concurrent processes that allows downgrading the pc upon recursion. To make downgrading safe, the paper introduces regrading policies. Regrading policies are expressed in terms of integrity labels, which are also key to safe composition of entities with different regrading policies. The paper develops the type system and proves progress-sensitive noninterference for well-typed processes, ruling out timing attacks that exploit the relative order of messages. The type system has been implemented in a type checker, which supports security-polymorphic processes using local security theories.
{"title":"Regrading Policies for Flexible Information Flow Control in Session-Typed Concurrency","authors":"Farzaneh Derakhshan, Stephanie Balzer, Yue Yao","doi":"arxiv-2407.20410","DOIUrl":"https://doi.org/arxiv-2407.20410","url":null,"abstract":"Noninterference guarantees that an attacker cannot infer secrets by\u0000interacting with a program. Information flow control (IFC) type systems assert\u0000noninterference by tracking the level of information learned (pc) and\u0000disallowing communication to entities of lesser or unrelated level than the pc.\u0000Control flow constructs such as loops are at odds with this pattern because\u0000they necessitate downgrading the pc upon recursion to be practical. In a\u0000concurrent setting, however, downgrading is not generally safe. This paper\u0000utilizes session types to track the flow of information and contributes an IFC\u0000type system for message-passing concurrent processes that allows downgrading\u0000the pc upon recursion. To make downgrading safe, the paper introduces regrading\u0000policies. Regrading policies are expressed in terms of integrity labels, which\u0000are also key to safe composition of entities with different regrading policies.\u0000The paper develops the type system and proves progress-sensitive\u0000noninterference for well-typed processes, ruling out timing attacks that\u0000exploit the relative order of messages. 
The type system has been implemented in\u0000a type checker, which supports security-polymorphic processes using local\u0000security theories.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141866615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
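The pc-tracking discipline behind such IFC type systems can be sketched informally. The toy functions below use hypothetical names and a two-point lattice; the paper's session types, integrity labels, and regrading policies are not modelled, only the flavour of the basic checks.

```python
# A toy pc-tracking discipline for information flow control (illustrative).

LOW, HIGH = 0, 1  # two-point confidentiality lattice: LOW < HIGH

def can_send(pc, channel_level):
    # Communication to an entity of lesser level than the pc is disallowed:
    # a send is permitted only when the channel is at least as secret as
    # the information already learned.
    return channel_level >= pc

def branch_on(pc, data_level):
    # Branching on data raises the pc to the join of the two levels,
    # recording that control flow now depends on that data.
    return max(pc, data_level)
```

In this picture, "downgrading the pc upon recursion" would mean resetting `pc` to a lower level at a loop head, which is exactly the step the paper's regrading policies must justify.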
Formal Foundations for Translational Separation Logic Verifiers (extended version)
Pub Date: 2024-07-29 | arXiv:2407.20002
Thibault Dardinier (ETH Zurich), Michael Sammler (ETH Zurich), Gaurav Parthasarathy (ETH Zurich), Alexander J. Summers (University of British Columbia), Peter Müller (ETH Zurich)
Program verification tools are often implemented as front-end translations of an input program into an intermediate verification language (IVL) such as Boogie, GIL, Viper, or Why3. The resulting IVL program is then verified using an existing back-end verifier. A soundness proof for such a translational verifier needs to relate the input program and verification logic to the semantics of the IVL, which in turn needs to be connected with the verification logic implemented in the back-end verifiers. Performing such proofs is challenging due to the large semantic gap between the input and output programs and logics, especially for complex verification logics such as separation logic. This paper presents a formal framework for reasoning about translational separation logic verifiers. At its center is a generic core IVL that captures the essence of different separation logics. We define its operational semantics and formally connect it to two different back-end verifiers, which use symbolic execution and verification condition generation, respectively. Crucially, this semantics uses angelic non-determinism to enable the application of different proof search algorithms and heuristics in the back-end verifiers. An axiomatic semantics for the core IVL simplifies reasoning about the front-end translation by performing essential proof steps once and for all in the equivalence proof with the operational semantics rather than for each concrete front-end translation. We illustrate the usefulness of our formal framework by instantiating our core IVL with elements of Viper and connecting it to two Viper back-ends as well as a front-end for concurrent separation logic. All our technical results have been formalized in Isabelle/HOL, including the core IVL and its semantics, the semantics of two back-ends for a subset of Viper, and all proofs.
{"title":"Formal Foundations for Translational Separation Logic Verifiers (extended version)","authors":"Thibault DardinierETH Zurich, Michael SammlerETH Zurich, Gaurav ParthasarathyETH Zurich, Alexander J. SummersUniversity of British Columbia, Peter MüllerETH Zurich","doi":"arxiv-2407.20002","DOIUrl":"https://doi.org/arxiv-2407.20002","url":null,"abstract":"Program verification tools are often implemented as front-end translations of\u0000an input program into an intermediate verification language (IVL) such as\u0000Boogie, GIL, Viper, or Why3. The resulting IVL program is then verified using\u0000an existing back-end verifier. A soundness proof for such a translational\u0000verifier needs to relate the input program and verification logic to the\u0000semantics of the IVL, which in turn needs to be connected with the verification\u0000logic implemented in the back-end verifiers. Performing such proofs is\u0000challenging due to the large semantic gap between the input and output programs\u0000and logics, especially for complex verification logics such as separation\u0000logic. This paper presents a formal framework for reasoning about translational\u0000separation logic verifiers. At its center is a generic core IVL that captures\u0000the essence of different separation logics. We define its operational semantics\u0000and formally connect it to two different back-end verifiers, which use symbolic\u0000execution and verification condition generation, resp. Crucially, this\u0000semantics uses angelic non-determinism to enable the application of different\u0000proof search algorithms and heuristics in the back-end verifiers. An axiomatic\u0000semantics for the core IVL simplifies reasoning about the front-end translation\u0000by performing essential proof steps once and for all in the equivalence proof\u0000with the operational semantics rather than for each concrete front-end\u0000translation. 
We illustrate the usefulness of our formal framework by instantiating our\u0000core IVL with elements of Viper and connecting it to two Viper back-ends as\u0000well as a front-end for concurrent separation logic. All our technical results\u0000have been formalized in Isabelle/HOL, including the core IVL and its semantics,\u0000the semantics of two back-ends for a subset of Viper, and all proofs.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141866527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
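The verification-condition-generation route the abstract mentions can be pictured with a toy weakest-precondition calculator for a three-statement intermediate language. Here an exhaustive check over a small finite domain stands in for a real back-end solver, and all names are illustrative (`require` is used because `assert` is a Python keyword); the paper's angelic non-determinism and separation-logic assertions are not modelled.

```python
# A toy weakest-precondition pipeline for a tiny IVL (illustrative sketch).
# Statements are functions mapping a postcondition (a predicate on states)
# to its weakest precondition; states are dicts.

def assign(var, expr):
    # wp(x := e, Q) = Q[e/x]
    return lambda post: (lambda s: post({**s, var: expr(s)}))

def assume(pred):
    # wp(assume P, Q) = P ==> Q
    return lambda post: (lambda s: (not pred(s)) or post(s))

def require(pred):
    # wp(assert P, Q) = P and Q
    return lambda post: (lambda s: pred(s) and post(s))

def wp(stmts, post):
    # Weakest precondition of a statement sequence, computed back to front.
    for stmt in reversed(stmts):
        post = stmt(post)
    return post

def valid(pred, domain):
    # Finite-domain stand-in for the back-end verifier: check the
    # precondition holds in every initial state with x drawn from `domain`.
    return all(pred({"x": x}) for x in domain)
```

For example, the program `assume x > 0; x := x + 1; assert x > 1` yields a weakest precondition that is valid over any integer range, while dropping the `assume` makes it fail.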
Detecting and explaining (in)equivalence of context-free grammars
Pub Date: 2024-07-25 | arXiv:2407.18220
Marko Schmellenkamp, Thomas Zeume, Sven Argo, Sandra Kiefer, Cedric Siems, Fynn Stebel
We propose a scalable framework for deciding, proving, and explaining (in)equivalence of context-free grammars. We present an implementation of the framework and evaluate it on large data sets collected within educational support systems. Even though the equivalence problem for context-free languages is undecidable in general, the framework is able to handle a large portion of these datasets. It introduces and combines techniques from several areas, such as an abstract grammar transformation language to identify equivalent grammars as well as sufficiently similar inequivalent grammars, theory-based comparison algorithms for a large class of context-free languages, and a graph-theory-inspired grammar canonization that allows isomorphic grammars to be identified efficiently.
{"title":"Detecting and explaining (in)equivalence of context-free grammars","authors":"Marko Schmellenkamp, Thomas Zeume, Sven Argo, Sandra Kiefer, Cedric Siems, Fynn Stebel","doi":"arxiv-2407.18220","DOIUrl":"https://doi.org/arxiv-2407.18220","url":null,"abstract":"We propose a scalable framework for deciding, proving, and explaining\u0000(in)equivalence of context-free grammars. We present an implementation of the\u0000framework and evaluate it on large data sets collected within educational\u0000support systems. Even though the equivalence problem for context-free languages\u0000is undecidable in general, the framework is able to handle a large portion of\u0000these datasets. It introduces and combines techniques from several areas, such\u0000as an abstract grammar transformation language to identify equivalent grammars\u0000as well as sufficiently similar inequivalent grammars, theory-based comparison\u0000algorithms for a large class of context-free languages, and a\u0000graph-theory-inspired grammar canonization that allows to efficiently identify\u0000isomorphic grammars.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141775834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
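Since equivalence of context-free grammars is undecidable in general, tools like the one described above combine partial techniques. A simple, hypothetical bounded check, comparing the word sets each grammar derives up to a length limit, illustrates one such approximation (this is not the paper's algorithm). The sketch assumes single-letter symbols, uppercase nonterminals, and no ε-productions, so any sentential form longer than the bound can be pruned.

```python
# Bounded (in)equivalence check for context-free grammars (illustrative).
# A grammar maps each nonterminal to a list of production strings.

def words(grammar, start, max_len):
    """All terminal words of length <= max_len derivable from `start`.

    Assumes no epsilon-productions: every symbol derives at least one
    terminal, so sentential forms longer than max_len are pruned.
    """
    seen, out = set(), set()
    frontier = [(start,)]
    while frontier:
        form = frontier.pop()
        if form in seen or len(form) > max_len:
            continue
        seen.add(form)
        # Locate the leftmost nonterminal (uppercase symbol).
        idx = next((i for i, s in enumerate(form) if s.isupper()), None)
        if idx is None:
            out.add("".join(form))  # fully terminal word
            continue
        for prod in grammar[form[idx]]:
            frontier.append(form[:idx] + tuple(prod) + form[idx + 1:])
    return out

def bounded_equivalent(g1, s1, g2, s2, max_len):
    # Length-bounded approximation of the undecidable equivalence problem:
    # agreement up to max_len, not a proof of full equivalence.
    return words(g1, s1, max_len) == words(g2, s2, max_len)
```

A disagreement found this way is a genuine counterexample to equivalence; agreement up to the bound is only evidence, which is why the paper layers proof-producing techniques on top of such checks.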
SPLAT: A framework for optimised GPU code-generation for SParse reguLar ATtention
Pub Date: 2024-07-23 | arXiv:2407.16847
Ahan Gupta, Yueming Yuan, Devansh Jain, Yuhao Ge, David Aponte, Yanqi Zhou, Charith Mendis
Multi-head self-attention (MHSA) mechanisms achieve state-of-the-art (SOTA) performance across natural language processing and vision tasks. However, their quadratic dependence on sequence lengths has bottlenecked inference speeds. To circumvent this bottleneck, researchers have proposed various sparse-MHSA models, where a subset of full attention is computed. Despite their promise, current sparse libraries and compilers do not support high-performance implementations for diverse sparse-MHSA patterns due to the underlying sparse formats they operate on. These formats, which are typically designed for high-performance & scientific computing applications, are either curated for extreme amounts of random sparsity (<1% non-zero values) or specific sparsity patterns. However, the sparsity patterns in sparse-MHSA are moderately sparse (10-50% non-zero values) and varied, resulting in existing sparse formats trading off generality for performance. We bridge this gap, achieving both generality and performance, by proposing a novel sparse format, affine-compressed-sparse-row (ACSR), and a supporting code-generation scheme, SPLAT, that generates high-performance implementations for diverse sparse-MHSA patterns on GPUs. Core to our proposed format and code-generation algorithm is the observation that common sparse-MHSA patterns have uniquely regular geometric properties. These properties, which can be analyzed just-in-time, expose novel optimizations and tiling strategies that SPLAT exploits to generate high-performance implementations for diverse patterns. To demonstrate SPLAT's efficacy, we use it to generate code for various sparse-MHSA models, achieving geomean speedups of 2.05x and 4.05x over hand-written kernels written in Triton and TVM respectively on A100 GPUs. Moreover, its interfaces are intuitive and easy to use with existing implementations of MHSA in JAX.
{"title":"SPLAT: A framework for optimised GPU code-generation for SParse reguLar ATtention","authors":"Ahan Gupta, Yueming Yuan, Devansh Jain, Yuhao Ge, David Aponte, Yanqi Zhou, Charith Mendis","doi":"arxiv-2407.16847","DOIUrl":"https://doi.org/arxiv-2407.16847","url":null,"abstract":"Multi-head-self-attention (MHSA) mechanisms achieve state-of-the-art (SOTA)\u0000performance across natural language processing and vision tasks. However, their\u0000quadratic dependence on sequence lengths has bottlenecked inference speeds. To\u0000circumvent this bottleneck, researchers have proposed various sparse-MHSA\u0000models, where a subset of full attention is computed. Despite their promise,\u0000current sparse libraries and compilers do not support high-performance\u0000implementations for diverse sparse-MHSA patterns due to the underlying sparse\u0000formats they operate on. These formats, which are typically designed for\u0000high-performance & scientific computing applications, are either curated for\u0000extreme amounts of random sparsity (<1% non-zero values), or specific sparsity\u0000patterns. However, the sparsity patterns in sparse-MHSA are moderately sparse\u0000(10-50% non-zero values) and varied, resulting in existing sparse-formats\u0000trading off generality for performance. We bridge this gap, achieving both generality and performance, by proposing a\u0000novel sparse format: affine-compressed-sparse-row (ACSR) and supporting\u0000code-generation scheme, SPLAT, that generates high-performance implementations\u0000for diverse sparse-MHSA patterns on GPUs. Core to our proposed format and code\u0000generation algorithm is the observation that common sparse-MHSA patterns have\u0000uniquely regular geometric properties. These properties, which can be analyzed\u0000just-in-time, expose novel optimizations and tiling strategies that SPLAT\u0000exploits to generate high-performance implementations for diverse patterns. 
To\u0000demonstrate SPLAT's efficacy, we use it to generate code for various\u0000sparse-MHSA models, achieving geomean speedups of 2.05x and 4.05x over\u0000hand-written kernels written in triton and TVM respectively on A100 GPUs.\u0000Moreover, its interfaces are intuitive and easy to use with existing\u0000implementations of MHSA in JAX.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
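ACSR builds on compressed-sparse-row; the sketch below shows only the plain CSR baseline (the affine index compression that gives ACSR its name is not reproduced here) as a reference point for what such formats store and how a sparse matrix-vector product reads them.

```python
# Plain CSR (compressed-sparse-row) baseline: the values array, the column
# index of each stored value, and row pointers delimiting each row's slice.

def csr_from_dense(rows):
    values, col_idx, row_ptr = [], [], [0]
    for row in rows:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # end of this row's slice
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    # Sparse matrix-vector product: only stored (non-zero) entries are read.
    out = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        out.append(acc)
    return out
```

For the regular geometric masks of sparse attention, the per-entry `col_idx` array is largely redundant, which is the kind of redundancy an affine compression scheme can exploit.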
Language-Based Security for Low-Level MPC
Pub Date: 2024-07-23 | arXiv:2407.16504
Christian Skalka, Joseph P. Near
Secure Multi-Party Computation (MPC) is an important enabling technology for data privacy in modern distributed applications. Currently, proof methods for low-level MPC protocols are primarily manual and thus tedious and error-prone, and are also non-standardized and unfamiliar to most PL theorists. As a step towards better language support and language-based enforcement, we develop a new staged PL for defining a variety of low-level probabilistic MPC protocols. We also formulate a collection of confidentiality and integrity hyperproperties for our language model that are familiar from information flow, including conditional noninterference, gradual release, and robust declassification. We demonstrate their relation to standard MPC threat models of passive and malicious security, and how they can be leveraged in security verification of protocols. To prove these properties we develop automated tactics in $\mathbb{F}_2$ that can be integrated with separation logic-style reasoning.
{"title":"Language-Based Security for Low-Level MPC","authors":"Christian Skalka, Joseph P. Near","doi":"arxiv-2407.16504","DOIUrl":"https://doi.org/arxiv-2407.16504","url":null,"abstract":"Secure Multi-Party Computation (MPC) is an important enabling technology for\u0000data privacy in modern distributed applications. Currently, proof methods for\u0000low-level MPC protocols are primarily manual and thus tedious and error-prone,\u0000and are also non-standardized and unfamiliar to most PL theorists. As a step\u0000towards better language support and language-based enforcement, we develop a\u0000new staged PL for defining a variety of low-level probabilistic MPC protocols.\u0000We also formulate a collection of confidentiality and integrity hyperproperties\u0000for our language model that are familiar from information flow, including\u0000conditional noninterference, gradual release, and robust declassification. We\u0000demonstrate their relation to standard MPC threat models of passive and\u0000malicious security, and how they can be leveraged in security verification of\u0000protocols. To prove these properties we develop automated tactics in\u0000$mathbb{F}_2$ that can be integrated with separation logic-style reasoning.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141775835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Preventing Out-of-Gas Exceptions by Typing
Pub Date : 2024-07-22 DOI: arxiv-2407.15676
Luca Aceto, Daniele Gorla, Stian Lybech, Mohammad Hamdaqa
We continue the development of TinySol, a minimal object-oriented language based on Solidity, the standard smart-contract language used for the Ethereum platform. We first extend TinySol with exceptions and a gas mechanism, and equip it with a small-step operational semantics. Introducing the gas mechanism is fundamental for modelling real-life smart contracts in TinySol, since this is the way in which termination of Ethereum smart contracts is usually ensured. We then devise a type system for smart contracts guaranteeing that such programs never run out of gas at runtime. This is a desirable property for smart contracts, since a transaction that runs out of gas is aborted, but the price paid to run the code is not returned to the invoker.
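TinySol itself is not shown in the abstract, but the idea can be sketched: a gas-metered interpreter that aborts with an exception, plus a static bound playing the role of the type system's guarantee. This is a hypothetical toy with a straight-line instruction set of our own invention, not the paper's language.

```python
class OutOfGas(Exception):
    """Raised when a program exhausts its gas budget at runtime."""

def run(instrs, gas: int):
    """Execute straight-line instructions, charging one unit of gas each.
    Mirrors the aborted-transaction semantics: the exception is raised
    and no refund is modelled."""
    env = {}
    for op, *args in instrs:
        if gas <= 0:
            raise OutOfGas("transaction aborted; gas already paid is not returned")
        gas -= 1
        if op == "set":
            name, value = args
            env[name] = value
        elif op == "add":
            dst, x, y = args
            env[dst] = env[x] + env[y]
    return env, gas

def gas_bound(instrs) -> int:
    """A (trivial) static upper bound on gas use for straight-line code.
    The type-system-style guarantee: run(instrs, gas_bound(instrs))
    never raises OutOfGas."""
    return len(instrs)
```

The paper's type system does this statically for a real language with loops and calls; here the bound is trivial because the toy programs are straight-line.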
Cited by: 0
SNIP: Speculative Execution and Non-Interference Preservation for Compiler Transformations
Pub Date : 2024-07-21 DOI: arxiv-2407.15080
Sören van der Wall, Roland Meyer
We address the problem of preserving non-interference across compiler transformations under speculative semantics. We develop a proof method that ensures the preservation uniformly across all source programs. The basis of our proof method is a new form of simulation relation. It operates over directives that model the attacker's control over the micro-architectural state, and it accounts for the fact that the compiler transformation may change the influence of the micro-architectural state on the execution (and hence the directives). Using our proof method, we show the correctness of dead code elimination. When we tried to prove register allocation correct, we identified a previously unknown weakness that introduces violations to non-interference. We have confirmed the weakness for a mainstream compiler on code from the libsodium cryptographic library. To reclaim security once more, we develop a novel static analysis that operates on a product of source program and register-allocated program. Using the analysis, we present an automated fix to existing register allocation implementations. We prove the correctness of the fixed register allocations with our proof method.
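The register-allocation weakness can be illustrated with a toy model (this is our own simplified sketch, not the paper's proof method or analysis): the attacker observes the addresses of memory accesses, and a `reload` stands in for a possibly misspeculated load of a spilled stack slot. A secret kept in a register never reaches the trace, but once the allocator spills it, a speculative reload can feed it into an address.

```python
def trace(prog, secret, public):
    """Run a toy straight-line machine, recording the address of every
    data-memory access -- the attacker's observation in this model."""
    regs = {"r0": 0, "r1": 0}
    stack = {}
    obs = []
    for op, *args in prog:
        if op == "movi":        # load an input into a register
            dst, src = args
            regs[dst] = secret if src == "secret" else public
        elif op == "spill":     # store inserted by the register allocator
            slot, src = args
            stack[slot] = regs[src]
            obs.append(("store", slot))
        elif op == "reload":    # (possibly misspeculated) load of a slot
            dst, slot = args
            regs[dst] = stack[slot]
            obs.append(("load", slot))
        elif op == "index":     # access arr[regs[src]]: the address leaks the value
            src, = args
            obs.append(("load", ("arr", regs[src])))
    return obs

def noninterferent(prog) -> bool:
    """Non-interference in this model: traces agree for any two secrets."""
    return trace(prog, secret=0, public=7) == trace(prog, secret=1, public=7)

# Source program: the secret stays in r0; only the public value is indexed.
src_prog = [("movi", "r0", "secret"), ("movi", "r1", "public"), ("index", "r1")]
# After "register allocation": the secret is spilled, and a reload of the
# stale slot reaches the indexing access -- the trace now depends on it.
ra_prog = [("movi", "r0", "secret"), ("spill", "s0", "r0"),
           ("movi", "r1", "public"), ("reload", "r1", "s0"), ("index", "r1")]
```

In the toy model `noninterferent(src_prog)` holds while `noninterferent(ra_prog)` fails, mirroring how a transformation that is correct for architectural semantics can break non-interference under speculation.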
Cited by: 0
Journal
arXiv - CS - Programming Languages