
arXiv - CS - Programming Languages: Latest Publications

Memory Consistency and Program Transformations
Pub Date: 2024-09-18 DOI: arxiv-2409.12013
Akshay Gopalakrishnan (McGill University), Clark Verbrugge (McGill University), Mark Batty (University of Kent)
A memory consistency model specifies the allowed behaviors of shared memory concurrent programs. At the language level, these models are known to have a non-trivial impact on the safety of program optimizations, limiting the ability to rearrange/refactor code without introducing new behaviors. Existing programming language memory models try to address this by permitting more (relaxed/weak) concurrent behaviors but are still unable to allow all the desired optimizations. A core problem is that weaker consistency models may also render optimizations unsafe, a conclusion that goes against the intuition of them allowing more behaviors. This exposes an open problem of the compositional interaction between memory consistency semantics and optimizations: which parts of the semantics correspond to allowing/disallowing which set of optimizations is unclear. In this work, we establish a formal foundation suitable for understanding this compositional nature, decomposing optimizations into a finite set of elementary effects on program execution traces, over which aspects of safety can be assessed. We use this decomposition to identify a desirable compositional property (complete) that would guarantee the safety of optimizations from one memory model to another. We showcase its practicality by proving such a property between Sequential Consistency (SC) and $SC_{RR}$, the latter allowing independent read-read reordering over $SC$. Our work potentially paves the way to a new design methodology of programming-language memory models, one that places emphasis on the optimizations desired to be performed.
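As a side note (not taken from the paper), the classic message-passing litmus test below illustrates why independent read-read reordering is unsafe under SC but admissible in a model such as $SC_{RR}$ that permits it: the sketch enumerates all SC interleavings of a two-thread program before and after swapping the two reads, and the transformed program exhibits an outcome the original forbids. The thread shapes and the enumeration code are illustrative assumptions, not the paper's formalism.

    # Illustrative only: enumerate sequentially consistent interleavings of a
    # message-passing litmus test, before and after reordering the two
    # independent reads in the reading thread.
    def interleavings(t1, t2):
        if not t1 and not t2:
            yield []
        if t1:
            for rest in interleavings(t1[1:], t2):
                yield [t1[0]] + rest
        if t2:
            for rest in interleavings(t1, t2[1:]):
                yield [t2[0]] + rest

    def sc_outcomes(reader_ops):
        writer_ops = [("write", "x"), ("write", "y")]      # T1: x := 1; y := 1
        results = set()
        for schedule in interleavings(writer_ops, reader_ops):
            mem, regs = {"x": 0, "y": 0}, {}
            for op in schedule:
                if op[0] == "write":
                    mem[op[1]] = 1
                else:                                       # ("read", var, reg)
                    regs[op[2]] = mem[op[1]]
            results.add((regs["r1"], regs["r2"]))
        return results

    original  = [("read", "y", "r1"), ("read", "x", "r2")]  # T2: r1 := y; r2 := x
    reordered = [("read", "x", "r2"), ("read", "y", "r1")]  # independent reads swapped

    print(sc_outcomes(original))    # (1, 0) never appears: SC forbids it here
    print(sc_outcomes(reordered))   # (1, 0) appears: the reordering adds a behavior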
{"title":"Memory Consistency and Program Transformations","authors":"Akshay GopalakrishnanMcGill University, Clark VerbruggeMcGill University, Mark BattyUniversity of Kent","doi":"arxiv-2409.12013","DOIUrl":"https://doi.org/arxiv-2409.12013","url":null,"abstract":"A memory consistency model specifies the allowed behaviors of shared memory\u0000concurrent programs. At the language level, these models are known to have a\u0000non-trivial impact on the safety of program optimizations, limiting the ability\u0000to rearrange/refactor code without introducing new behaviors. Existing\u0000programming language memory models try to address this by permitting more\u0000(relaxed/weak) concurrent behaviors but are still unable to allow all the\u0000desired optimizations. A core problem is that weaker consistency models may\u0000also render optimizations unsafe, a conclusion that goes against the intuition\u0000of them allowing more behaviors. This exposes an open problem of the\u0000compositional interaction between memory consistency semantics and\u0000optimizations: which parts of the semantics correspond to allowing/disallowing\u0000which set of optimizations is unclear. In this work, we establish a formal\u0000foundation suitable enough to understand this compositional nature, decomposing\u0000optimizations into a finite set of elementary effects on program execution\u0000traces, over which aspects of safety can be assessed. We use this decomposition\u0000to identify a desirable compositional property (complete) that would guarantee\u0000the safety of optimizations from one memory model to another. We showcase its\u0000practicality by proving such a property between Sequential Consistency (SC) and\u0000$SC_{RR}$, the latter allowing independent read-read reordering over $SC$. Our\u0000work potentially paves way to a new design methodology of programming-language\u0000memory models, one that places emphasis on the optimizations desired to be\u0000performed.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Quantum Multiparty Session Types
Pub Date: 2024-09-17 DOI: arxiv-2409.11133
Ivan Lanese, Ugo Dal Lago, Vikraman Choudhury
Multiparty Session Types (MPSTs) offer a structured way of specifying communication protocols and guarantee relevant communication properties, such as deadlock-freedom. In this paper, we extend a minimal MPST system with quantum data and operations, enabling the specification of quantum protocols. Quantum MPSTs (QMPSTs) provide a formal notation to describe quantum protocols, both at the abstract level of global types, describing which communications can take place in the system and their dependencies, and at the concrete level of local types and quantum processes, describing the expected behavior of each participant in the protocol. Type-checking relates these two levels formally, ensuring that processes behave as prescribed by the global type. Beyond usual communication properties, QMPSTs also allow us to prove that qubits are owned by a single process at any time, capturing the quantum no-cloning and no-deleting theorems. We use our approach to verify four quantum protocols from the literature, respectively Teleportation, Secret Sharing, Bit-Commitment, and Key Distribution.
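For readers unfamiliar with the MPST background the abstract assumes, the toy sketch below shows the general shape of the two levels it mentions: a global type listed as sender-to-receiver communications, projected onto per-participant local types. The roles, labels, and projection rule are invented for illustration and are not the QMPST calculus.

    # Toy MPST-style projection, purely illustrative (labels are made up).
    GLOBAL_TELEPORT = [
        ("Alice", "Bob", "measurement_bits"),   # hypothetical communication
        ("Bob", "Alice", "ack"),
    ]

    def project(global_type, role):
        """Project a global type onto one participant's local type."""
        local = []
        for sender, receiver, label in global_type:
            if role == sender:
                local.append(f"{receiver}!{label}")    # send action
            elif role == receiver:
                local.append(f"{sender}?{label}")      # receive action
            # communications not involving `role` are dropped in this toy version
        return local

    print(project(GLOBAL_TELEPORT, "Alice"))   # ['Bob!measurement_bits', 'Bob?ack']
    print(project(GLOBAL_TELEPORT, "Bob"))     # ['Alice?measurement_bits', 'Alice!ack']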
{"title":"Towards Quantum Multiparty Session Types","authors":"Ivan Lanese, Ugo Dal Lago, Vikraman Choudhury","doi":"arxiv-2409.11133","DOIUrl":"https://doi.org/arxiv-2409.11133","url":null,"abstract":"Multiparty Session Types (MPSTs) offer a structured way of specifying\u0000communication protocols and guarantee relevant communication properties, such\u0000as deadlock-freedom. In this paper, we extend a minimal MPST system with\u0000quantum data and operations, enabling the specification of quantum protocols.\u0000Quantum MPSTs (QMPSTs) provide a formal notation to describe quantum protocols,\u0000both at the abstract level of global types, describing which communications can\u0000take place in the system and their dependencies, and at the concrete level of\u0000local types and quantum processes, describing the expected behavior of each\u0000participant in the protocol. Type-checking relates these two levels formally,\u0000ensuring that processes behave as prescribed by the global type. Beyond usual\u0000communication properties, QMPSTs also allow us to prove that qubits are owned\u0000by a single process at any time, capturing the quantum no-cloning and\u0000no-deleting theorems. We use our approach to verify four quantum protocols from\u0000the literature, respectively Teleportation, Secret Sharing, Bit-Commitment, and\u0000Key Distribution.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Scheme Pearl: Quantum Continuations
Pub Date: 2024-09-17 DOI: arxiv-2409.11106
Vikraman Choudhury, Borislav Agapiev, Amr Sabry
We advance the thesis that the simulation of quantum circuits is fundamentally about the efficient management of a large (potentially exponential) number of delimited continuations. The family of Scheme languages, with its efficient implementations of first-class continuations and with its imperative constructs, provides an elegant host for modeling and simulating quantum circuits.
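To make the thesis concrete, here is a minimal sketch, in Python rather than Scheme, of simulating a gate by invoking the rest of the computation (the continuation) once per branch of the superposition it creates. The amplitude bookkeeping and gate encoding are assumptions for the example, not the paper's implementation.

    # Each gate receives the continuation `k` and calls it once per branch.
    import math
    from collections import defaultdict

    def hadamard(state, qubit, amp, k, acc):
        s = 1 / math.sqrt(2)
        bit = state[qubit]
        # |0> -> (|0> + |1>)/sqrt(2),  |1> -> (|0> - |1>)/sqrt(2)
        k(state[:qubit] + (0,) + state[qubit + 1:], amp * s, acc)
        k(state[:qubit] + (1,) + state[qubit + 1:], amp * (s if bit == 0 else -s), acc)

    def amplitudes(n_qubits, circuit):
        acc = defaultdict(float)
        def done(state, amp, acc):
            acc[state] += amp                     # leaf continuation: record amplitude
        def rest_from(i):
            if i == len(circuit):
                return done
            gate, qubit = circuit[i]
            k = rest_from(i + 1)                  # continuation = the remaining gates
            return lambda state, amp, acc: gate(state, qubit, amp, k, acc)
        rest_from(0)((0,) * n_qubits, 1.0, acc)
        return dict(acc)

    # Two Hadamards interfere back to |0>: amplitude ~1.0 for (0,), ~0.0 for (1,)
    print(amplitudes(1, [(hadamard, 0), (hadamard, 0)]))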
{"title":"Scheme Pearl: Quantum Continuations","authors":"Vikraman Choudhury, Borislav Agapiev, Amr Sabry","doi":"arxiv-2409.11106","DOIUrl":"https://doi.org/arxiv-2409.11106","url":null,"abstract":"We advance the thesis that the simulation of quantum circuits is\u0000fundamentally about the efficient management of a large (potentially\u0000exponential) number of delimited continuations. The family of Scheme languages,\u0000with its efficient implementations of first-class continuations and with its\u0000imperative constructs, provides an elegant host for modeling and simulating\u0000quantum circuits.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Minuska: Towards a Formally Verified Programming Language Framework
Pub Date: 2024-09-17 DOI: arxiv-2409.11530
Jan Tušil, Jan Obdržálek
Programming language frameworks allow us to generate language tools (e.g., interpreters) just from a formal description of the syntax and semantics of a programming language. As these frameworks tend to be quite complex, the question arises of whether we can trust the generated tools. To address this issue, we introduce a practical formal programming language framework called Minuska, which always generates a provably correct interpreter given a valid language definition. This is achieved by (1) defining a language MinusLang for expressing programming language definitions and giving it formal semantics and (2) using the Coq proof assistant to implement an interpreter parametric in a MinusLang definition and to prove it correct. Minuska provides strong correctness guarantees and can support nontrivial languages while performing well. This is the extended version of the SEFM24 paper of the same name.
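To illustrate what an interpreter "parametric in a language definition" can mean in general (this is not MinusLang or Minuska's Coq development), the sketch below takes a language definition as a list of rewrite rules over ground terms and iterates them to a normal form; the term encoding and rules are invented for the example.

    # A generic interpreter parameterized by rewrite rules (illustrative only).
    def rewrite_once(rules, term):
        """Apply the first matching rule at the root, else try the children."""
        for matches, build in rules:
            if matches(term):
                return build(term)
        if term[0] != "lit":
            for i, child in enumerate(term[1:], start=1):
                stepped = rewrite_once(rules, child)
                if stepped is not None:
                    return term[:i] + (stepped,) + term[i + 1:]
        return None                                  # no rule applies: normal form

    def interpret(rules, term, fuel=1000):
        for _ in range(fuel):
            nxt = rewrite_once(rules, term)
            if nxt is None:
                return term
            term = nxt
        raise RuntimeError("out of fuel")

    # A toy "language definition": addition over integer literals.
    ADD_RULES = [
        (lambda t: t[0] == "add" and t[1][0] == "lit" and t[2][0] == "lit",
         lambda t: ("lit", t[1][1] + t[2][1])),
    ]

    print(interpret(ADD_RULES, ("add", ("lit", 1), ("add", ("lit", 2), ("lit", 3)))))
    # -> ('lit', 6)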
{"title":"Minuska: Towards a Formally Verified Programming Language Framework","authors":"Jan Tušil, Jan Obdržálek","doi":"arxiv-2409.11530","DOIUrl":"https://doi.org/arxiv-2409.11530","url":null,"abstract":"Programming language frameworks allow us to generate language tools (e.g.,\u0000interpreters) just from a formal description of the syntax and semantics of a\u0000programming language. As these frameworks tend to be quite complex, an issue\u0000arises whether we can trust the generated tools. To address this issue, we\u0000introduce a practical formal programming language framework called Minuska,\u0000which always generates a provably correct interpreter given a valid language\u0000definition. This is achieved by (1) defining a language MinusLang for\u0000expressing programming language definitions and giving it formal semantics and\u0000(2) using the Coq proof assistant to implement an interpreter parametric in a\u0000MinusLang definition and to prove it correct. Minuska provides strong\u0000correctness guarantees and can support nontrivial languages while performing\u0000well. This is the extended version of the SEFM24 paper of the same name.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
No Saved Kaleidosope: an 100% Jitted Neural Network Coding Language with Pythonic Syntax
Pub Date: 2024-09-17 DOI: arxiv-2409.11600
Augusto Seben da Rosa, Marlon Daniel Angeli, Jorge Aikes Junior, Alef Iury Ferreira, Lucas Rafael Gris, Anderson da Silva Soares, Arnaldo Candido Junior, Frederico Santos de Oliveira, Gabriel Trevisan Damke, Rafael Teixeira Sousa
We developed a jitted compiler for training Artificial Neural Networks using C++, LLVM, and CUDA. It features object-oriented characteristics, strong typing, parallel workers for data pre-processing, pythonic syntax for expressions, PyTorch-like model declaration, and Automatic Differentiation. We implement caching and pooling mechanisms to manage VRAM, cuBLAS for high-performance matrix multiplication, and cuDNN for convolutional layers. In our experiments with Residual Convolutional Neural Networks on ImageNet, we reach similar speed but degraded performance. The GRU network experiments likewise show similar accuracy, but our compiler has degraded speed on that task. However, our compiler demonstrates promising results on the CIFAR-10 benchmark, where we reach the same performance and about the same speed as PyTorch. We make the code publicly available at: https://github.com/NoSavedDATA/NoSavedKaleidoscope
{"title":"No Saved Kaleidosope: an 100% Jitted Neural Network Coding Language with Pythonic Syntax","authors":"Augusto Seben da Rosa, Marlon Daniel Angeli, Jorge Aikes Junior, Alef Iury Ferreira, Lucas Rafael Gris, Anderson da Silva Soares, Arnaldo Candido Junior, Frederico Santos de Oliveira, Gabriel Trevisan Damke, Rafael Teixeira Sousa","doi":"arxiv-2409.11600","DOIUrl":"https://doi.org/arxiv-2409.11600","url":null,"abstract":"We developed a jitted compiler for training Artificial Neural Networks using\u0000C++, LLVM and Cuda. It features object-oriented characteristics, strong typing,\u0000parallel workers for data pre-processing, pythonic syntax for expressions,\u0000PyTorch like model declaration and Automatic Differentiation. We implement the\u0000mechanisms of cache and pooling in order to manage VRAM, cuBLAS for high\u0000performance matrix multiplication and cuDNN for convolutional layers. Our\u0000experiments with Residual Convolutional Neural Networks on ImageNet, we reach\u0000similar speed but degraded performance. Also, the GRU network experiments show\u0000similar accuracy, but our compiler have degraded speed in that task. However,\u0000our compiler demonstrates promising results at the CIFAR-10 benchmark, in which\u0000we reach the same performance and about the same speed as PyTorch. We make the\u0000code publicly available at: https://github.com/NoSavedDATA/NoSavedKaleidoscope","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Incredible Shrinking Context... in a decompiler near you
Pub Date: 2024-09-17 DOI: arxiv-2409.11157
Sifis Lagouvardos, Yannis Bollanos, Neville Grech, Yannis Smaragdakis
Decompilation of binary code has arisen as a highly important application in the space of Ethereum VM (EVM) smart contracts. Major new decompilers appear nearly every year and attain popularity, for a multitude of reverse-engineering or tool-building purposes. Technically, the problem is fundamental: it consists of recovering high-level control flow from a highly optimized continuation-passing-style (CPS) representation. Architecturally, decompilers can be built using either static analysis or symbolic execution techniques. We present Shrnkr, a static-analysis-based decompiler succeeding the state-of-the-art Elipmoc decompiler. Shrnkr manages to achieve drastic improvements relative to the state of the art, in all significant dimensions: scalability, completeness, precision. Chief among the techniques employed is a new variant of static analysis context: shrinking context sensitivity. Shrinking context sensitivity performs deep cuts in the static analysis context, eagerly "forgetting" control-flow history, in order to leave room for further precise reasoning. We compare Shrnkr to state-of-the-art decompilers, both static-analysis- and symbolic-execution-based. In a standard benchmark set, Shrnkr scales to over 99.5% of contracts (compared to ~95%), covers (i.e., reaches and manages to decompile) 67% more code, and reduces key imprecision metrics by over 65%.
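The abstract's "shrinking context sensitivity" is the paper's own policy; the sketch below only illustrates the generic idea it builds on, namely keying analysis facts by a call-string context that is aggressively cut so older control-flow history is forgotten. The depth and call-site names are arbitrary assumptions.

    # Generic call-string truncation, for illustration (not the paper's policy).
    K = 2   # assumed maximum context depth

    def push_context(context, call_site, k=K):
        """Append a call site, then shrink the context to its last k entries."""
        return (context + (call_site,))[-k:]

    ctx = ()
    for site in ["dispatch", "withdraw", "check_balance", "log"]:
        ctx = push_context(ctx, site)
        print(ctx)
    # Deep call chains collapse onto the same short context, bounding the number
    # of contexts the analysis must track at the cost of some precision.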
{"title":"The Incredible Shrinking Context... in a decompiler near you","authors":"Sifis Lagouvardos, Yannis Bollanos, Neville Grech, Yannis Smaragdakis","doi":"arxiv-2409.11157","DOIUrl":"https://doi.org/arxiv-2409.11157","url":null,"abstract":"Decompilation of binary code has arisen as a highly-important application in\u0000the space of Ethereum VM (EVM) smart contracts. Major new decompilers appear\u0000nearly every year and attain popularity, for a multitude of reverse-engineering\u0000or tool-building purposes. Technically, the problem is fundamental: it consists\u0000of recovering high-level control flow from a highly-optimized\u0000continuation-passing-style (CPS) representation. Architecturally, decompilers\u0000can be built using either static analysis or symbolic execution techniques. We present Shrknr, a static-analysis-based decompiler succeeding the\u0000state-of-the-art Elipmoc decompiler. Shrknr manages to achieve drastic\u0000improvements relative to the state of the art, in all significant dimensions:\u0000scalability, completeness, precision. Chief among the techniques employed is a\u0000new variant of static analysis context: shrinking context sensitivity.\u0000Shrinking context sensitivity performs deep cuts in the static analysis\u0000context, eagerly \"forgetting\" control-flow history, in order to leave room for\u0000further precise reasoning. We compare Shrnkr to state-of-the-art decompilers, both static-analysis- and\u0000symbolic-execution-based. In a standard benchmark set, Shrnkr scales to over\u000099.5% of contracts (compared to ~95%), covers (i.e., reaches and manages to\u0000decompile) 67% more code, and reduces key imprecision metrics by over 65%.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Coordination-free Collaborative Replication based on Operational Transformation
Pub Date: 2024-09-16 DOI: arxiv-2409.09934
Masato Takeichi
We introduce Coordination-free Collaborative Replication (CCR), a new method for maintaining consistency across replicas in distributed systems without requiring explicit coordination messages. CCR automates conflict resolution, contrasting with traditional data-sharing systems that typically involve centralized update management or predefined consistency rules. Operational Transformation (OT), commonly used in collaborative editing, ensures consistency by transforming operations while maintaining document integrity across replicas. However, OT assumes server-based coordination, which is unsuitable for modern, decentralized Peer-to-Peer (P2P) systems. Conflict-free Replicated Data Types (CRDTs), like Two-Phase Sets (2P-Sets), guarantee eventual consistency by allowing commutative and associative operations but often result in counterintuitive behaviors, such as failing to re-add an item to a shopping cart once removed. In contrast, CCR employs a more intuitive approach to replication. It allows for straightforward updates and conflict resolution based on the current data state, enhancing clarity and usability compared to CRDTs. Furthermore, CCR addresses inefficiencies in messaging by developing a versatile protocol based on data stream confluence, thus providing a more efficient and practical solution for collaborative data sharing in distributed systems.
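The 2P-Set behavior the abstract cites as counterintuitive is standard CRDT background; a minimal sketch of it (not CCR itself) follows. Both the add set and the tombstone set only grow, so an element removed once can never reappear.

    # Minimal 2P-Set, illustrating the "cannot re-add once removed" behavior.
    class TwoPhaseSet:
        def __init__(self):
            self.added = set()
            self.removed = set()          # tombstones; grow-only

        def add(self, x):
            self.added.add(x)

        def remove(self, x):
            self.removed.add(x)

        def contains(self, x):
            return x in self.added and x not in self.removed

        def merge(self, other):
            """Join with another replica's state; both components are grow-only."""
            self.added |= other.added
            self.removed |= other.removed

    cart = TwoPhaseSet()
    cart.add("book")
    cart.remove("book")
    cart.add("book")                      # no visible effect: the tombstone wins
    print(cart.contains("book"))          # False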
{"title":"Coordination-free Collaborative Replication based on Operational Transformation","authors":"Masato Takeichi","doi":"arxiv-2409.09934","DOIUrl":"https://doi.org/arxiv-2409.09934","url":null,"abstract":"We introduce Coordination-free Collaborative Replication (CCR), a new method\u0000for maintaining consistency across replicas in distributed systems without\u0000requiring explicit coordination messages. CCR automates conflict resolution,\u0000contrasting with traditional Data-sharing systems that typically involve\u0000centralized update management or predefined consistency rules. Operational Transformation (OT), commonly used in collaborative editing,\u0000ensures consistency by transforming operations while maintaining document\u0000integrity across replicas. However, OT assumes server-based coordination, which\u0000is unsuitable for modern, decentralized Peer-to-Peer (P2P) systems. Conflict-free Replicated Data Type (CRDT), like Two-Phase Sets (2P-Sets),\u0000guarantees eventual consistency by allowing commutative and associative\u0000operations but often result in counterintuitive behaviors, such as failing to\u0000re-add an item to a shopping cart once removed. In contrast, CCR employs a more intuitive approach to replication. It allows\u0000for straightforward updates and conflict resolution based on the current data\u0000state, enhancing clarity and usability compared to CRDTs. Furthermore, CCR\u0000addresses inefficiencies in messaging by developing a versatile protocol based\u0000on data stream confluence, thus providing a more efficient and practical\u0000solution for collaborative data sharing in distributed systems.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142250147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Repr Types: One Abstraction to Rule Them All
Pub Date: 2024-09-12 DOI: arxiv-2409.07950
Viktor Palmkvist, Anders Ågren Thuné, Elias Castegren, David Broman
The choice of how to represent an abstract type can have a major impact on the performance of a program, yet mainstream compilers cannot perform optimizations at such a high level. When dealing with optimizations of data type representations, an important feature is having extensible representation-flexible data types: the ability for a programmer to add new abstract types and operations, as well as concrete implementations of these, without modifying the compiler or a previously defined library. Many research projects support high-level optimizations through static analysis, instrumentation, or benchmarking, but they are all restricted in at least one aspect of extensibility. This paper presents a new approach to representation-flexible data types without such restrictions and which still finds efficient optimizations. Our approach centers around a single built-in type $\texttt{repr}$ and function overloading with cost annotations for operation implementations. We evaluate our approach (i) by defining a universal collection type as a library, a single type for all conventional collections, and (ii) by designing and implementing a representation-flexible graph library. Programs using $\texttt{repr}$ types are typically faster than programs with idiomatic representation choices -- sometimes dramatically so -- as long as the compiler finds good implementations for all operations. Our compiler performs the analysis efficiently by finding optimized solutions quickly and by reusing previous results to avoid recomputations.
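As a rough illustration of the kind of decision cost-annotated overloading enables (the names, costs, and selection rule here are invented, not the paper's language or compiler), the sketch below picks a concrete collection representation by minimizing the summed cost annotations of the operations a program actually uses.

    # Hypothetical cost table: per-operation cost annotations per representation.
    IMPLS = {
        "array_list":  {"push_back": 1, "index": 1,   "member": 50},
        "hash_set":    {"push_back": 2, "index": 500, "member": 1},
        "linked_list": {"push_back": 1, "index": 100, "member": 50},
    }

    def choose_repr(ops_used):
        """Pick the representation with the lowest total cost for the used operations."""
        def total(impl):
            return sum(IMPLS[impl].get(op, 10**6) for op in ops_used)
        return min(IMPLS, key=total)

    print(choose_repr({"push_back", "index"}))    # array_list
    print(choose_repr({"push_back", "member"}))   # hash_set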
{"title":"Repr Types: One Abstraction to Rule Them All","authors":"Viktor Palmkvist, Anders Ågren Thuné, Elias Castegren, David Broman","doi":"arxiv-2409.07950","DOIUrl":"https://doi.org/arxiv-2409.07950","url":null,"abstract":"The choice of how to represent an abstract type can have a major impact on\u0000the performance of a program, yet mainstream compilers cannot perform\u0000optimizations at such a high level. When dealing with optimizations of data\u0000type representations, an important feature is having extensible\u0000representation-flexible data types; the ability for a programmer to add new\u0000abstract types and operations, as well as concrete implementations of these,\u0000without modifying the compiler or a previously defined library. Many research\u0000projects support high-level optimizations through static analysis,\u0000instrumentation, or benchmarking, but they are all restricted in at least one\u0000aspect of extensibility. This paper presents a new approach to representation-flexible data types\u0000without such restrictions and which still finds efficient optimizations. Our\u0000approach centers around a single built-in type $texttt{repr}$ and function\u0000overloading with cost annotations for operation implementations. We evaluate\u0000our approach (i) by defining a universal collection type as a library, a single\u0000type for all conventional collections, and (ii) by designing and implementing a\u0000representation-flexible graph library. Programs using $texttt{repr}$ types are\u0000typically faster than programs with idiomatic representation choices --\u0000sometimes dramatically so -- as long as the compiler finds good implementations\u0000for all operations. Our compiler performs the analysis efficiently by finding\u0000optimized solutions quickly and by reusing previous results to avoid\u0000recomputations.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
$μλεδ$-Calculus: A Self Optimizing Language that Seems to Exhibit Paradoxical Transfinite Cognitive Capabilities
Pub Date: 2024-09-09 DOI: arxiv-2409.05351
Ronie Salgado
Formal mathematics and computer science proofs are formalized using Hilbert-Russell-style logical systems, which are designed not to admit paradoxes and self-referencing reasoning. These logical systems are a natural way to describe and reason syntactically about tree-like data structures. We found that Wittgenstein-style logic is an alternate system whose propositional elements are directed graphs (points and arrows) capable of performing paraconsistent self-referencing reasoning without exploding. Imperative programming languages are typically compiled and optimized with SSA-based graphs whose most general representation is the Sea of Nodes. By restricting the Sea of Nodes to only the data dependency nodes, we attempted to establish syntactic-semantic correspondences with the lambda-calculus optimization. Surprisingly, when we tested our optimizer of the lambda calculus we performed a natural extension onto the $\mu\lambda$, which is always terminating. This always-terminating algorithm is an actual paradox whose resulting graphs are geometrical fractals, which seem to be isomorphic to the original source program. These fractal structures look like a perfect compressor of a program, which seems to resemble an actual physical black hole with a naked singularity. In addition to these surprising results, we propose two additional extensions to the calculus to model the cognitive process of self-aware beings: 1) $\epsilon$-expressions to model syntactic-to-semantic expansion as a general model of macros; 2) $\delta$-functional expressions as a minimal model of input and output. We provide a detailed step-by-step construction of our language interpreter, compiler and optimizer.
{"title":"$μλεδ$-Calculus: A Self Optimizing Language that Seems to Exhibit Paradoxical Transfinite Cognitive Capabilities","authors":"Ronie Salgado","doi":"arxiv-2409.05351","DOIUrl":"https://doi.org/arxiv-2409.05351","url":null,"abstract":"Formal mathematics and computer science proofs are formalized using\u0000Hilbert-Russell-style logical systems which are designed to not admit paradoxes\u0000and self-refencing reasoning. These logical systems are natural way to describe\u0000and reason syntactic about tree-like data structures. We found that\u0000Wittgenstein-style logic is an alternate system whose propositional elements\u0000are directed graphs (points and arrows) capable of performing paraconsistent\u0000self-referencing reasoning without exploding. Imperative programming language\u0000are typically compiled and optimized with SSA-based graphs whose most general\u0000representation is the Sea of Node. By restricting the Sea of Nodes to only the\u0000data dependencies nodes, we attempted to stablish syntactic-semantic\u0000correspondences with the Lambda-calculus optimization. Surprisingly, when we\u0000tested our optimizer of the lambda calculus we performed a natural extension\u0000onto the $mulambda$ which is always terminating. This always terminating\u0000algorithm is an actual paradox whose resulting graphs are geometrical fractals,\u0000which seem to be isomorphic to original source program. These fractal\u0000structures looks like a perfect compressor of a program, which seem to resemble\u0000an actual physical black-hole with a naked singularity. In addition to these\u0000surprising results, we propose two additional extensions to the calculus to\u0000model the cognitive process of self-aware beings: 1) $epsilon$-expressions to\u0000model syntactic to semantic expansion as a general model of macros; 2)\u0000$delta$-functional expressions as a minimal model of input and output. We\u0000provide detailed step-by-step construction of our language interpreter,\u0000compiler and optimizer.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Conversational Concurrency
Pub Date: 2024-09-06 DOI: arxiv-2409.04055
Tony Garnock-Jones
Concurrent computations resemble conversations. In a conversation, participants direct utterances at others and, as the conversation evolves, exploit the known common context to advance the conversation. Similarly, collaborating software components share knowledge with each other in order to make progress as a group towards a common goal. This dissertation studies concurrency from the perspective of cooperative knowledge-sharing, taking the conversational exchange of knowledge as a central concern in the design of concurrent programming languages. In doing so, it makes five contributions: 1. It develops the idea of a common dataspace as a medium for knowledge exchange among concurrent components, enabling a new approach to concurrent programming. While dataspaces loosely resemble both "fact spaces" from the world of Linda-style languages and Erlang's collaborative model, they significantly differ in many details. 2. It offers the first crisp formulation of cooperative, conversational knowledge-exchange as a mathematical model. 3. It describes two faithful implementations of the model for two quite different languages. 4. It proposes a completely novel suite of linguistic constructs for organizing the internal structure of individual actors in a conversational setting. The combination of dataspaces with these constructs is dubbed Syndicate. 5. It presents and analyzes evidence suggesting that the proposed techniques and constructs combine to simplify concurrent programming. The dataspace concept stands alone in its focus on representation and manipulation of conversational frames and conversational state and in its integral use of explicit epistemic knowledge. The design is particularly suited to integration of general-purpose I/O with otherwise-functional languages, but also applies to actor-like settings more generally.
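For readers new to the dataspace idea, here is a toy sketch of the shared-medium concept only (Syndicate's actual linguistic constructs are much richer and are not shown): components assert facts into a common space, and other components react to assertions matching their declared interests.

    # Toy dataspace: assertions are tuples; subscribers react to matching facts.
    class Dataspace:
        def __init__(self):
            self.facts = set()
            self.subscribers = []            # (predicate, callback) pairs

        def subscribe(self, predicate, callback):
            self.subscribers.append((predicate, callback))
            for fact in self.facts:          # catch up on already-shared knowledge
                if predicate(fact):
                    callback(fact)

        def assert_fact(self, fact):
            if fact not in self.facts:
                self.facts.add(fact)
                for predicate, callback in self.subscribers:
                    if predicate(fact):
                        callback(fact)

    space = Dataspace()
    space.subscribe(lambda f: f[0] == "temperature",
                    lambda f: print("thermostat sees", f))
    space.assert_fact(("temperature", "living-room", 21))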
{"title":"Conversational Concurrency","authors":"Tony Garnock-Jones","doi":"arxiv-2409.04055","DOIUrl":"https://doi.org/arxiv-2409.04055","url":null,"abstract":"Concurrent computations resemble conversations. In a conversation,\u0000participants direct utterances at others and, as the conversation evolves,\u0000exploit the known common context to advance the conversation. Similarly,\u0000collaborating software components share knowledge with each other in order to\u0000make progress as a group towards a common goal. This dissertation studies concurrency from the perspective of cooperative\u0000knowledge-sharing, taking the conversational exchange of knowledge as a central\u0000concern in the design of concurrent programming languages. In doing so, it\u0000makes five contributions: 1. It develops the idea of a common dataspace as a\u0000medium for knowledge exchange among concurrent components, enabling a new\u0000approach to concurrent programming. While dataspaces loosely resemble both\u0000\"fact spaces\" from the world of Linda-style languages and Erlang's\u0000collaborative model, they significantly differ in many details. 2. It offers\u0000the first crisp formulation of cooperative, conversational knowledge-exchange\u0000as a mathematical model. 3. It describes two faithful implementations of the\u0000model for two quite different languages. 4. It proposes a completely novel\u0000suite of linguistic constructs for organizing the internal structure of\u0000individual actors in a conversational setting. The combination of dataspaces\u0000with these constructs is dubbed Syndicate. 5. It presents and analyzes evidence\u0000suggesting that the proposed techniques and constructs combine to simplify\u0000concurrent programming. The dataspace concept stands alone in its focus on representation and\u0000manipulation of conversational frames and conversational state and in its\u0000integral use of explicit epistemic knowledge. The design is particularly suited\u0000to integration of general-purpose I/O with otherwise-functional languages, but\u0000also applies to actor-like settings more generally.","PeriodicalId":501197,"journal":{"name":"arXiv - CS - Programming Languages","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142179507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0