
Journal of the ACM: Latest Articles

Pure-Circuit: Tight Inapproximability for PPAD
IF 2.3 | Division 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-15 | DOI: 10.1145/3678166
Argyrios Deligkas, John Fearnley, Alexandros Hollender, Themistoklis Melissourgos
The current state-of-the-art methods for showing inapproximability in PPAD arise from the ε-Generalized-Circuit (ε-GCircuit) problem. Rubinstein (2018) showed that there exists a small unknown constant ε for which ε-GCircuit is PPAD-hard, and subsequent work has shown hardness results for other problems in PPAD by using ε-GCircuit as an intermediate problem. We introduce Pure-Circuit, a new intermediate problem for PPAD, which can be thought of as ε-GCircuit pushed to the limit as ε → 1, and we show that the problem is PPAD-complete. We then prove that ε-GCircuit is PPAD-hard for all ε < 1/10 by a reduction from Pure-Circuit, and thus strengthen all prior work that has used GCircuit as an intermediate problem from the existential-constant regime to the large-constant regime. We show that stronger inapproximability results can be derived by reducing directly from Pure-Circuit. In particular, we prove tight inapproximability results for computing approximate Nash equilibria and approximate well-supported Nash equilibria in graphical games, for finding approximate well-supported Nash equilibria in polymatrix games, and for finding approximate equilibria in threshold games.
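To make the flavor of such circuit problems concrete, here is a small sketch of gate constraints over a three-valued domain {0, 1, ⊥}, in the style of Pure-Circuit. The particular gate names and semantics below (NOT, PURIFY) are assumptions reconstructed for illustration, not statements taken from the abstract above.

```python
# Illustrative sketch only: gate semantics are assumed for exposition,
# not quoted from the paper. Wire values are 0, 1, or None (the
# "garbage" value ⊥); a gate is satisfied when its conditions hold.

BOT = None  # the impure value ⊥

def not_ok(x, y):
    """NOT gate: a pure input forces the negated pure output."""
    if x == 0:
        return y == 1
    if x == 1:
        return y == 0
    return True  # x = ⊥ leaves the output unconstrained

def purify_ok(x, y, z):
    """PURIFY gate: at least one output is pure; a pure input is copied."""
    if x in (0, 1):
        return y == x and z == x
    return y in (0, 1) or z in (0, 1)

# A solution assigns a value to every wire so that all gates are satisfied.
assert not_ok(1, 0) and not_ok(BOT, BOT)
assert purify_ok(BOT, 0, BOT) and not purify_ok(1, 1, 0)
```

The point of the ⊥ value is that hardness no longer hinges on a tiny approximation parameter ε: constraints only bite on pure inputs.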
Citations: 0
A Logical Approach to Type Soundness
IF 2.3 | Division 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-10 | DOI: 10.1145/3676954
Amin Timany, Robbert Krebbers, Derek Dreyer, Lars Birkedal
Type soundness, which asserts that “well-typed programs cannot go wrong”, is widely viewed as the canonical theorem one must prove to establish that a type system is doing its job. It is commonly proved using the so-called syntactic approach (aka progress and preservation), which has had a huge impact on the study and teaching of programming language foundations. Unfortunately, syntactic type soundness is a rather weak theorem. It only applies to programs that are well-typed in their entirety, and thus tells us nothing about the many programs written in “safe” languages that make use of “unsafe” language features. Even worse, it tells us nothing about whether type systems achieve one of their main goals: enforcement of data abstraction. One can easily define a language that enjoys syntactic type soundness and yet fails to support even the most basic modular reasoning principles for abstraction mechanisms like closures, objects, and abstract data types. Given these concerns, we argue that programming languages researchers should no longer be satisfied with proving syntactic type soundness, and should instead start proving semantic type soundness, a more useful theorem which captures more accurately what type systems are actually good for. Semantic type soundness is an old idea—Milner’s original account of type soundness from 1978 was semantic—but it fell out of favor in the 1990s due to limitations and complexities of denotational models. In the succeeding decades, thanks to a series of technical advances—notably, step-indexed Kripke logical relations constructed over operational semantics, and higher-order concurrent separation logic as consolidated in the Iris framework in Coq—we can now build (machine-checked) semantic soundness proofs at a much higher level of abstraction than was previously possible.
The resulting “logical” approach to semantic type soundness has already been employed to great effect in a number of recent papers, but those papers typically (a) concern advanced problem scenarios that complicate the presentation, (b) assume significant prior knowledge of the reader, and (c) suppress many details of the proofs. Here, we aim to provide a gentler, more pedagogically motivated introduction to logical type soundness, targeted at a broader audience that may or may not be familiar with logical relations and Iris. As a bonus, we also show how logical type soundness proofs can easily be generalized to establish an even stronger relational property—representation independence—for realistic type systems.
Citations: 5
Query lower bounds for log-concave sampling
IF 2.5 | Division 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-06-21 | DOI: 10.1145/3673651
Sinho Chewi, Jaume de Dios Pont, Jerry Li, Chen Lu, Shyam Narayanan

Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one. In this work, we establish the following query lower bounds: (1) sampling from strongly log-concave and log-smooth distributions in dimension d ≥ 2 requires Ω(log κ) queries, which is sharp in any constant dimension, and (2) sampling from Gaussians in dimension d (hence also from general log-concave and log-smooth distributions in dimension d) requires Ω̃(min(√κ log d, d)) queries, which is nearly sharp for the class of Gaussians. Here κ denotes the condition number of the target distribution. Our proofs rely upon (1) a multiscale construction inspired by work on the Kakeya conjecture in geometric measure theory, and (2) a novel reduction that demonstrates that block Krylov algorithms are optimal for this problem, as well as connections to lower bound techniques based on Wishart matrices developed in the matrix-vector query literature.

Citations: 0
Transaction Fee Mechanism Design
IF 2.5 | Division 2, Computer Science | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-06-20 | DOI: 10.1145/3674143
Tim Roughgarden

Demand for blockchains such as Bitcoin and Ethereum is far larger than supply, necessitating a mechanism that selects a subset of transactions to include “on-chain” from the pool of all pending transactions. This paper investigates the problem of designing a blockchain transaction fee mechanism through the lens of mechanism design. We introduce two new forms of incentive-compatibility that capture some of the idiosyncrasies of the blockchain setting, one (MMIC) that protects against deviations by profit-maximizing miners and one (OCA-proofness) that protects against off-chain collusion between miners and users.

This study is immediately applicable to the major change (made on August 5, 2021) to Ethereum’s transaction fee mechanism, based on a proposal called “EIP-1559.” Originally, Ethereum’s transaction fee mechanism was a first-price (pay-as-bid) auction. EIP-1559 suggested making several tightly coupled changes, including the introduction of variable-size blocks, a history-dependent reserve price, and the burning of a significant portion of the transaction fees. We prove that this new mechanism earns an impressive report card: it satisfies the MMIC and OCA-proofness conditions, and is also dominant-strategy incentive compatible (DSIC) except when there is a sudden demand spike. We also introduce an alternative design, the “tipless mechanism,” which offers an incomparable slate of incentive-compatibility guarantees—it is MMIC and DSIC, and OCA-proof unless in the midst of a demand spike.
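The history-dependent reserve price mentioned above is EIP-1559’s “base fee.” As a minimal sketch, here is the update rule from the EIP-1559 specification (the adjustment denominator of 8 and the numeric values are taken from the EIP, not from this abstract, and the EIP’s edge-case clamping is omitted):

```python
# Minimal sketch of EIP-1559's base-fee update rule, per the EIP-1559
# specification (parameters from the EIP, not from the abstract above;
# edge-case clamping omitted). The base fee is burned; it rises when
# blocks run above the gas target and falls when they run below it.

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def next_base_fee(base_fee: int, gas_used: int, gas_target: int) -> int:
    delta = base_fee * (gas_used - gas_target) // gas_target
    return base_fee + delta // BASE_FEE_MAX_CHANGE_DENOMINATOR

# A completely full block (twice the target) raises the fee by 1/8;
# an empty block lowers it by 1/8.
fee = 1_000_000_000
assert next_base_fee(fee, 30_000_000, 15_000_000) == 1_125_000_000
assert next_base_fee(fee, 0, 15_000_000) == 875_000_000
```

Because the per-block change is capped at a 1/8 factor, the base fee lags behind a sudden demand spike, which is exactly the regime where the abstract notes DSIC can fail.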

Citations: 0
Sparse Higher Order Čech Filtrations
IF 2.5 | Division 2, Computer Science | Q2 Computer Science | Pub Date: 2024-05-27 | DOI: 10.1145/3666085
Mickaël Buchet, Bianca B Dornelas, Michael Kerber

For a finite set of balls of radius r, the k-fold cover is the space covered by at least k balls. Fixing the ball centers and varying the radius, we obtain a nested sequence of spaces that is called the k-fold filtration of the centers. For k = 1, the construction is the union-of-balls filtration that is popular in topological data analysis. For larger k, it yields a cleaner shape reconstruction in the presence of outliers. We contribute a sparsification algorithm to approximate the topology of the k-fold filtration. Our method is a combination and adaptation of several techniques from the well-studied case k = 1, resulting in a sparsification of linear size that can be computed in expected near-linear time with respect to the number of input points. Our method also extends to the multicover bifiltration, composed of the k-fold filtrations for several values of k, with the same size and complexity bounds.
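The k-fold cover itself is a simple object: a point belongs to it exactly when at least k of the balls contain it. A minimal membership test (with toy centers and radii chosen here for illustration) makes the definition concrete:

```python
# Minimal sketch of k-fold cover membership: p lies in the k-fold cover
# of balls B(c, r) iff at least k centers are within distance r of p.
# (Centers, radius, and query points are toy data for illustration.)
from math import dist

def in_k_fold_cover(p, centers, r, k):
    """True iff p is covered by at least k balls of radius r."""
    return sum(dist(p, c) <= r for c in centers) >= k

centers = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5)]
# The origin lies in all three unit balls, hence in the 3-fold cover;
# a far-away point lies in none of them.
assert in_k_fold_cover((0.0, 0.0), centers, 1.0, 3)
assert not in_k_fold_cover((5.0, 5.0), centers, 1.0, 1)
```

Growing r only adds points to the k-fold cover, which is why the spaces nest into the k-fold filtration described above; for k = 1 the test degenerates to ordinary union-of-balls membership.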

Citations: 0
Killing a Vortex
IF 2.5 | Division 2, Computer Science | Q2 Computer Science | Pub Date: 2024-05-14 | DOI: 10.1145/3664648
Dimitrios Thilikos, Sebastian Wiederrecht

The Graph Minors Structure Theorem of Robertson and Seymour asserts that, for every graph H, every H-minor-free graph can be obtained by clique-sums of “almost embeddable” graphs. Here a graph is “almost embeddable” if it can be obtained from a graph of bounded Euler-genus by pasting graphs of bounded pathwidth in an “orderly fashion” into a bounded number of faces, called the vortices, and then adding a bounded number of additional vertices, called apices, with arbitrary neighborhoods. Our main result is a full classification of all graphs H for which the use of vortices in the theorem above can be avoided. To this end we identify a (parametric) graph 𝒮_t and prove that all 𝒮_t-minor-free graphs can be obtained by clique-sums of graphs embeddable in a surface of bounded Euler-genus after deleting a bounded number of vertices. We show that this result is tight in the sense that the appearance of vortices cannot be avoided for H-minor-free graphs, whenever H is not a minor of 𝒮_t for some t ∈ ℕ. Using our new structure theorem, we design an algorithm that, given an 𝒮_t-minor-free graph G, computes the generating function of all perfect matchings of G in polynomial time. Our results, combined with known complexity results, imply a complete characterization of minor-closed graph classes where the number of perfect matchings is polynomially computable: They are exactly those graph classes that do not contain every 𝒮_t as a minor. This provides a sharp complexity dichotomy for the problem of counting perfect matchings in minor-closed classes.
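To fix the object being counted: a perfect matching pairs up all vertices along edges. The brute-force counter below (exponential time, and in no way the paper’s polynomial-time algorithm for 𝒮_t-minor-free graphs) simply enumerates matchings with memoization:

```python
# Brute-force perfect-matching counter, for illustration only; the paper's
# algorithm for S_t-minor-free graphs runs in polynomial time, this does not.
from functools import lru_cache

def count_perfect_matchings(n, edges):
    """Count perfect matchings of a graph on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    @lru_cache(maxsize=None)
    def go(free):  # free: frozenset of still-unmatched vertices
        if not free:
            return 1
        u = min(free)  # match the smallest free vertex to each free neighbor
        return sum(go(free - {u, v}) for v in adj[u] if v in free)

    return go(frozenset(range(n)))

# The 4-cycle 0-1-2-3-0 has exactly two perfect matchings.
assert count_perfect_matchings(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 2
```

The generating function mentioned in the abstract refines this count (e.g., by edge weights); counting alone is already #P-hard in general, which is what makes the dichotomy sharp.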

Citations: 0
Separations in Proof Complexity and TFNP
IF 2.5 | Division 2, Computer Science | Q2 Computer Science | Pub Date: 2024-05-09 | DOI: 10.1145/3663758
Mika Göös, Alexandros Hollender, Siddhartha Jain, Gilbert Maystre, William Pires, Robert Robere, Ran Tao

It is well-known that Resolution proofs can be efficiently simulated by Sherali–Adams (SA) proofs. We show, however, that any such simulation needs to exploit huge coefficients: Resolution cannot be efficiently simulated by SA when the coefficients are written in unary. We also show that Reversible Resolution (a variant of MaxSAT Resolution) cannot be efficiently simulated by Nullstellensatz (NS).

These results have consequences for total NP search problems. First, we characterise the classes PPADS, PPAD, and SOPL by unary-SA, unary-NS, and Reversible Resolution, respectively. Second, we show that, relative to an oracle, PLS ⊈ PPP, SOPL ⊈ PPA, and EOPL ⊈ UEOPL. In particular, together with prior work, this gives a complete picture of the black-box relationships between all classical TFNP classes introduced in the 1990s.

Citations: 0
Smoothed Analysis of Information Spreading in Dynamic Networks
IF 2.5 | Division 2, Computer Science | Q2 Computer Science | Pub Date: 2024-05-01 | DOI: 10.1145/3661831
Michael Dinitz, Jeremy Fineman, Seth Gilbert, Calvin Newport

The best known solutions for k-message broadcast in dynamic networks of size n require Ω(nk) rounds. In this paper, we see if these bounds can be improved by smoothed analysis. To do so, we study perhaps the most natural randomized algorithm for disseminating tokens in this setting: at every time step, choose a token to broadcast randomly from the set of tokens you know. We show that with even a small amount of smoothing (i.e., one random edge added per round), this natural strategy solves k-message broadcast in Õ(n + k³) rounds, with high probability, beating the best known bounds for k = o(√n) and matching the Ω(n + k) lower bound for static networks for k = O(n^{1/3}) (ignoring logarithmic factors). In fact, the main result we show is even stronger and more general: given ℓ-smoothing (i.e., ℓ random edges added per round), this simple strategy terminates in O(k n^{2/3} log^{1/3}(n) ℓ^{−1/3}) rounds. We then prove this analysis close to tight with an almost-matching lower bound. To better understand the impact of smoothing on information spreading, we next turn our attention to static networks, proving a tight bound of Õ(k√n) rounds to solve k-message broadcast, which is better than what our strategy can achieve in the dynamic setting. This confirms the intuition that although smoothed analysis reduces the difficulties induced by changing graph structures, it does not eliminate them altogether. Finally, we apply tools developed to support our smoothed analysis to prove an optimal result for k-message broadcast in so-called well-mixed networks in the absence of smoothing. By comparing this result to an existing lower bound for well-mixed networks, we establish a formal separation between oblivious and strongly adaptive adversaries with respect to well-mixed token spreading, partially resolving an open question on the impact of adversary strength on the k-message broadcast problem.
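The natural randomized strategy analyzed above is easy to simulate. The sketch below implements one synchronous round of “broadcast a uniformly random known token” and runs it on a toy static path network (the graph sequence, token placement, and seed are illustrative assumptions, not the paper’s model of smoothing):

```python
# Minimal simulation of the natural randomized strategy from the abstract:
# each round, every node broadcasts one token chosen uniformly at random
# from the tokens it knows. (Network and token placement are toy data.)
import random

def broadcast_round(knows, edges, rng):
    """One synchronous round over the given edge set; returns new knowledge."""
    sent = {v: rng.choice(sorted(toks)) for v, toks in knows.items() if toks}
    new = {v: set(toks) for v, toks in knows.items()}
    for u, v in edges:
        if u in sent:
            new[v].add(sent[u])
        if v in sent:
            new[u].add(sent[v])
    return new

def rounds_to_spread(n, k, edge_fn, seed=0):
    """Rounds until all n nodes know all k tokens; edge_fn(r) is round r's edges."""
    rng = random.Random(seed)
    knows = {v: set(range(k)) if v == 0 else set() for v in range(n)}
    r = 0
    while any(len(toks) < k for toks in knows.values()):
        knows = broadcast_round(knows, edge_fn(r), rng)
        r += 1
    return r

# Toy static path on 6 nodes, 3 tokens starting at node 0: information
# needs at least 5 hops to reach the far endpoint.
path = [(i, i + 1) for i in range(5)]
assert rounds_to_spread(6, 3, lambda r: path) >= 5
```

Smoothing in the paper's sense would add ℓ random extra edges to `edge_fn(r)` each round; the analysis shows even ℓ = 1 already breaks the Ω(nk) barrier.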

Verifiable Quantum Advantage without Structure 可验证的无结构量子优势
IF 2.5 2区 计算机科学 Q2 Computer Science Pub Date : 2024-04-22 DOI: 10.1145/3658665
Takashi Yamakawa, Mark Zhandry

We show the following hold, unconditionally unless otherwise stated, relative to a random oracle:

• There are NP search problems solvable by quantum polynomial-time machines but not by classical probabilistic polynomial-time machines.

• There exist functions that are one-way, and even collision resistant, against classical adversaries but are easily inverted quantumly. Similar counterexamples exist for digital signatures and CPA-secure public key encryption (the latter requiring the assumption of a classically CPA-secure encryption scheme). Interestingly, the counterexample does not necessarily extend to other cryptographic objects such as PRGs.

• There are unconditional publicly verifiable proofs of quantumness with the minimal number of rounds of interaction: for uniform adversaries the proofs are non-interactive, whereas for non-uniform adversaries they are two-message public-coin.

• Our results do not appear to contradict the Aaronson–Ambanis conjecture. Assuming this conjecture, there exists publicly verifiable certifiable randomness, again with the minimal number of rounds of interaction.

By replacing the random oracle with a concrete cryptographic hash function such as SHA2, we obtain plausible Minicrypt instantiations of the above results. Previous analogous results all required substantial structure, either in terms of highly structured oracles and/or algebraic assumptions in Cryptomania and beyond.
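For contrast, the textbook example of a function that is classically one-way but quantumly easy to invert is algebraically structured: modular exponentiation, which Shor's discrete-logarithm algorithm inverts in quantum polynomial time. A minimal sketch follows (toy parameters, nowhere near cryptographic size); the point of the result above is precisely that, relative to a random oracle, no such algebraic structure is needed.

```python
def f(x: int, g: int = 5, p: int = 1000000007) -> int:
    """Discrete exponentiation g^x mod p: easy to compute in the forward
    direction, classically believed hard to invert for cryptographic-size
    primes p, but quantumly easy to invert via Shor's algorithm."""
    return pow(g, x, p)  # three-argument pow = fast modular exponentiation
```

Inverting f, i.e., recovering x from f(x), is the discrete-logarithm problem; a classical attacker must search, while a quantum attacker exploits the group structure that this function carries and a random oracle does not.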

The Bitcoin Backbone Protocol: Analysis and Applications 比特币骨干协议:分析与应用
IF 2.5 2区 计算机科学 Q2 Computer Science Pub Date : 2024-04-18 DOI: 10.1145/3653445
Juan A. Garay, Aggelos Kiayias, Nikos Leonardos

Bitcoin is the first and most popular decentralized cryptocurrency to date. In this work, we extract and analyze the core of the Bitcoin protocol, which we term the Bitcoin backbone, and prove three of its fundamental properties which we call Common Prefix, Chain Quality and Chain Growth in the static setting where the number of players remains fixed. Our proofs hinge on appropriate and novel assumptions on the “hashing power” of the protocol participants and their interplay with the protocol parameters and the time needed for reliable message passing between honest parties in terms of computational steps. A takeaway from our analysis is that, all else being equal, the protocol’s provable tolerance in terms of the number of adversarial parties (or, equivalently, their “hashing power” in our model) decreases as the duration of a message passing round increases.

Next, we propose and analyze applications that can be built “on top” of the backbone protocol, specifically focusing on Byzantine agreement (BA) and on the notion of a public transaction ledger. Regarding BA, we observe that a proposal due to Nakamoto falls short of solving it, and present a simple alternative which works assuming that the adversary’s hashing power is bounded by 1/3. The public transaction ledger captures the essence of Bitcoin’s operation as a cryptocurrency, in the sense that it guarantees the liveness and persistence of committed transactions. Based on this notion we describe and analyze the Bitcoin system as well as a more elaborate BA protocol and we prove them secure assuming the adversary’s hashing power is strictly less than 1/2. Instrumental to this latter result is a technique we call 2-for-1 proof-of-work (PoW) that has proven to be useful in the design of other PoW-based protocols.
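The “hashing power” in these statements counts how many proof-of-work attempts a party can make per round. A minimal SHA-256-based PoW sketch — ours, not the paper's formalization; `solve_pow`, the 8-byte nonce encoding, and the difficulty parameter are illustrative:

```python
import hashlib
import itertools

def solve_pow(data: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA256(data || nonce), read as a 256-bit
    integer, falls below a target with `difficulty_bits` leading zeros."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def verify_pow(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a solution costs one hash, regardless of how many
    attempts the prover spent finding it."""
    h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))
```

The 2-for-1 technique, roughly, amortizes work by letting a single hash evaluation count as an attempt at two independent puzzles, e.g., by testing the output against two separate targets.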
