
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing: Latest Publications

Average-case fine-grained hardness
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055466
Marshall Ball, Alon Rosen, Manuel Sabin, Prashant Nalini Vasudevan
We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions are known from before (Goldmann et al., IPL '94), but these have been canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO '03). We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems - namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) - in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^(2-o(1)) time to compute on average, and that of APSP gives us a function that requires n^(3-o(1)) time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions.
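To make the starting point of these reductions concrete, here is the naive quadratic-time algorithm for Orthogonal Vectors; a minimal sketch (the function name is ours, not the paper's):

```python
def has_orthogonal_pair(vecs):
    """Brute-force Orthogonal Vectors: given n boolean vectors of
    dimension d, decide whether some pair has inner product 0.
    Runs in O(n^2 * d) time; the OV conjecture asserts that no
    algorithm is substantially faster for d = polylog(n)."""
    n = len(vecs)
    for i in range(n):
        for j in range(i + 1, n):
            if all(a * b == 0 for a, b in zip(vecs[i], vecs[j])):
                return True
    return False

# (1,0,1) and (0,1,0) are orthogonal, so the first instance is a yes-instance.
print(has_orthogonal_pair([(1, 0, 1), (0, 1, 0), (1, 1, 1)]))  # True
print(has_orthogonal_pair([(1, 1), (1, 1), (1, 0)]))           # False
```

The n^(2-o(1)) average-case bound quoted above says that, under the OV conjecture, even this quadratic running time is essentially unavoidable for the constructed functions.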
Cited by: 52
The next 700 network programming languages (invited talk)
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3081042
Nate Foster
Specification and verification of computer networks has become a reality in recent years, with the emergence of domain-specific programming languages and automated reasoning tools. But the design of these frameworks has been largely ad hoc, driven more by the needs of applications and the capabilities of hardware than by any foundational principles. This talk will present NetKAT, a language for programming networks based on a well-studied mathematical foundation: regular languages and finite automata. The talk will describe the design of the language, discuss its semantic underpinnings, and present highlights from ongoing work extending the language with stateful and probabilistic features.
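The mathematical foundation named in the abstract, regular languages and finite automata, can be made concrete with the simplest object in it; a self-contained sketch (names are illustrative and unrelated to NetKAT's actual API):

```python
def dfa_accepts(delta, start, accepting, word):
    """Simulate a deterministic finite automaton, the kind of object
    underlying NetKAT's semantics. delta maps (state, symbol) -> state."""
    state = start
    for sym in word:
        state = delta[(state, sym)]
    return state in accepting

# DFA over {'a', 'b'} accepting strings with an even number of 'a's.
delta = {('even', 'a'): 'odd', ('even', 'b'): 'even',
         ('odd', 'a'): 'even', ('odd', 'b'): 'odd'}
print(dfa_accepts(delta, 'even', {'even'}, 'abba'))  # True: two a's
print(dfa_accepts(delta, 'even', {'even'}, 'ab'))    # False: one a
```

NetKAT programs denote regular sets of packet histories, which is what lets automata-theoretic decision procedures apply to network verification.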
Cited by: 0
Towards optimal two-source extractors and Ramsey graphs
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055429
Gil Cohen
The main contribution of this work is a construction of a two-source extractor for quasi-logarithmic min-entropy. That is, an extractor for two independent n-bit sources with min-entropy O(log n), which is optimal up to the poly(log log n) factor. A strong motivation for constructing two-source extractors for low entropy is Ramsey graph constructions. Our two-source extractor readily yields a (log n)^((log log log n)^O(1))-Ramsey graph on n vertices. Although there has been exciting progress towards constructing O(log n)-Ramsey graphs in recent years, a line of work that this paper contributes to, it is not clear whether current techniques can be pushed so as to match this bound. Interestingly, however, as an artifact of current techniques, one obtains strongly explicit Ramsey graphs, namely, graphs on n vertices where the existence of an edge connecting any pair of vertices can be determined in time poly(log n). On top of our strongly explicit construction, in this work, we consider algorithms that output the entire graph in poly(n) time, and make progress towards matching the desired O(log n) bound in this setting. In our opinion, this is a natural setting in which Ramsey graph constructions should be studied. The main technical novelty of this work lies in an improved construction of an independence-preserving merger (IPM), a variant of the well-studied notion of a merger, which was recently introduced by Cohen and Schulman. Our construction is based on a new connection to correlation breakers with advice. In fact, our IPM satisfies a stronger and more natural property than that required by the original definition, and we believe it may find further applications.
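For readers unfamiliar with the object being constructed, the classical inner-product extractor illustrates what a two-source extractor is; a textbook warm-up, far weaker than this paper's construction:

```python
def inner_product_extractor(x, y):
    """Textbook two-source extractor (the classical warm-up, not this
    paper's construction): the inner product mod 2 of two n-bit strings.
    It outputs one nearly-uniform bit whenever the two sources are
    independent and each has min-entropy somewhat above n/2."""
    return bin(x & y).count("1") % 2

print(inner_product_extractor(0b1011, 0b1101))  # <x, y> mod 2 = 0 here
print(inner_product_extractor(0b1, 0b1))        # 1
```

The open problem the paper attacks is driving the min-entropy requirement down from roughly n/2 (as in this warm-up) to nearly logarithmic.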
Cited by: 19
On independent sets, 2-to-2 games, and Grassmann graphs
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055432
Subhash Khot, Dor Minzer, S. Safra
We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense. A reduction that is sound in this non-standard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size (1 - 1/√2)n - o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2 - o(1).
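The distinguishing task in the hardness statement can be made concrete with a brute-force baseline; this exhaustive search is purely illustrative (exponential time, our own helper name):

```python
from itertools import combinations

def max_independent_set_size(n, edges):
    """Exhaustive search for the largest independent set in an n-vertex
    graph. Exponential time; shown only to make concrete the gap problem
    (independent set of size roughly (1 - 1/sqrt(2))n versus o(n)) that
    the reduction argues is NP-hard to decide."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return size
    return 0

# A 4-cycle: its largest independent set has size 2 (opposite corners).
print(max_independent_set_size(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```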
Cited by: 67
Equivocating Yao: constant-round adaptively secure multiparty computation in the plain model
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055495
R. Canetti, Oxana Poburinnaya, Muthuramakrishnan Venkitasubramaniam
Yao's circuit garbling scheme is one of the basic building blocks of cryptographic protocol design. Originally designed to enable two-message, two-party secure computation, the scheme has been extended in many ways and has innumerable applications. Still, a basic question has remained open throughout the years: Can the scheme be extended to guarantee security in the face of an adversary that corrupts both parties, adaptively, as the computation proceeds? We provide a positive answer to this question. We define a new type of encryption, called functionally equivocal encryption (FEE), and show that when Yao's scheme is implemented with an FEE as the underlying encryption mechanism, it becomes secure against such adaptive adversaries. We then show how to implement FEE from any one way function. Combining our scheme with non-committing encryption, we obtain the first two-message, two-party computation protocol, and the first constant-rounds multiparty computation protocol, in the plain model, that are secure against semi-honest adversaries who can adaptively corrupt all parties. A number of extensions and applications are described within.
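For context, the gate-by-gate idea behind Yao's scheme can be sketched for a single AND gate; this is a toy teaching sketch using a hash as the encryption, not the FEE-based construction of this paper:

```python
import hashlib
import os
import random

def H(k1, k2):
    # Hash of two wire labels, used here as a stand-in for encryption.
    return hashlib.sha256(k1 + k2).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Toy garbling of one AND gate: each wire a, b, c gets two random
    labels (one per bit), and each of the four ciphertexts encrypts the
    output-wire label selected by AND of the input bits."""
    labels = {w: (os.urandom(32), os.urandom(32)) for w in "abc"}
    table = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            out_label = labels["c"][bit_a & bit_b]
            table.append(xor(H(labels["a"][bit_a], labels["b"][bit_b]),
                             out_label))
    random.shuffle(table)
    return labels, table

def evaluate(table, ka, kb):
    """Decrypt every row; exactly one yields the correct output label.
    This toy scheme has no integrity check, so we return all candidates;
    a real scheme tags the valid row."""
    return [xor(H(ka, kb), row) for row in table]

labels, table = garble_and_gate()
cands = evaluate(table, labels["a"][1], labels["b"][1])
print(labels["c"][1] in cands)  # True: inputs (1,1) recover the 1-label of c
```

The adaptive-security question the paper answers concerns exactly such tables: an adversary who corrupts parties mid-protocol sees the garbled tables and must be fooled into believing they are consistent with any plausible input, which is what the new equivocal encryption enables.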
Cited by: 14
A strongly polynomial algorithm for bimodular integer linear programming
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055473
S. Artmann, R. Weismantel, R. Zenklusen
We present a strongly polynomial algorithm to solve integer programs of the form max{c^T x : Ax ≤ b, x ∈ ℤ^n}, for A ∈ ℤ^(m×n) with rank(A) = n, b ∈ ℤ^m, c ∈ ℤ^n, and where all determinants of (n×n)-sub-matrices of A are bounded by 2 in absolute value. In particular, this implies that integer programs max{c^T x : Qx ≤ b, x ∈ ℤ^n_≥0}, where Q ∈ ℤ^(m×n) has the property that all subdeterminants are bounded by 2 in absolute value, can be solved in strongly polynomial time. We thus obtain an extension of the well-known result that integer programs with constraint matrices that are totally unimodular are solvable in strongly polynomial time.
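The bimodularity condition on A is easy to state operationally; a brute-force check on a small example (our own helper names, exponential time in general, purely illustrative):

```python
from itertools import combinations

def det(M):
    """Integer determinant by Laplace expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_bimodular(A, n):
    """Check the paper's hypothesis on an m x n integer matrix A:
    every n x n submatrix has determinant of absolute value at most 2."""
    return all(abs(det([A[i] for i in rows])) <= 2
               for rows in combinations(range(len(A)), n))

A = [[1, 0], [0, 1], [1, 2], [1, 1]]
print(is_bimodular(A, 2))                      # True: all 2x2 subdets in {-2,...,2}
print(is_bimodular([[1, 0], [0, 1], [3, 1]], 2))  # False: |det([[0,1],[3,1]])| = 3
```

Totally unimodular matrices are the special case where every subdeterminant lies in {-1, 0, 1}; the paper relaxes the bound to 2.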
Cited by: 60
Hardness amplification for entangled games via anchoring
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055433
Mohammad Bavarian, Thomas Vidick, H. Yuen
We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate - in other words, does an analogue of Raz's parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture.
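A minimal sketch of what an anchoring-style transformation looks like at the level of question distributions; this is our illustrative reading of the construction, with hypothetical names, where alpha is the anchoring probability:

```python
import random

def anchor_questions(question_sampler, alpha):
    """Anchoring, sketched on the question distribution: each player's
    question is independently replaced by a fixed anchor symbol '_|_'
    with probability alpha. In the anchored game, rounds where a player
    holds the anchor are won automatically, which is what keeps the
    transformation completeness-preserving."""
    def sample():
        qa, qb = question_sampler()
        if random.random() < alpha:
            qa = "_|_"
        if random.random() < alpha:
            qb = "_|_"
        return qa, qb
    return sample

base = lambda: ("q_alice", "q_bob")
print(anchor_questions(base, 0.0)())  # alpha = 0 leaves the game unchanged
```

The repetition theorem is then proved for games of this anchored form, and since any game can be anchored first, hardness amplification follows for general entangled games.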
Cited by: 17
Bernoulli factories and black-box reductions in mechanism design
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055492
S. Dughmi, Jason D. Hartline, Robert D. Kleinberg, Rad Niazadeh
We provide a polynomial-time reduction from Bayesian incentive-compatible mechanism design to Bayesian algorithm design for welfare maximization problems. Unlike prior results, our reduction achieves exact incentive compatibility for problems with multi-dimensional and continuous type spaces. The key technical barrier preventing exact incentive compatibility in prior black-box reductions is that repairing violations of incentive constraints requires understanding the distribution of the mechanism's output, which is typically #P-hard to compute. Reductions that instead estimate the output distribution by sampling inevitably suffer from sampling error, which typically precludes exact incentive compatibility. We overcome this barrier by employing and generalizing the computational model in the literature on "Bernoulli Factories". In a Bernoulli factory problem, one is given a function mapping the bias of an 'input coin' to that of an 'output coin', and the challenge is to efficiently simulate the output coin given only sample access to the input coin. Consider a generalization which we call the "expectations from samples" computational model, in which a problem instance is specified by a function mapping the expected values of a set of input distributions to a distribution over outcomes. The challenge is to give a polynomial time algorithm that exactly samples from the distribution over outcomes given only sample access to the input distributions. In this model we give a polynomial time algorithm for the function given by "exponential weights": expected values of the input distributions correspond to the weights of alternatives and we wish to select an alternative with probability proportional to its weight. This algorithm is the key ingredient in designing an incentive compatible mechanism for bipartite matching, which can be used to make the approximately incentive compatible reduction of Hartline-Malekian-Kleinberg [2015] exactly incentive compatible.
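The flavor of the Bernoulli-factory model can be seen in the standard warm-up of squaring a coin's bias without ever estimating it; a classical example, not the paper's exponential-weights algorithm:

```python
import random

def make_coin(p):
    """Sample access to an 'input coin' of unknown bias p."""
    return lambda: random.random() < p

def squared_coin(coin):
    """Warm-up Bernoulli factory: given only sample access to a p-coin,
    output a coin of bias exactly p^2 by ANDing two independent flips.
    No estimate of p is ever formed, so there is no sampling error --
    the property the paper needs for exact incentive compatibility."""
    return coin() and coin()

coin = make_coin(0.3)
flips = sum(squared_coin(coin) for _ in range(200_000))
print(abs(flips / 200_000 - 0.09) < 0.01)  # empirically near 0.3^2 = 0.09
```

The paper's harder task is the same in spirit: simulate the "exponential weights" selection rule exactly, given only samples from the input distributions.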
Cited by: 14
Low rank approximation with entrywise l1-norm error
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055431
Zhao Song, David P. Woodruff, Peilin Zhong
We study the ℓ1-low rank approximation problem, where for a given n × d matrix A and approximation factor α ≥ 1, the goal is to output a rank-k matrix Â for which ‖A - Â‖1 ≤ α · min over rank-k matrices A′ of ‖A - A′‖1, where for an n × d matrix C, we let ‖C‖1 = ∑_{i=1}^n ∑_{j=1}^d |C_{i,j}|. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis, and a number of heuristics have been proposed. It was asked in multiple places if there are any approximation algorithms. We give the first provable approximation algorithms for ℓ1-low rank approximation, showing that it is possible to achieve approximation factor α = (log d) · poly(k) in nnz(A) + (n+d) poly(k) time, where nnz(A) denotes the number of non-zero entries of A. If k is constant, we further improve the approximation ratio to O(1) with a poly(nd)-time algorithm. Under the Exponential Time Hypothesis, we show there is no poly(nd)-time algorithm achieving a (1 + 1/log^(1+γ)(nd))-approximation, for γ > 0 an arbitrarily small constant, even when k = 1. We give a number of additional results for ℓ1-low rank approximation: nearly tight upper and lower bounds for column subset selection, CUR decompositions, extensions to low rank approximation with respect to ℓp-norms for 1 ≤ p < 2 and earthmover distance, low-communication distributed protocols and low-memory streaming algorithms, algorithms with limited randomness, and bicriteria algorithms. We also give a preliminary empirical evaluation.
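The robustness claim about the entrywise ℓ1 measure can be seen on a tiny residual matrix; an illustrative sketch with made-up numbers:

```python
def l1_norm(M):
    """Entrywise l1-norm: sum of |M_ij| over all entries
    (the error measure studied in the paper)."""
    return sum(abs(x) for row in M for x in row)

def frob_norm(M):
    """Frobenius norm: square root of the sum of squared entries."""
    return sum(x * x for row in M for x in row) ** 0.5

# R: residual of a fit that is exact except on one corrupted (outlier) entry.
R = [[0, 0, 0], [0, 100, 0], [0, 0, 0]]
# S: residual of a fit that is mildly wrong everywhere.
S = [[12, 12, 12], [12, 12, 12], [12, 12, 12]]

print(l1_norm(R) < l1_norm(S))      # True: under l1, ignoring the outlier is cheaper
print(frob_norm(R) > frob_norm(S))  # True: Frobenius is dominated by the one outlier
```

An ℓ1 minimizer therefore tends to fit the bulk of the entries and leave the outlier's residual large, whereas a Frobenius minimizer distorts the whole fit to chase it.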
我们研究了1-低秩近似问题,其中对于给定的n x d矩阵a和近似因子α≤1,目标是输出一个秩-k矩阵Â,其中‖a -Â‖1≤α·最小秩-k矩阵a′‖a - a′‖1,其中对于n x d矩阵C,我们令‖C‖1 =∑i=1n∑j=1d |Ci,j|。在存在异常值的情况下,这种误差测量已知比Frobenius范数更稳健,并且在对噪声的高斯假设可能不适用的模型中表示。Gillis和Vavasis证明了这个问题是np困难的,并提出了许多启发式方法。很多地方都问过是否有近似算法。我们给出了第一个可证明的l_1 -低秩近似的逼近算法,表明可以实现近似因子α = (logd) #183;poly(k) in nnz(A) + (n+d) poly(k) time,其中nnz(A)表示A的非零条目数。如果k为常数,我们进一步使用poly(nd) time算法将近似比提高到O(1)。在指数时间假设下,我们证明没有多(nd)时间算法实现(1+1/log1+γ(nd))-近似,即使当k = 1时,γ > 0是一个任意小的常数。我们给出了一些额外的结果:列子集选择的近紧上界和下界,CUR分解,关于1≤p < 2和土方距离的低秩近似的扩展,低通信分布式协议和低内存流算法,有限随机性算法和双准则算法。并给出了初步的实证评价。
Zhao Song, David P. Woodruff, Peilin Zhong. "Low rank approximation with entrywise l1-norm error." Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017-06-19. DOI: 10.1145/3055399.3055431.
Citations: 96
Algorithms for stable and perturbation-resilient problems
Pub Date : 2017-06-19 DOI: 10.1145/3055399.3055487
Haris Angelidakis, K. Makarychev, Yury Makarychev
We study the notions of stability and perturbation resilience introduced by Bilu and Linial (2010) and by Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation-resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2 ≈ 2.41)-perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation-resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n^{1−ε}-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP).
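The perturbation-resilience definition above can be probed on a toy instance. The sketch below is illustrative only (function names and the instance are hypothetical, and it is a brute-force check, not the paper's algorithm): it builds two well-separated clusters on a line, finds the exact 2-center optimum, then multiplies each pairwise distance by a random factor in [1, α] and checks that the optimum does not move. Sampling can refute α-perturbation resilience but cannot certify it.

```python
import itertools
import numpy as np

def k_center_cost(D, centers):
    # Max over all points of the distance to the nearest chosen center.
    return max(min(D[i][c] for c in centers) for i in range(len(D)))

def best_k_center(D, k):
    # Brute-force exact k-center, with centers chosen among the points.
    return min(itertools.combinations(range(len(D)), k),
               key=lambda cs: k_center_cost(D, cs))

def looks_alpha_resilient(D, k, alpha, trials=200, seed=0):
    # Sample entrywise perturbations of the metric by factors in [1, alpha]
    # and check whether the optimal solution ever changes.
    rng = np.random.default_rng(seed)
    opt = set(best_k_center(D, k))
    n = len(D)
    for _ in range(trials):
        F = np.triu(rng.uniform(1.0, alpha, size=(n, n)), 1)
        F = F + F.T                      # symmetric factors, zero diagonal
        if set(best_k_center(D * F, k)) != opt:
            return False
    return True

# Two well-separated clusters on a line; the 2-center optimum {1, 101} is unique.
x = np.array([0.0, 1.0, 3.0, 100.0, 101.0, 103.0])
D = np.abs(np.subtract.outer(x, x))
print(best_k_center(D, 2), looks_alpha_resilient(D, 2, alpha=1.4))
```

On this instance the optimum has cost 2 while every other solution costs at least 3, so perturbing distances by factors up to 1.4 can never make another solution cheaper, matching the definition of resilience for α below 1.5.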
Haris Angelidakis, K. Makarychev, Yury Makarychev. "Algorithms for stable and perturbation-resilient problems." Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, 2017-06-19. DOI: 10.1145/3055399.3055487.
Citations: 44
Journal
Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing