
Proceedings of the forty-seventh annual ACM symposium on Theory of Computing: Latest Publications

Garbled RAM From One-Way Functions
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746593
Sanjam Garg, Steve Lu, R. Ostrovsky, Alessandra Scafuro
Yao's garbled circuit construction is a fundamental result in cryptography, and recent efficiency optimizations have brought it much closer to practice. However, these constructions work only for circuits, and garbling a RAM program involves the inefficient process of first converting it into a circuit. Towards the goal of avoiding this inefficiency, Lu and Ostrovsky (Eurocrypt 2013) introduced the notion of "garbled RAM" as a method to garble RAM programs directly. It can be seen as a RAM analogue of Yao's garbled circuits: the size of the garbled program, and the time it takes to create and evaluate it, are proportional only to the running time of the RAM program rather than to its circuit size. Known realizations of this primitive either rely on strong computational assumptions or do not achieve the aforementioned efficiency (Gentry, Halevi, Lu, Ostrovsky, Raykova and Wichs, EUROCRYPT 2014). In this paper we provide the first construction with strictly poly-logarithmic overhead in both space and time, based only on the minimal assumption that one-way functions exist. Our scheme allows for garbling multiple programs to be executed on a persistent database, and has the additional feature that program garbling is decoupled from database garbling. This allows a client to provide multiple garbled programs to the server as part of a pre-processing phase and only later determine the order and the inputs on which these programs are to be executed, doing work independent of the running times of the programs themselves.
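At the heart of any garbled-RAM construction is Yao-style gate garbling from symmetric primitives. As a minimal, self-contained illustration (not the paper's construction), the Python sketch below garbles a single AND gate, using a hash as a stand-in for the PRF one would build from a one-way function; the zero-tag trick used to identify the decryptable row is a textbook simplification of the point-and-permute optimization used in practice.

```python
import hashlib
import os
import random

LABEL = 16  # wire-label length in bytes

def pad(ka, kb):
    # Hash of both input labels; stands in for a PRF built from a one-way function.
    return hashlib.sha256(ka + kb).digest()  # 32 bytes = label + tag

def garble_and_gate():
    # Two random labels per wire: index 0 encodes bit 0, index 1 encodes bit 1.
    a = [os.urandom(LABEL) for _ in range(2)]
    b = [os.urandom(LABEL) for _ in range(2)]
    c = [os.urandom(LABEL) for _ in range(2)]
    table = []
    for x in (0, 1):
        for y in (0, 1):
            plain = c[x & y] + b"\x00" * LABEL  # output label plus all-zero tag
            table.append(bytes(p ^ q for p, q in zip(pad(a[x], b[y]), plain)))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return a, b, c, table

def eval_gate(ka, kb, table):
    # Holding one label per input wire, the evaluator can decrypt exactly one
    # row; the all-zero tag identifies it with overwhelming probability.
    for ct in table:
        pt = bytes(p ^ q for p, q in zip(pad(ka, kb), ct))
        if pt[LABEL:] == b"\x00" * LABEL:
            return pt[:LABEL]
    raise ValueError("no decryptable row")

a, b, c, table = garble_and_gate()
assert eval_gate(a[1], b[1], table) == c[1]  # 1 AND 1 -> label for 1
assert eval_gate(a[0], b[1], table) == c[0]  # 0 AND 1 -> label for 0
```

A garbled circuit applies this gate by gate, feeding output labels into subsequent gates; garbled RAM avoids paying for the full circuit by garbling memory accesses directly.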
Citations: 63
Approximating the Nash Social Welfare with Indivisible Items
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746589
R. Cole, Vasilis Gkatzelis
We study the problem of allocating a set of indivisible items among agents with additive valuations, with the goal of maximizing the geometric mean of the agents' valuations, i.e., the Nash social welfare. This problem is known to be NP-hard, and our main result is the first efficient constant-factor approximation algorithm for this objective. We first observe that the integrality gap of the natural fractional relaxation is exponential, so we propose a different fractional allocation which implies a tighter upper bound and, after appropriate rounding, yields a good integral allocation. An interesting contribution of this work is the fractional allocation that we use. The relaxation of our problem can be solved efficiently using the Eisenberg-Gale program, whose optimal solution can be interpreted as a market equilibrium with the dual variables playing the role of item prices. Using this market-based interpretation, we define an alternative equilibrium allocation where the amount of spending that can go into any given item is bounded, thus keeping the highly priced items under-allocated, and forcing the agents to spend on lower priced items. The resulting equilibrium prices reveal more information regarding how to assign items so as to obtain a good integral allocation.
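For intuition, the Eisenberg-Gale program mentioned above maximizes the sum of the logarithms of the agents' utilities, whose optimum is exactly the allocation maximizing the geometric mean. Below is a small sketch solving the standard fractional relaxation with an off-the-shelf solver on an assumed toy valuation matrix; the paper's spending-restricted equilibrium and the rounding step are not implemented here.

```python
import numpy as np
from scipy.optimize import minimize

# Toy additive valuations: v[i, j] = agent i's value for item j (assumed data).
v = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 2.0]])
n, m = v.shape

def neg_log_nash(x_flat):
    # Eisenberg-Gale objective: maximize sum_i log(u_i), i.e. the log of the
    # product of utilities, so the optimum maximizes the geometric mean.
    x = x_flat.reshape(n, m)
    u = (x * v).sum(axis=1)
    return -np.log(u + 1e-12).sum()

# Each item must be fully allocated: sum_i x[i, j] = 1 for every item j.
cons = [{"type": "eq",
         "fun": lambda x_flat, j=j: x_flat.reshape(n, m)[:, j].sum() - 1.0}
        for j in range(m)]
x0 = np.full(n * m, 1.0 / n)  # start from the equal split
res = minimize(neg_log_nash, x0, bounds=[(0.0, 1.0)] * (n * m),
               constraints=cons, method="SLSQP")
x_star = res.x.reshape(n, m)
print("fractional allocation:\n", x_star.round(3))
print("Nash welfare:", np.prod((x_star * v).sum(axis=1)) ** (1.0 / n))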
Citations: 146
Nearly-Linear Time Positive LP Solver with Faster Convergence Rate
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746573
Z. Zhu, L. Orecchia
Positive linear programs (LPs), also known as packing and covering linear programs, are an important class of problems that bridges computer science, operations research, and optimization. Efficient algorithms for solving such LPs have received significant attention in the past 20 years [2, 3, 4, 6, 7, 9, 11, 15, 16, 18, 19, 21, 24, 25, 26, 29, 30]. Unfortunately, all known nearly-linear time algorithms for producing (1+ε)-approximate solutions to positive LPs have a running time dependence that is at least proportional to ε^{-2}. This is also known as an O(1/√T) convergence rate and is particularly poor in many applications. In this paper, we leverage insights from optimization theory to break this longstanding barrier. Our algorithms solve the packing LP in time Õ(N ε^{-1}) and the covering LP in time Õ(N ε^{-1.5}). At a high level, they can be described as linear couplings of several first-order descent steps. This is the first application of our linear coupling technique (see [1]) to problems that are not amenable to black-box applications of known iterative algorithms in convex optimization. Our work also introduces a sequence of new techniques, including the stochastic and the non-symmetric execution of gradient truncation operations, which may be of independent interest.
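Concretely, a packing LP is max 1ᵀx subject to Ax ≤ 1, x ≥ 0 with A nonnegative, and its LP dual is the covering LP min 1ᵀy subject to Aᵀy ≥ 1, y ≥ 0. The toy sketch below only sets up this primal-dual pair and solves it exactly to make the problem class concrete; it does not implement the paper's Õ(N ε^{-1}) first-order method.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.random((4, 6))  # nonnegative constraint matrix (toy instance)

# Packing LP: max 1^T x  s.t.  A x <= 1, x >= 0  (linprog minimizes, so negate).
pack = linprog(c=-np.ones(6), A_ub=A, b_ub=np.ones(4), bounds=(0, None))

# Covering LP (the dual): min 1^T y  s.t.  A^T y >= 1, y >= 0.
cover = linprog(c=np.ones(4), A_ub=-A.T, b_ub=-np.ones(6), bounds=(0, None))

# By LP duality the two optimal values coincide.
print("packing value :", -pack.fun)
print("covering value:", cover.fun)
```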
Citations: 60
Leveled Fully Homomorphic Signatures from Standard Lattices
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746576
D. Wichs
In a homomorphic signature scheme, a user Alice signs some large dataset x using her secret signing key and uploads the signed data to an untrusted remote server. The server can then run some computation y=f(x) over the signed data and homomorphically derive a short signature σ_{f,y} certifying that y is the correct output of the computation f. Anybody can verify the tuple (f, y, σ_{f,y}) using Alice's public verification key and become convinced of this fact without having to retrieve the entire underlying data. In this work, we construct the first leveled fully homomorphic signature schemes that can evaluate arbitrary circuits over signed data. Only the maximal depth d of the circuits needs to be fixed a priori at setup, and the size of the evaluated signature grows polynomially in d, but is otherwise independent of the circuit size or the data size. Our solution is based on the (sub-exponential) hardness of the small integer solution (SIS) problem in standard lattices and satisfies full (adaptive) security. In the standard model, we get a scheme with large public parameters whose size exceeds the total size of a dataset. In the random-oracle model, we get a scheme with short public parameters. In both cases, the schemes can be used to sign many different datasets. The complexity of verifying a signature for a computation f is at least as large as that of computing f, but can be amortized when verifying the same computation over many different datasets. Furthermore, the signatures can be made context-hiding so as not to reveal anything about the data beyond the outcome of the computation. These results offer a significant improvement in capabilities and assumptions over the best prior homomorphic signature schemes, which were limited to evaluating polynomials of constant degree. As a building block of independent interest, we introduce a new notion called homomorphic trapdoor functions (HTDFs), which conceptually unites homomorphic encryption and signatures. We construct HTDFs by relying on the techniques developed by Gentry et al. (CRYPTO '13) and Boneh et al. (EUROCRYPT '14) in the contexts of fully homomorphic and attribute-based encryption.
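To pin down the underlying assumption: the SIS problem gives a uniformly random matrix A over Z_q and asks for a short nonzero integer vector u with Au = 0 (mod q). The toy search below uses deliberately tiny, insecure parameters chosen so that a {-1,0,1} solution is guaranteed to exist (2^m > q^n, so two {0,1} vectors must collide and their difference is a solution); the hardness of this problem at realistic sizes is what the scheme relies on.

```python
import itertools
import numpy as np

# Toy SIS instance. Parameters are deliberately tiny and insecure; they satisfy
# 2^m > q^n, which guarantees a {-1,0,1} solution exists by pigeonhole.
q, n, m = 13, 2, 8
rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(n, m))

for u in itertools.product((-1, 0, 1), repeat=m):
    u = np.array(u)
    if u.any() and not (A @ u % q).any():  # nonzero u with A u = 0 (mod q)
        print("matrix A:\n", A)
        print("short solution u =", u)
        break
```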
Citations: 206
Quantum Information Complexity
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746613
D. Touchette
We define a new notion of information cost for quantum protocols, and a corresponding notion of quantum information complexity for bipartite quantum tasks. These are the fully quantum generalizations of the analogous quantities for bipartite classical tasks that have found many applications recently, in particular for proving communication complexity lower bounds and direct sum theorems. Finding such a quantum generalization of information complexity was one of the open problems recently raised by Braverman (STOC'12). Previous attempts have been made to define such a quantity for quantum protocols, with particular applications in mind; our notion differs from these in many respects. First, it directly provides a lower bound on the quantum communication cost, independent of the number of rounds of the underlying protocol. Secondly, we provide an operational interpretation for quantum information complexity: we show that it is exactly equal to the amortized quantum communication complexity of a bipartite task on a given input. This generalizes a result of Braverman and Rao (FOCS'11) to quantum protocols. Along the way to proving this result, we strengthen the classical result in a bounded-round scenario, and also prove important structural properties of quantum information cost and complexity. We prove that using this definition leads to the first general direct sum theorem for bounded-round quantum communication complexity. Previous direct sum results in quantum communication complexity either held for some particular classes of functions or were general but only held for single-round protocols. We also discuss potential applications of the new quantities to obtain lower bounds on quantum communication complexity.
Citations: 44
High Parallel Complexity Graphs and Memory-Hard Functions
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746622
J. Alwen, Vladimir Serbinenko
We develop new theoretical tools for proving lower bounds on the (amortized) complexity of certain functions in models of parallel computation. We apply the tools to construct a class of functions with high amortized memory complexity in the parallel Random Oracle Model (pROM), a variant of the standard ROM allowing for batches of simultaneous queries. In particular, we obtain a new, more robust type of Memory-Hard Function (MHF), a security primitive which has recently been gaining acceptance in practice as an effective means of countering brute-force attacks on security-relevant functions. Along the way we also demonstrate an important shortcoming of previous definitions of MHFs and give a new definition addressing the problem. The tools we develop represent an adaptation of the powerful pebbling paradigm (initially introduced by Hewitt and Paterson [HP70] and Cook [Coo73]) to a simple and intuitive parallel setting. We define a simple pebbling game G_p over graphs which aims to abstract parallel computation in an intuitive way. As a conceptual contribution we define a measure of pebbling complexity for graphs called cumulative complexity (CC) and show how it overcomes a crucial shortcoming (in the parallel setting) exhibited by more traditional complexity measures used in the past. As a main technical contribution we give an explicit construction of a constant in-degree family of graphs whose CC in G_p approaches maximality to within a polylogarithmic factor for any graph of equal size (analogous to the graphs of Tarjan et al. [PTC76, LT82] for sequential pebbling games). Finally, for a given graph G and related function f_G, we derive a lower bound on the amortized memory complexity of f_G in the pROM in terms of the CC of G in the game G_p.
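To make cumulative complexity concrete: in a (sequential) pebbling, CC charges the number of pebbles on the graph at every step, so freeing pebbles early reduces the cost. The sketch below compares a keep-everything strategy against one that discards dead pebbles on a toy pyramid-like DAG; the paper's game G_p is parallel and its CC lower bounds are far subtler.

```python
# DAG as node -> list of predecessors; node 5 is the target (a toy 2-layer pyramid).
parents = {0: [], 1: [], 2: [], 3: [0, 1], 4: [1, 2], 5: [3, 4]}
topo = [0, 1, 2, 3, 4, 5]

def cumulative_complexity(discard_dead):
    pebbled, cc = set(), 0
    # How many still-unpebbled successors need each node as an input.
    remaining_uses = {v: sum(v in parents[w] for w in parents) for v in parents}
    for v in topo:
        assert all(p in pebbled for p in parents[v])  # legal pebbling move
        pebbled.add(v)
        if discard_dead:
            for p in parents[v]:
                remaining_uses[p] -= 1
                if remaining_uses[p] == 0:
                    pebbled.discard(p)  # drop pebbles no later node needs
        cc += len(pebbled)  # pay for the memory held at every step
    return cc

print("CC, keep everything:", cumulative_complexity(False))  # 21 on this DAG
print("CC, discard dead   :", cumulative_complexity(True))   # 12 on this DAG
```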
Citations: 84
Small Value Parallel Repetition for General Games
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746565
M. Braverman, A. Garg
We prove a parallel repetition theorem for general games with value tending to 0. Previously, Dinur and Steurer proved such a theorem for the special case of projection games. We use information-theoretic techniques in our proof. Our proofs also extend to the high-value regime (value close to 1) and provide alternate proofs for the parallel repetition theorems of Holenstein and Rao for general and projection games, respectively. We also extend the example of Feige and Verbitsky to show that the small-value parallel repetition bound we obtain is tight. Our techniques are elementary in that we only need to employ basic information theory and discrete probability in the small-value parallel repetition proof.
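The classical value of a small two-prover game, and of its two-fold repetition, can be checked by brute force, since deterministic strategies attain the classical value. The sketch below uses the CHSH-style predicate a ⊕ b = x ∧ y as an assumed toy example; running it illustrates that val(G⊗G) can strictly exceed val(G)^2, the phenomenon that makes parallel repetition theorems nontrivial.

```python
from itertools import product

Q = (0, 1)
A = (0, 1)

def V(x, y, a, b):
    # CHSH-style predicate: answers must XOR to the AND of the questions.
    return (a ^ b) == (x & y)

def value(questions, answers, pred):
    # Enumerate all deterministic strategies (question index -> answer);
    # questions are drawn uniformly and independently for the two provers.
    best = 0.0
    strategies = list(product(answers, repeat=len(questions)))
    for f in strategies:        # Alice's strategy
        for g in strategies:    # Bob's strategy
            wins = sum(pred(x, y, f[i], g[j])
                       for i, x in enumerate(questions)
                       for j, y in enumerate(questions))
            best = max(best, wins / len(questions) ** 2)
    return best

v1 = value(Q, A, V)

# Two-fold repetition: question/answer pairs, win iff both coordinates win.
Q2 = list(product(Q, repeat=2))
A2 = list(product(A, repeat=2))
def V2(x, y, a, b):
    return V(x[0], y[0], a[0], b[0]) and V(x[1], y[1], a[1], b[1])

v2 = value(Q2, A2, V2)  # 65536 strategy pairs; a few seconds in pure Python
print(f"val(G) = {v1},  val(G)^2 = {v1 ** 2},  val(G x G) = {v2}")
```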
Citations: 41
Improved Noisy Population Recovery, and Reverse Bonami-Beckner Inequality for Sparse Functions
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746540
Shachar Lovett, Jiapeng Zhang
The noisy population recovery problem is a basic statistical inference problem. Given an unknown distribution in {0,1}^n with support of size k, and given access only to noisy samples from it, where each bit is flipped independently with probability (1-μ)/2, estimate the original probabilities up to an additive error of ε. We give an algorithm which solves this problem in time polynomial in (k^{log log k}, n, 1/ε). This improves on the previous algorithm of Wigderson and Yehudayoff [FOCS 2012], which solves the problem in time polynomial in (k^{log k}, n, 1/ε). Our main technical contribution, which facilitates the algorithm, is a new reverse Bonami-Beckner inequality for the L1 norm of sparse functions.
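The noise model is simple to simulate, and the easiest special case, estimating single-coordinate marginals, already shows how μ enters: encoding bits as ±1, each noisy coordinate satisfies E[s'] = μ·E[s], so dividing the empirical mean by μ de-biases the estimate. A sketch on an assumed 2-sparse distribution follows; recovering the full support, the problem solved in the paper, is substantially harder.

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, samples = 6, 0.4, 200_000

# Unknown k-sparse distribution: support of size k = 2, probabilities 0.7 / 0.3.
support = np.array([[1, 0, 1, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1]])
probs = np.array([0.7, 0.3])

idx = rng.choice(len(support), size=samples, p=probs)
clean = support[idx]
flips = rng.random((samples, n)) < (1 - mu) / 2  # each bit flips independently
noisy = clean ^ flips

s = 2.0 * noisy - 1.0                    # encode {0,1} as {-1,+1}
est = (s.mean(axis=0) / mu + 1.0) / 2.0  # invert E[s'] = mu * E[s]
print("true marginals:", probs @ support)
print("estimated     :", est.round(3))
```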
Citations: 19
Succinct Garbling and Indistinguishability Obfuscation for RAM Programs
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746621
R. Canetti, Justin Holmgren, Abhishek Jain, V. Vaikuntanathan
We show how to construct succinct Indistinguishability Obfuscation (IO) schemes for RAM programs. That is, given a RAM program whose computation requires space S and time T, we generate a RAM program with size and space requirements of Õ(S) and runtime Õ(T). The construction uses non-succinct IO (i.e., IO for circuits) and injective one-way functions, both with sub-exponential security. A main component in our scheme is a succinct garbling scheme for RAM programs. Our garbling scheme has the same size, space and runtime parameters as above, and requires only polynomial security of the underlying primitives. This scheme has other qualitatively new applications, such as publicly verifiable succinct non-interactive delegation of computation and succinct functional encryption.
Citations: 67
Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates
Pub Date : 2015-06-14 DOI: 10.1145/2746539.2746610
Z. Zhu, Zhenyu A. Liao, L. Orecchia
In this paper, we provide a novel construction of the linear-sized spectral sparsifiers of Batson, Spielman and Srivastava [11]. While previous constructions required Ω(n^4) running time [11, 45], our sparsification routine can be implemented in almost-quadratic running time O(n^{2+ε}). The fundamental conceptual novelty of our work is the leveraging of a strong connection between sparsification and a regret minimization problem over density matrices. This connection was known to provide an interpretation of the randomized sparsifiers of Spielman and Srivastava [39] via the application of matrix multiplicative weight updates (MWU) [17, 43]. In this paper, we explain how matrix MWU naturally arises as an instance of the Follow-the-Regularized-Leader framework and generalize this approach to yield a larger class of updates. This new class allows us to accelerate the construction of linear-sized spectral sparsifiers, and gives novel insights into the motivation behind Batson, Spielman and Srivastava [11].
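For contrast with the new deterministic routine, the randomized sparsifier of Spielman and Srivastava [39] samples edges with probability proportional to their leverage scores (weight times effective resistance) and reweights to keep the Laplacian unbiased. A dense-linear-algebra sketch, adequate only for small graphs, follows:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def spectral_sparsify(n, edges, q, seed=0):
    # Spielman-Srivastava style: sample q edges w.p. proportional to w_e * R_e.
    Lplus = np.linalg.pinv(laplacian(n, edges))
    # Effective resistance of edge (u, v): b_e^T L^+ b_e.
    lev = np.array([w * (Lplus[u, u] + Lplus[v, v] - 2 * Lplus[u, v])
                    for u, v, w in edges])
    p = lev / lev.sum()
    rng = np.random.default_rng(seed)
    new_w = {}
    for e in rng.choice(len(edges), size=q, p=p):  # sample with replacement
        u, v, w = edges[e]
        new_w[(u, v)] = new_w.get((u, v), 0.0) + w / (q * p[e])  # unbiased reweighting
    return [(u, v, w) for (u, v), w in new_w.items()]

# Toy instance: a weighted complete graph on 8 vertices, sparsified via 16 samples.
n = 8
edges = [(u, v, 1.0) for u in range(n) for v in range(u + 1, n)]
sparse = spectral_sparsify(n, edges, q=16)
print(len(sparse), "distinct edges kept of", len(edges))
```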
Citations: 114