Garbled RAM From One-Way Functions. Sanjam Garg, Steve Lu, R. Ostrovsky, Alessandra Scafuro. DOI: 10.1145/2746539.2746593.
Yao's garbled circuit construction is a fundamental result in cryptography, and recent efficiency optimizations have brought it much closer to practice. However, these constructions work only for circuits, and garbling a RAM program involves the inefficient process of first converting it into a circuit. Toward the goal of avoiding this inefficiency, Lu and Ostrovsky (EUROCRYPT 2013) introduced the notion of "garbled RAM" as a method to garble RAM programs directly. It can be seen as a RAM analogue of Yao's garbled circuits, in which the size of the garbled program, and the time it takes to create and evaluate it, are proportional only to the running time of the RAM program rather than to its circuit size. Known realizations of this primitive either rely on strong computational assumptions or do not achieve the aforementioned efficiency (Gentry, Halevi, Lu, Ostrovsky, Raykova and Wichs, EUROCRYPT 2014). In this paper we provide the first construction with strictly poly-logarithmic overhead in both space and time based only on the minimal assumption that one-way functions exist. Our scheme allows for garbling multiple programs executed on a persistent database, and has the additional feature that program garbling is decoupled from database garbling. This allows a client to provide multiple garbled programs to the server as part of a pre-processing phase and only later determine the order and the inputs on which these programs are to be executed, doing work independent of the running times of the programs themselves.
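To make the efficiency claim concrete, the display below restates it schematically for a database D and a program with running time T; the security parameter κ and the exact shape of the polylogarithmic factors are notational assumptions for illustration, not quantities quoted from the paper.
% Schematic garbled-RAM efficiency: the database is garbled once with
% quasi-linear work, and each garbled program scales with its RAM running
% time T rather than with its circuit size.
\[
  |\tilde{D}| \;=\; |D| \cdot \mathrm{poly}(\log |D|, \kappa),
  \qquad
  |\tilde{P}|,\; T_{\mathrm{eval}} \;=\; T \cdot \mathrm{poly}(\log |D|, \log T, \kappa).
\]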
{"title":"Garbled RAM From One-Way Functions","authors":"Sanjam Garg, Steve Lu, R. Ostrovsky, Alessandra Scafuro","doi":"10.1145/2746539.2746593","DOIUrl":"https://doi.org/10.1145/2746539.2746593","url":null,"abstract":"Yao's garbled circuit construction is a very fundamental result in cryptography and recent efficiency optimizations have brought it much closer to practice. However these constructions work only for circuits and garbling a RAM program involves the inefficient process of first converting it into a circuit. Towards the goal of avoiding this inefficiency, Lu and Ostrovsky (Eurocrypt 2013) introduced the notion of \"garbled RAM\" as a method to garble RAM programs directly. It can be seen as a RAM analogue of Yao's garbled circuits such that, the size of the garbled program and the time it takes to create and evaluate it, is proportional only to the running time on the RAM program rather than its circuit size. Known realizations of this primitive, either need to rely on strong computational assumptions or do not achieve the aforementioned efficiency (Gentry, Halevi, Lu, Ostrovsky, Raykova and Wichs, EUROCRYPT 2014). In this paper we provide the first construction with strictly poly-logarithmic overhead in both space and time based only on the minimal assumption that one-way functions exist. Our scheme allows for garbling multiple programs being executed on a persistent database, and has the additional feature that the program garbling is decoupled from the database garbling. This allows a client to provide multiple garbled programs to the server as part of a pre-processing phase and then later determine the order and the inputs on which these programs are to be executed, doing work independent of the running times of the programs itself.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87846520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximating the Nash Social Welfare with Indivisible Items. R. Cole, Vasilis Gkatzelis. DOI: 10.1145/2746539.2746589.

We study the problem of allocating a set of indivisible items among agents with additive valuations, with the goal of maximizing the geometric mean of the agents' valuations, i.e., the Nash social welfare. This problem is known to be NP-hard, and our main result is the first efficient constant-factor approximation algorithm for this objective. We first observe that the integrality gap of the natural fractional relaxation is exponential, so we propose a different fractional allocation which implies a tighter upper bound and, after appropriate rounding, yields a good integral allocation. An interesting contribution of this work is the fractional allocation that we use. The relaxation of our problem can be solved efficiently using the Eisenberg-Gale program, whose optimal solution can be interpreted as a market equilibrium with the dual variables playing the role of item prices. Using this market-based interpretation, we define an alternative equilibrium allocation where the amount of spending that can go into any given item is bounded, thus keeping the highly priced items under-allocated and forcing the agents to spend on lower-priced items. The resulting equilibrium prices reveal more information regarding how to assign items so as to obtain a good integral allocation.
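For reference, the objective and its standard fractional relaxation, the Eisenberg-Gale convex program stated here in its usual form, are
% Nash social welfare over an allocation x, with additive valuations
% v_i(x_i) = sum_j v_{ij} x_{ij}:
\[
  \max \Big( \prod_{i=1}^{n} v_i(x_i) \Big)^{1/n}
  \qquad\Longleftrightarrow\qquad
  \max \sum_{i=1}^{n} \log v_i(x_i)
  \quad \text{s.t.} \quad \sum_{i} x_{ij} \le 1 \;\; \forall j, \;\; x \ge 0,
\]
where the equivalence holds for the fractional problem because the logarithm is monotone; the paper's alternative allocation modifies the equilibrium of this program by capping per-item spending.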
{"title":"Approximating the Nash Social Welfare with Indivisible Items","authors":"R. Cole, Vasilis Gkatzelis","doi":"10.1145/2746539.2746589","DOIUrl":"https://doi.org/10.1145/2746539.2746589","url":null,"abstract":"We study the problem of allocating a set of indivisible items among agents with additive valuations, with the goal of maximizing the geometric mean of the agents' valuations, i.e., the Nash social welfare. This problem is known to be NP-hard, and our main result is the first efficient constant-factor approximation algorithm for this objective. We first observe that the integrality gap of the natural fractional relaxation is exponential, so we propose a different fractional allocation which implies a tighter upper bound and, after appropriate rounding, yields a good integral allocation. An interesting contribution of this work is the fractional allocation that we use. The relaxation of our problem can be solved efficiently using the Eisenberg-Gale program, whose optimal solution can be interpreted as a market equilibrium with the dual variables playing the role of item prices. Using this market-based interpretation, we define an alternative equilibrium allocation where the amount of spending that can go into any given item is bounded, thus keeping the highly priced items under-allocated, and forcing the agents to spend on lower priced items. The resulting equilibrium prices reveal more information regarding how to assign items so as to obtain a good integral allocation.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84446189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nearly-Linear Time Positive LP Solver with Faster Convergence Rate. Z. Zhu, L. Orecchia. DOI: 10.1145/2746539.2746573.

Positive linear programs (LPs), also known as packing and covering linear programs, are an important class of problems that bridges computer science, operations research, and optimization. Efficient algorithms for solving such LPs have received significant attention in the past 20 years [2, 3, 4, 6, 7, 9, 11, 15, 16, 18, 19, 21, 24, 25, 26, 29, 30]. Unfortunately, all known nearly-linear time algorithms for producing (1+ε)-approximate solutions to positive LPs have a running time dependence that is at least proportional to ε^{-2}. This is also known as an O(1/√T) convergence rate and is particularly poor in many applications. In this paper, we leverage insights from optimization theory to break this longstanding barrier. Our algorithms solve the packing LP in time Õ(N ε^{-1}) and the covering LP in time Õ(N ε^{-1.5}). At a high level, they can be described as linear couplings of several first-order descent steps. This is the first application of our linear coupling technique (see [1]) to problems that are not amenable to black-box applications of known iterative algorithms in convex optimization. Our work also introduces a sequence of new techniques, including the stochastic and the non-symmetric execution of gradient truncation operations, which may be of independent interest.
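For reference, in the standard normalized form (the normalization here is an assumption, consistent with the usual convention in this literature), packing and covering LPs form the dual pair
% A is a nonnegative matrix with N nonzero entries; "positive" refers to
% the nonnegativity of A, the variables, and the constraint bounds.
\[
  \text{Packing:}\;\; \max_{x \ge 0} \; \mathbf{1}^{\top} x \;\;\text{s.t.}\;\; A x \le \mathbf{1},
  \qquad
  \text{Covering:}\;\; \min_{y \ge 0} \; \mathbf{1}^{\top} y \;\;\text{s.t.}\;\; A^{\top} y \ge \mathbf{1}.
\]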
{"title":"Nearly-Linear Time Positive LP Solver with Faster Convergence Rate","authors":"Z. Zhu, L. Orecchia","doi":"10.1145/2746539.2746573","DOIUrl":"https://doi.org/10.1145/2746539.2746573","url":null,"abstract":"Positive linear programs (LP), also known as packing and covering linear programs, are an important class of problems that bridges computer science, operation research, and optimization. Efficient algorithms for solving such LPs have received significant attention in the past 20 years [2, 3, 4, 6, 7, 9, 11, 15, 16, 18, 19, 21, 24, 25, 26, 29, 30]. Unfortunately, all known nearly-linear time algorithms for producing (1+ε)-approximate solutions to positive LPs have a running time dependence that is at least proportional to ε-2. This is also known as an O(1/√T) convergence rate and is particularly poor in many applications. In this paper, we leverage insights from optimization theory to break this longstanding barrier. Our algorithms solve the packing LP in time ~O(N ε-1) and the covering LP in time ~O(N ε-1.5). At high level, they can be described as linear couplings of several first-order descent steps. This is the first application of our linear coupling technique (see [1]) to problems that are not amenable to blackbox applications known iterative algorithms in convex optimization. Our work also introduces a sequence of new techniques, including the stochastic and the non-symmetric execution of gradient truncation operations, which may be of independent interest.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88391564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveled Fully Homomorphic Signatures from Standard Lattices. D. Wichs. DOI: 10.1145/2746539.2746576.

In a homomorphic signature scheme, a user Alice signs some large dataset x using her secret signing key and uploads the signed data to an untrusted remote server. The server can then run some computation y=f(x) over the signed data and homomorphically derive a short signature σ_{f,y} certifying that y is the correct output of the computation f. Anybody can verify the tuple (f, y, σ_{f,y}) using Alice's public verification key and become convinced of this fact without having to retrieve the entire underlying data. In this work, we construct the first leveled fully homomorphic signature schemes that can evaluate arbitrary circuits over signed data. Only the maximal depth d of the circuits needs to be fixed a priori at setup, and the size of the evaluated signature grows polynomially in d, but is otherwise independent of the circuit size or the data size. Our solution is based on the (sub-exponential) hardness of the small integer solution (SIS) problem in standard lattices and satisfies full (adaptive) security. In the standard model, we get a scheme with large public parameters whose size exceeds the total size of a dataset. In the random-oracle model, we get a scheme with short public parameters. In both cases, the schemes can be used to sign many different datasets. The complexity of verifying a signature for a computation f is at least as large as that of computing f, but can be amortized when verifying the same computation over many different datasets. Furthermore, the signatures can be made context-hiding so as not to reveal anything about the data beyond the outcome of the computation. These results offer a significant improvement in capabilities and assumptions over the best prior homomorphic signature schemes, which were limited to evaluating polynomials of constant degree. As a building block of independent interest, we introduce a new notion called homomorphic trapdoor functions (HTDF), which conceptually unites homomorphic encryption and signatures. We construct HTDFs by relying on the techniques developed by Gentry et al. (CRYPTO '13) and Boneh et al. (EUROCRYPT '14) in the contexts of fully homomorphic and attribute-based encryption.
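To fix the syntax, the scheme can be summarized by the following relation, where the algorithm names Eval and Verify are illustrative labels rather than the paper's exact interface:
% The server publicly derives a signature on the output of f from the
% signed dataset; verification needs f and y but not x itself.
\[
  \sigma_{f,y} \leftarrow \mathrm{Eval}\big(f, (x, \sigma_x)\big),
  \qquad
  \mathrm{Verify}\big(vk, f, y, \sigma_{f,y}\big) = \mathrm{accept},
  \qquad
  |\sigma_{f,y}| = \mathrm{poly}(d),
\]
with correctness guaranteeing that honestly derived signatures verify, and unforgeability guaranteeing that no accepting signature can be produced for any y ≠ f(x).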
{"title":"Leveled Fully Homomorphic Signatures from Standard Lattices","authors":"D. Wichs","doi":"10.1145/2746539.2746576","DOIUrl":"https://doi.org/10.1145/2746539.2746576","url":null,"abstract":"In a homomorphic signature scheme, a user Alice signs some large dataset x using her secret signing key and uploads the signed data to an untrusted remote server. The server can then run some computation y=f(x) over the signed data and homomorphically derive a short signature σf,y certifying that y is the correct output of the computation f. Anybody can verify the tuple (f, y, σf,y) using Alice's public verification key and become convinced of this fact without having to retrieve the entire underlying data. In this work, we construct the first leveled fully homomorphic signature} schemes that can evaluate arbitrary {circuits} over signed data. Only the maximal {depth} d of the circuits needs to be fixed a-priori at setup, and the size of the evaluated signature grows polynomially in d, but is otherwise independent of the circuit size or the data size. Our solution is based on the (sub-exponential) hardness of the small integer solution (SIS) problem in standard lattices and satisfies full (adaptive) security. In the standard model, we get a scheme with large public parameters whose size exceeds the total size of a dataset. In the random-oracle model, we get a scheme with short public parameters. In both cases, the schemes can be used to sign many different datasets. The complexity of verifying a signature for a computation f is at least as large as that of computing f, but can be amortized when verifying the same computation over many different datasets. Furthermore, the signatures can be made context-hiding so as not to reveal anything about the data beyond the outcome of the computation. These results offer a significant improvement in capabilities and assumptions over the best prior homomorphic signature schemes, which were limited to evaluating polynomials of constant degree. As a building block of independent interest, we introduce a new notion called homomorphic trapdoor functions (HTDF) which conceptually unites homomorphic encryption and signatures. We construct HTDFs by relying on the techniques developed by Gentry et al. (CRYPTO '13) and Boneh et al. (EUROCRYPT '14) in the contexts of fully homomorphic and attribute-based encryptions.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85490830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantum Information Complexity. D. Touchette. DOI: 10.1145/2746539.2746613.

We define a new notion of information cost for quantum protocols, and a corresponding notion of quantum information complexity for bipartite quantum tasks. These are the fully quantum generalizations of the analogous quantities for bipartite classical tasks that have found many applications recently, in particular for proving communication complexity lower bounds and direct sum theorems. Finding such a quantum generalization of information complexity was one of the open problems recently raised by Braverman (STOC'12). Previous attempts have been made to define such a quantity for quantum protocols, with particular applications in mind; our notion differs from these in many respects. First, it directly provides a lower bound on the quantum communication cost, independent of the number of rounds of the underlying protocol. Second, we provide an operational interpretation for quantum information complexity: we show that it is exactly equal to the amortized quantum communication complexity of a bipartite task on a given input. This generalizes a result of Braverman and Rao (FOCS'11) to quantum protocols. Along the way to proving this result, we strengthen the classical result in the bounded-round scenario, and also prove important structural properties of quantum information cost and complexity. We prove that using this definition leads to the first general direct sum theorem for bounded-round quantum communication complexity. Previous direct sum results in quantum communication complexity either held for some particular classes of functions, or were general but only held for single-round protocols. We also discuss potential applications of the new quantities to obtain lower bounds on quantum communication complexity.
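The operational interpretation can be written compactly as follows, where QIC and QCC denote the quantum information complexity and quantum communication complexity of a task T at error ε; this shorthand is mine and need not match the paper's notation exactly:
% Amortized communication of n parallel copies converges to the
% information complexity of a single copy.
\[
  \mathrm{QIC}(T, \varepsilon) \;=\; \lim_{n \to \infty} \frac{\mathrm{QCC}(T^{\otimes n}, \varepsilon)}{n}.
\]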
{"title":"Quantum Information Complexity","authors":"D. Touchette","doi":"10.1145/2746539.2746613","DOIUrl":"https://doi.org/10.1145/2746539.2746613","url":null,"abstract":"We define a new notion of information cost for quantum protocols, and a corresponding notion of quantum information complexity for bipartite quantum tasks. These are the fully quantum generalizations of the analogous quantities for bipartite classical tasks that have found many applications recently, in particular for proving communication complexity lower bounds and direct sum theorems. Finding such a quantum generalization of information complexity was one of the open problems recently raised by Braverman (STOC'12). Previous attempts have been made to define such a quantity for quantum protocols, with particular applications in mind; our notion differs from these in many respects. First, it directly provides a lower bound on the quantum communication cost, independent of the number of rounds of the underlying protocol. Secondly, we provide an operational interpretation for quantum information complexity: we show that it is exactly equal to the amortized quantum communication complexity of a bipartite task on a given input. This generalizes a result of Braverman and Rao (FOCS'11) to quantum protocols. Along the way to prove this result, we even strengthens the classical result in a bounded round scenario, and also prove important structural properties of quantum information cost and complexity. We prove that using this definition leads to the first general direct sum theorem for bounded round quantum communication complexity. Previous direct sum results in quantum communication complexity either held for some particular classes of functions, or were general but only held for single-round protocols. We also discuss potential applications of the new quantities to obtain lower bounds on quantum communication complexity.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75182372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High Parallel Complexity Graphs and Memory-Hard Functions. J. Alwen, Vladimir Serbinenko. DOI: 10.1145/2746539.2746622.

We develop new theoretical tools for proving lower bounds on the (amortized) complexity of certain functions in models of parallel computation. We apply the tools to construct a class of functions with high amortized memory complexity in the parallel Random Oracle Model (pROM), a variant of the standard ROM allowing for batches of simultaneous queries. In particular, we obtain a new, more robust type of Memory-Hard Function (MHF), a security primitive which has recently been gaining acceptance in practice as an effective means of countering brute-force attacks on security-relevant functions. Along the way we also demonstrate an important shortcoming of previous definitions of MHFs and give a new definition addressing the problem. The tools we develop represent an adaptation of the powerful pebbling paradigm (initially introduced by Hewitt and Paterson [HP70] and Cook [Coo73]) to a simple and intuitive parallel setting. We define a simple pebbling game G_p over graphs which aims to abstract parallel computation in an intuitive way. As a conceptual contribution, we define a measure of pebbling complexity for graphs called cumulative complexity (CC) and show how it overcomes a crucial shortcoming (in the parallel setting) exhibited by the more traditional complexity measures used in the past. As a main technical contribution, we give an explicit construction of a constant in-degree family of graphs whose CC in G_p approaches maximality to within a polylogarithmic factor for any graph of equal size (analogous to the graphs of Tarjan et al. [PTC76, LT82] for sequential pebbling games). Finally, for a given graph G and related function f_G, we derive a lower bound on the amortized memory complexity of f_G in the pROM in terms of the CC of G in the game G_p.
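Cumulative complexity has a simple closed form. If a pebbling P of a graph proceeds in steps t = 1, …, T and P_t denotes the set of pebbles on the graph after step t, then, in the notation of the pebbling literature adopted here,
% Memory is charged summed over time, so running many instances in
% parallel cannot amortize the per-instance cost away.
\[
  \mathrm{cc}(P) \;=\; \sum_{t=1}^{T} |P_t|,
\]
in contrast with the traditional space measure max_t |P_t|, which a parallel attacker evaluating many instances can effectively amortize.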
{"title":"High Parallel Complexity Graphs and Memory-Hard Functions","authors":"J. Alwen, Vladimir Serbinenko","doi":"10.1145/2746539.2746622","DOIUrl":"https://doi.org/10.1145/2746539.2746622","url":null,"abstract":"We develop new theoretical tools for proving lower-bounds on the (amortized) complexity of certain functions in models of parallel computation. We apply the tools to construct a class of functions with high amortized memory complexity in the *parallel* Random Oracle Model (pROM); a variant of the standard ROM allowing for batches of *simultaneous* queries. In particular we obtain a new, more robust, type of Memory-Hard Functions (MHF); a security primitive which has recently been gaining acceptance in practice as an effective means of countering brute-force attacks on security relevant functions. Along the way we also demonstrate an important shortcoming of previous definitions of MHFs and give a new definition addressing the problem. The tools we develop represent an adaptation of the powerful pebbling paradigm (initially introduced by Hewitt and Paterson [HP70] and Cook [Coo73]) to a simple and intuitive parallel setting. We define a simple pebbling game Gp over graphs which aims to abstract parallel computation in an intuitive way. As a conceptual contribution we define a measure of pebbling complexity for graphs called *cumulative complexity* (CC) and show how it overcomes a crucial shortcoming (in the parallel setting) exhibited by more traditional complexity measures used in the past. As a main technical contribution we give an explicit construction of a constant in-degree family of graphs whose CC in Gp approaches maximality to within a polylogarithmic factor for any graph of equal size (analogous to the graphs of Tarjan et. al. [PTC76, LT82] for sequential pebbling games). Finally, for a given graph G and related function fG, we derive a lower-bound on the amortized memory complexity of fG in the pROM in terms of the CC of G in the game Gp.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72593746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Small Value Parallel Repetition for General Games. M. Braverman, A. Garg. DOI: 10.1145/2746539.2746565.

We prove a parallel repetition theorem for general games with value tending to 0. Previously, Dinur and Steurer proved such a theorem for the special case of projection games. We use information-theoretic techniques in our proof. Our proofs also extend to the high-value regime (value close to 1) and provide alternate proofs for the parallel repetition theorems of Holenstein and Rao for general and projection games, respectively. We also extend the example of Feige and Verbitsky to show that the small-value parallel repetition bound we obtain is tight. Our techniques are elementary, in that we only need to employ basic information theory and discrete probability in the small-value parallel repetition proof.
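As a reminder of the setup (standard definitions, stated here for convenience): in the n-fold repetition G^n the referee runs n independent copies of the game simultaneously and accepts iff all n coordinates accept. Playing each coordinate independently shows
% Lower bound is trivial; a parallel repetition theorem is the matching
% upper bound showing correlated strategies cannot do much better.
\[
  \mathrm{val}(G)^{\,n} \;\le\; \mathrm{val}(G^{n}),
\]
and the small-value regime asks how quickly val(G^n) decays when val(G) is already close to 0.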
{"title":"Small Value Parallel Repetition for General Games","authors":"M. Braverman, A. Garg","doi":"10.1145/2746539.2746565","DOIUrl":"https://doi.org/10.1145/2746539.2746565","url":null,"abstract":"We prove a parallel repetition theorem for general games with value tending to 0. Previously Dinur and Steurer proved such a theorem for the special case of projection games. We use information theoretic techniques in our proof. Our proofs also extend to the high value regime (value close to 1) and provide alternate proofs for the parallel repetition theorems of Holenstein and Rao for general and projection games respectively. We also extend the example of Feige and Verbitsky to show that the small-value parallel repetition bound we obtain is tight. Our techniques are elementary in that we only need to employ basic information theory and discrete probability in the small-value parallel repetition proof.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80306046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Noisy Population Recovery, and Reverse Bonami-Beckner Inequality for Sparse Functions. Shachar Lovett, Jiapeng Zhang. DOI: 10.1145/2746539.2746540.

The noisy population recovery problem is a basic statistical inference problem. Given an unknown distribution on {0,1}^n with support of size k, and given access only to noisy samples from it, where each bit is flipped independently with probability (1-μ)/2, estimate the original probability up to an additive error of ε. We give an algorithm which solves this problem in time polynomial in (k^{log log k}, n, 1/ε). This improves on the previous algorithm of Wigderson and Yehudayoff [FOCS 2012], which solves the problem in time polynomial in (k^{log k}, n, 1/ε). Our main technical contribution, which facilitates the algorithm, is a new reverse Bonami-Beckner inequality for the L_1 norm of sparse functions.
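The noise model admits a one-line description that explains the role of μ (a standard calculation, included here for convenience): a sample y is a draw x from the unknown distribution with each bit flipped independently with probability (1-μ)/2, so
% Each ±1 coordinate is attenuated by a factor of exactly μ, which is
% the setting of the Bonami-Beckner noise operator T_mu.
\[
  \mathbb{E}\big[(-1)^{y_i} \,\big|\, x\big]
  \;=\; \tfrac{1+\mu}{2}(-1)^{x_i} - \tfrac{1-\mu}{2}(-1)^{x_i}
  \;=\; \mu\,(-1)^{x_i}.
\]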
{"title":"Improved Noisy Population Recovery, and Reverse Bonami-Beckner Inequality for Sparse Functions","authors":"Shachar Lovett, Jiapeng Zhang","doi":"10.1145/2746539.2746540","DOIUrl":"https://doi.org/10.1145/2746539.2746540","url":null,"abstract":"The noisy population recovery problem is a basic statistical inference problem. Given an unknown distribution in {0,1}n with support of size k, and given access only to noisy samples from it, where each bit is flipped independently with probability (1-μ)/2, estimate the original probability up to an additive error of ε. We give an algorithm which solves this problem in time polynomial in (klog log k, n, 1/ε). This improves on the previous algorithm of Wigderson and Yehudayoff [FOCS 2012] which solves the problem in time polynomial in (klog k, n, 1/ε). Our main technical contribution, which facilitates the algorithm, is a new reverse Bonami-Beckner inequality for the L1 norm of sparse functions.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85159564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Succinct Garbling and Indistinguishability Obfuscation for RAM Programs. R. Canetti, Justin Holmgren, Abhishek Jain, V. Vaikuntanathan. DOI: 10.1145/2746539.2746621.
We show how to construct succinct indistinguishability obfuscation (IO) schemes for RAM programs. That is, given a RAM program whose computation requires space S and time T, we generate an obfuscated RAM program with size and space requirements of Õ(S) and runtime Õ(T). The construction uses non-succinct IO (i.e., IO for circuits) and injective one-way functions, both with sub-exponential security. A main component in our scheme is a succinct garbling scheme for RAM programs. Our garbling scheme has the same size, space and runtime parameters as above, and requires only polynomial security of the underlying primitives. This scheme has other qualitatively new applications, such as publicly verifiable succinct non-interactive delegation of computation and succinct functional encryption.
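Schematically, succinctness here means the obfuscator's output scales with the program's space bound rather than its running time; the security parameter κ below and the polylogarithmic factors hidden by Õ are notational assumptions for illustration:
% Output size tracks the space bound S; only evaluation time tracks T.
\[
  |\widetilde{P}| \;=\; \widetilde{O}(S) \cdot \mathrm{poly}(\kappa),
  \qquad
  T_{\mathrm{eval}}(\widetilde{P}) \;=\; \widetilde{O}(T) \cdot \mathrm{poly}(\kappa),
\]
in contrast with circuit obfuscation applied to the unrolled computation, whose output size is at least proportional to T.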
{"title":"Succinct Garbling and Indistinguishability Obfuscation for RAM Programs","authors":"R. Canetti, Justin Holmgren, Abhishek Jain, V. Vaikuntanathan","doi":"10.1145/2746539.2746621","DOIUrl":"https://doi.org/10.1145/2746539.2746621","url":null,"abstract":"We show how to construct succinct Indistinguishability Obfuscation (IO) schemes for RAM programs. That is, given a RAM program whose computation requires space S and time T, we generate a RAM program with size and space requirements of ~O(S) and runtime ~O(T). The construction uses non-succinct IO (i.e., IO for circuits) and injective one way functions, both with sub-exponential security. A main component in our scheme is a succinct garbling scheme for RAM programs. Our garbling scheme has the same size, space and runtime parameters as above, and requires only polynomial security of the underlying primitives. This scheme has other qualitatively new applications such as publicly verifiable succinct non-interactive delegation of computation and succinct functional encryption.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87345038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates. Z. Zhu, Zhenyu A. Liao, L. Orecchia. DOI: 10.1145/2746539.2746610.

In this paper, we provide a novel construction of the linear-sized spectral sparsifiers of Batson, Spielman and Srivastava [11]. While previous constructions required Ω(n^4) running time [11, 45], our sparsification routine can be implemented in almost-quadratic running time O(n^{2+ε}). The fundamental conceptual novelty of our work is that it leverages a strong connection between sparsification and a regret minimization problem over density matrices. This connection was known to provide an interpretation of the randomized sparsifiers of Spielman and Srivastava [39] via the application of matrix multiplicative weight updates (MWU) [17, 43]. In this paper, we explain how matrix MWU naturally arises as an instance of the Follow-the-Regularized-Leader framework and generalize this approach to yield a larger class of updates. This new class allows us to accelerate the construction of linear-sized spectral sparsifiers, and gives novel insights on the motivation behind Batson, Spielman and Srivastava [11].
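For context, recall the standard definition (included here for convenience): a (1 ± ε) spectral sparsifier of a graph G on n vertices is a reweighted subgraph H satisfying
% L_G and L_H are the graph Laplacians; the quadratic form is preserved
% on every test vector, not just on indicator vectors of cuts.
\[
  (1-\varepsilon)\, x^{\top} L_G\, x \;\le\; x^{\top} L_H\, x \;\le\; (1+\varepsilon)\, x^{\top} L_G\, x
  \qquad \text{for all } x \in \mathbb{R}^{n},
\]
and "linear-sized" means H has O(n/ε^2) edges, independent of the number of edges of G.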
{"title":"Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates","authors":"Z. Zhu, Zhenyu A. Liao, L. Orecchia","doi":"10.1145/2746539.2746610","DOIUrl":"https://doi.org/10.1145/2746539.2746610","url":null,"abstract":"In this paper, we provide a novel construction of the linear-sized spectral sparsifiers of Batson, Spielman and Srivastava [11]. While previous constructions required Ω(n4) running time [11, 45], our sparsification routine can be implemented in almost-quadratic running time O(n2+ε). The fundamental conceptual novelty of our work is the leveraging of a strong connection between sparsification and a regret minimization problem over density matrices. This connection was known to provide an interpretation of the randomized sparsifiers of Spielman and Srivastava [39] via the application of matrix multiplicative weight updates (MWU) [17, 43]. In this paper, we explain how matrix MWU naturally arises as an instance of the Follow-the-Regularized-Leader framework and generalize this approach to yield a larger class of updates. This new class allows us to accelerate the construction of linear-sized spectral sparsifiers, and give novel insights on the motivation behind Batson, Spielman and Srivastava [11].","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81503638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}