Deterministic Search for CNF Satisfying Assignments in Almost Polynomial Time
R. Servedio, Li-Yang Tan (doi:10.1109/FOCS.2017.80)

We consider the fundamental derandomization problem of deterministically finding a satisfying assignment to a CNF formula that has many satisfying assignments. We give a deterministic algorithm which, given an n-variable poly(n)-clause CNF formula F that has at least ε · 2^n satisfying assignments, runs in time n^{Õ(log log n)^2} for ε ≥ 1/polylog(n) and outputs a satisfying assignment of F. Prior to our work, the fastest known algorithm for this problem was simply to enumerate over all seeds of a pseudorandom generator for CNFs; using the best known PRGs for CNFs [DETT10], this takes time n^{Ω̃(log n)} even for constant ε. Our approach is based on a new general framework relating deterministic search and deterministic approximate counting, which we believe may find further applications.
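For concreteness, the baseline the abstract compares against is seed enumeration: try the formula on the PRG's output at every seed. A minimal sketch, where `F` and `prg` are caller-supplied stand-ins (the actual PRG of [DETT10] is not implemented here):

```python
from itertools import product

def search_by_seed_enumeration(F, prg, seed_len):
    # If prg eps-fools CNFs and F is satisfied by an eps' > eps fraction of
    # assignments, then the fraction of seeds with F(prg(seed)) = 1 is at
    # least eps' - eps > 0, so some seed must yield a satisfying assignment.
    # Runtime: 2^seed_len evaluations, hence n^{Omega~(log n)} for the best
    # known CNF PRGs, whose seed length is polylogarithmic in n.
    for seed in product([0, 1], repeat=seed_len):
        x = prg(seed)          # candidate assignment in {0,1}^n
        if F(x):               # F: tuple of bits -> bool
            return x
    return None
```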
{"title":"Deterministic Search for CNF Satisfying Assignments in Almost Polynomial Time","authors":"R. Servedio, Li-Yang Tan","doi":"10.1109/FOCS.2017.80","DOIUrl":"https://doi.org/10.1109/FOCS.2017.80","url":null,"abstract":"We consider the fundamental derandomization problem of deterministically finding a satisfying assignment to a CNF formula that has many satisfying assignments. We give a deterministic algorithm which, given an n-variable poly(n)-clause CNF formula F that has at least ≥ 2^n satisfying assignments, runs in time [ n^{tilde{O}(loglog n)^2} ] for ≥ ge 1/polylog(n) and outputs a satisfying assignment of F. Prior to our work the fastest known algorithm for this problem was simply to enumerate over all seeds of a pseudorandom generator for CNFs; using the best known PRGs for CNFs cite{DETT10, this takes time n^{tilde{Ω}(log n)} even for constant ≥. Our approach is based on a new general framework relating deterministic search and deterministic approximate counting, which we believe may find further applications.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127081268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Garbled Protocols and Two-Round MPC from Bilinear Maps
Sanjam Garg, Akshayaram Srinivasan (doi:10.1109/FOCS.2017.60)

In this paper, we initiate the study of garbled protocols, a generalization of Yao's garbled circuits construction to distributed protocols. More specifically, in a garbled protocol construction, each party can independently generate a garbled protocol component along with pairs of input labels. Additionally, it generates an encoding of its input. The evaluation procedure takes as input the set of all garbled protocol components and the labels corresponding to the input encodings of all parties, and outputs the entire transcript of the distributed protocol.

We provide constructions for garbling arbitrary protocols based on standard computational assumptions on bilinear maps (in the common random string model). Next, using garbled protocols we obtain a general compiler that compresses any multiparty secure computation protocol, with an arbitrary number of rounds, into a two-round UC-secure protocol. Previously, two-round multiparty secure computation protocols were known only assuming witness encryption or learning with errors. Benefiting from our generic approach, we also obtain protocols (i) for the setting of random access machines (RAM programs) that keep communication and computational costs proportional to running times, while (ii) making only black-box use of the underlying group, eliminating the need for any expensive non-black-box group operations. Our results are obtained by a simple but powerful extension of the non-interactive zero-knowledge proof system of Groth, Ostrovsky and Sahai [Journal of the ACM, 2012].
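The label mechanism in the second sentence can be illustrated on the smallest possible case. The toy below (one party, one input bit, hash-based one-time pad) only shows how a pair of labels plus a garbled table releases f(b) to the holder of the b-th label; it is emphatically not the paper's bilinear-map construction, and all names are ours:

```python
import os, hashlib

def garble_bit_function(f):
    # One pair of labels for the single input bit; the garbled "component"
    # is a table whose b-th row decrypts to f(b) under labels[b].
    labels = (os.urandom(16), os.urandom(16))
    mask = lambda key: hashlib.sha256(key).digest()[0] & 1
    table = [mask(labels[b]) ^ f(b) for b in (0, 1)]
    return labels, table

def evaluate(label, b, table):
    # The holder of labels[b] (the encoding of input bit b) recovers f(b)
    # and learns nothing about the other row.
    return (hashlib.sha256(label).digest()[0] & 1) ^ table[b]

labels, table = garble_bit_function(lambda b: 1 - b)   # f = NOT
assert evaluate(labels[0], 0, table) == 1
assert evaluate(labels[1], 1, table) == 0
```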
{"title":"Garbled Protocols and Two-Round MPC from Bilinear Maps","authors":"Sanjam Garg, Akshayaram Srinivasan","doi":"10.1109/FOCS.2017.60","DOIUrl":"https://doi.org/10.1109/FOCS.2017.60","url":null,"abstract":"In this paper, we initiate the study of garbled protocols — a generalization of Yaos garbled circuits construction to distributed protocols. More specifically, in a garbled protocol construction, each party can independently generate a garbled protocol component along with pairs of input labels. Additionally, it generates an encoding of its input. The evaluation procedure takes as input the set of all garbled protocol components and the labels corresponding to the input encodings of all parties and outputs the entire transcript of the distributed protocol.We provide constructions for garbling arbitrary protocols based on standard computational assumptions on bilinear maps (in the common random string model). Next, using garbled protocols we obtain a general compiler that compresses any arbitrary round multiparty secure computation protocol into a two-round UC secure protocol. Previously, two-round multiparty secure computation protocols were only known assuming witness encryption or learning-with errors. Benefiting from our generic approach we also obtain protocols (i) for the setting of random access machines (RAM programs) while keeping communication and computational costs proportional to running times, while (ii) making only a black-box use of the underlying group, eliminating the need for any expensive non-black-box group operations. Our results are obtained by a simple but powerful extension of the non-interactive zero-knowledge proof system of Groth, Ostrovsky and Sahai [Journal of ACM, 2012].","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123901704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Capacity of Neural Networks for Lifelong Learning of Composable Tasks
L. Valiant (doi:10.1109/FOCS.2017.41)

We investigate neural circuits in the exacting setting that (i) the acquisition of a piece of knowledge can occur from a single interaction, (ii) the result of each such interaction is a rapidly evaluatable subcircuit, (iii) hundreds of thousands of such subcircuits can be acquired in sequence without substantially degrading the earlier ones, and (iv) recall can take the form of a rapid evaluation of a composition of subcircuits acquired at arbitrary different earlier times.

We develop a complexity theory, in terms of asymptotically matching upper and lower bounds, on the capacity of a neural network for executing, in this setting, the following action, which we call association: each action sets up a subcircuit so that the excitation of a chosen set of neurons A will in future cause the excitation of another chosen set B. As the model of computation we consider the neuroidal model, a fully distributed model in which the quantitative resources are all accounted for: n, the number of neurons; d, the number of other neurons each neuron is connected to; and k, the inverse of the maximum synaptic strength.

A succession of experiences, possibly over a lifetime, results in the realization of a complex set of subcircuits. The composability requirement constrains the model to ensure that, for each association realized by a subcircuit, the excitation in the triggering set of neurons A is quantitatively similar to that in the triggered set B, and also that the unintended excitation in the rest of the system is negligible. These requirements ensure that chains of associations can be triggered.

We first analyze what we call the Basic Mechanism, which uses only direct connections between neurons in the triggering set A and the target set B. We consider random networks of n neurons with expected number d of connections to and from each. We show that in the composable context capacity growth is limited by d^2, a severe limitation if the network is sparse, as it is in cortex. We go on to study the Expansive Mechanism, which additionally uses intermediate relay neurons with high synaptic weights. For this mechanism we show that the capacity can grow as dn, to within logarithmic factors. From these two results it follows that in the composable regime, for the realistic cortical estimate of d = n^{1/2}, superlinear capacity of order n^{3/2} in terms of the number of neurons can be realized by the Expansive Mechanism, instead of the linear order n to which the Basic Mechanism is limited. More generally, for both mechanisms, we establish matching upper and lower bounds on capacity in terms of the parameters n, d, and the inverse maximum synaptic strength k.

The results as stated above assume that in a set of associations, a target B can be triggered by at most one set A. It can be shown that the capacities are similar if the number m of As that can trigger a B is greater than one but small, but become severely constrained if m exceeds a c
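The quantitative gap between the two mechanisms is easy to evaluate at the cortical estimate d = n^{1/2} quoted above. A small sanity-check script (constants and logarithmic factors suppressed; the bound names are ours):

```python
import math

def capacities(n):
    """Worked instance of the abstract's bounds: with d = n^(1/2)
    connections per neuron, the Basic Mechanism's capacity scales as
    d^2 = n (linear), while the Expansive Mechanism's scales as
    d * n = n^(3/2) (superlinear)."""
    d = math.isqrt(n)
    return {"d": d, "basic ~ d^2": d * d, "expansive ~ d*n": d * n}

print(capacities(10**10))
# {'d': 100000, 'basic ~ d^2': 10^10, 'expansive ~ d*n': 10^15}
```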
{"title":"Capacity of Neural Networks for Lifelong Learning of Composable Tasks","authors":"L. Valiant","doi":"10.1109/FOCS.2017.41","DOIUrl":"https://doi.org/10.1109/FOCS.2017.41","url":null,"abstract":"We investigate neural circuits in the exacting setting that (i) the acquisition of a piece of knowledge can occur from a single interaction, (ii) the result of each such interaction is a rapidly evaluatable subcircuit, (iii) hundreds of thousands of such subcircuits can be acquired in sequence without substantially degrading the earlier ones, and (iv) recall can be in the form of a rapid evaluation of a composition of subcircuits that have been so acquired at arbitrary different earlier times.We develop a complexity theory, in terms of asymptotically matching upper and lower bounds, on the capacity of a neural network for executing, in this setting, the following action, which we call {it association}: Each action sets up a subcircuit so that the excitation of a chosen set of neurons A will in future cause the excitation of another chosen set B.% As model of computation we consider the neuroidal model, a fully distributed model in which the quantitative resources n, the neuron numbers, d, the number of other neurons each neuron is connected to, and k, the inverse of the maximum synaptic strength, are all accounted for.A succession of experiences, possibly over a lifetime, results in the realization of a complex set of subcircuits. The composability requirement constrains the model to ensure that, for each association as realized by a subcircuit, the excitation in the triggering set of neurons A is quantitatively similar to that in the triggered set B, and also that the unintended excitation in the rest of the system is negligible. These requirements ensure that chains of associations can be triggeredWe first analyze what we call the Basic Mechanism, which uses only direct connections between neurons in the triggering set A and the target set B. We consider random networks of n neurons with expected number d of connections to and from each. We show that in the composable context capacity growth is limited by d^2, a severe limitation if the network is sparse, as it is in cortex. We go on to study the Expansive Mechanism, that additionally uses intermediate relay neurons which have high synaptic weights. For this mechanism we show that the capacity can grow as dn, to within logarithmic factors. From these two results it follows that in the composable regime, for the realistic cortical estimate of d=n^{frac{1}{2}, superlinear capacity of order n^{frac{3}{2}} in terms of the neuron numbers can be realized by the Expansive Mechanism, instead of the linear order n to which the Basic Mechanism is limited. More generally, for both mechanisms, we establish matching upper and lower bounds on capacity in terms of the parameters n, d, and the inverse maximum synaptic strength k.The results as stated above assume that in a set of associations, a target B can be triggered by at most one set A. 
It can be shown that the capacities are similar if the number m of As that can trigger a B is greater than one but small, but become severely constrained if m exceeds a c","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121483374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Bayesian Estimation from Few Samples: Community Detection and Related Problems
Samuel B. Hopkins, David Steurer (doi:10.1109/FOCS.2017.42)

We propose an efficient meta-algorithm for Bayesian inference problems based on low-degree polynomials, semidefinite programming, and tensor decomposition. The algorithm is inspired by recent lower bound constructions for sum-of-squares and related to the method of moments. Our focus is on sample complexity bounds that are as tight as possible (up to additive lower-order terms) and often achieve statistical thresholds or conjectured computational thresholds.

Our algorithm recovers the best known bounds for partial recovery in the stochastic block model, a widely studied class of inference problems for community detection in graphs. We obtain the first partial recovery guarantees for the mixed-membership stochastic block model (Airoldi et al.) for constant average degree, up to what we conjecture to be the computational threshold for this model. Our algorithm also captures smooth trade-offs between sample and computational complexity, for example, for tensor principal component analysis. We show that our algorithm exhibits a sharp computational threshold for the stochastic block model with multiple communities beyond the Kesten–Stigum bound, giving evidence that this task may require exponential time.

The basic strategy of our algorithm is strikingly simple: we compute the best-possible low-degree approximation of the moments of the posterior distribution of the parameters and use a robust tensor decomposition algorithm to recover the parameters from these approximate posterior moments.
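The second half of that strategy, recovering parameters from approximate moment tensors, can be illustrated with the simplest (non-robust) tensor power iteration. This is only a stand-in for the robust decomposition step; the rank-one tensor below plays the role of the low-degree posterior-moment approximation, and all names and parameters are ours:

```python
import numpy as np

def tensor_power_iteration(T, n_iters=50, restarts=10, seed=0):
    """Recover the dominant component of an approximately rank-one
    symmetric 3-tensor T by repeated application of v -> T(I, v, v),
    keeping the best of several random restarts."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    best_lam, best_v = -np.inf, None
    for _ in range(restarts):
        v = rng.normal(size=d)
        v /= np.linalg.norm(v)
        for _ in range(n_iters):
            v = np.einsum('ijk,j,k->i', T, v, v)   # T(I, v, v)
            v /= np.linalg.norm(v)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # T(v, v, v)
        if lam > best_lam:
            best_lam, best_v = lam, v
    return best_lam, best_v

# Toy usage: a rank-one tensor a (x) a (x) a plus noise stands in for the
# approximate posterior moments the meta-algorithm would compute.
rng = np.random.default_rng(1)
a = rng.normal(size=10); a /= np.linalg.norm(a)
T = np.einsum('i,j,k->ijk', a, a, a) + 0.01 * rng.normal(size=(10, 10, 10))
lam, v = tensor_power_iteration(T)
assert abs(abs(v @ a) - 1) < 0.1   # recovered direction is close to +/- a
```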
{"title":"Efficient Bayesian Estimation from Few Samples: Community Detection and Related Problems","authors":"Samuel B. Hopkins, David Steurer","doi":"10.1109/FOCS.2017.42","DOIUrl":"https://doi.org/10.1109/FOCS.2017.42","url":null,"abstract":"We propose an efficient meta-algorithm for Bayesian inference problems based on low-degree polynomials, semidefinite programming, and tensor decomposition. The algorithm is inspired by recent lower bound constructions for sum-of-squares and related to the method of moments. Our focus is on sample complexity bounds that are as tight as possible (up to additive lower-order terms) and often achieve statistical thresholds or conjectured computational thresholds.Our algorithm recovers the best known bounds for partial recovery in the stochastic block model, a widely-studied class of inference problems for community detection in graphs. We obtain the first partial recovery guarantees for the mixed-membership stochastic block model (Airoldi et el.) for constant average degree—up to what we conjecture to be the computational threshold for this model. %Our algorithm also captures smooth trade-offs between sample and computational complexity, for example, for tensor principal component analysis. We show that our algorithm exhibits a sharp computational threshold for the stochastic block model with multiple communities beyond the Kesten–Stigum bound—giving evidence that this task may require exponential time.The basic strategy of our algorithm is strikingly simple: we compute the best-possible low-degree approximation for the moments of the posterior distribution of the parameters and use a robust tensor decomposition algorithm to recover the parameters from these approximate posterior moments.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123375140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Power of Sum-of-Squares for Detecting Hidden Structures
Samuel B. Hopkins, Pravesh Kothari, Aaron Potechin, P. Raghavendra, T. Schramm, David Steurer (doi:10.1109/FOCS.2017.72)
We study planted problems, the problems of finding hidden structures in random noisy inputs, through the lens of the sum-of-squares semidefinite programming hierarchy (SoS). This family of powerful semidefinite programs has recently yielded many new algorithms for planted problems, often achieving the best known polynomial-time guarantees in terms of accuracy of recovered solutions and robustness to noise.

One theme in recent work is the design of spectral algorithms that match the guarantees of SoS algorithms for planted problems. Classical spectral algorithms are often unable to accomplish this: the twist in these new spectral algorithms is the use of the spectral structure of matrices whose entries are low-degree polynomials of the input variables.

We prove that for a wide class of planted problems, including refuting random constraint satisfaction problems, tensor and sparse PCA, densest k-subgraph, community detection in stochastic block models, planted clique, and others, eigenvalues of degree-d matrix polynomials are as powerful as SoS semidefinite programs of degree d. For such problems it is therefore always possible to match the guarantees of SoS without solving a large semidefinite program.

Using related ideas on SoS algorithms and low-degree matrix polynomials (and inspired by recent work on SoS and the planted clique problem [BHK+16]), we prove a new SoS lower bound for the tensor PCA problem.
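A minimal instance of such a spectral algorithm, for the spiked tensor (tensor PCA) model: the unfolding of the input tensor is an n × n² matrix whose entries are degree-1 polynomials of the input, and its top singular vector recovers the planted direction. This is an illustrative sketch with toy parameters of our choosing, not the paper's general construction:

```python
import numpy as np

def tensor_pca_spectral(T):
    """Spectral algorithm via a low-degree matrix polynomial: unfold the
    n x n x n input tensor into an n x n^2 matrix (entries are degree-1
    polynomials of the input) and read off its top left singular vector."""
    n = T.shape[0]
    M = T.reshape(n, n * n)
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    return u[:, 0]   # estimate of the planted direction, up to sign

rng = np.random.default_rng(0)
n, beta = 50, 8.0
v = rng.normal(size=n); v /= np.linalg.norm(v)
T = beta * np.einsum('i,j,k->ijk', v, v, v) \
    + rng.normal(size=(n, n, n)) / np.sqrt(n)      # spiked tensor + noise
v_hat = tensor_pca_spectral(T)
print(abs(v_hat @ v))   # close to 1 when the signal beta is strong enough
```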
{"title":"The Power of Sum-of-Squares for Detecting Hidden Structures","authors":"Samuel B. Hopkins, Pravesh Kothari, Aaron Potechin, P. Raghavendra, T. Schramm, David Steurer","doi":"10.1109/FOCS.2017.72","DOIUrl":"https://doi.org/10.1109/FOCS.2017.72","url":null,"abstract":"We study planted problems—finding hidden structures in random noisy inputs—through the lens of the sum-of-squares semidefinite programming hierarchy (SoS). This family of powerful semidefinite programs has recently yielded many new algorithms for planted problems, often achieving the best known polynomial-time guarantees in terms of accuracy of recovered solutions and robustness to noise. One theme in recent work is the design of spectral algorithms which match the guarantees of SoS algorithms for planted problems. Classical spectral algorithms are often unable to accomplish this: the twist in these new spectral algorithms is the use of spectral structure of matrices whose entries are low-degree polynomials of the input variables.We prove that for a wide class of planted problems, including refuting random constraint satisfaction problems, tensor and sparse PCA, densest-ksubgraph, community detection in stochastic block models, planted clique, and others, eigenvalues of degree-d matrix polynomials are as powerful as SoS semidefinite programs of degree d. For such problems it is therefore always possible to match the guarantees of SoS without solving a large semidefinite program.Using related ideas on SoS algorithms and lowdegree matrix polynomials (and inspired by recent work on SoS and the planted clique problem [BHK+16]), we prove a new SoS lower bound for the tensor PCA problem.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130189850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Random Formulas, Monotone Circuits, and Interpolation
P. Hrubes, P. Pudlák (doi:10.1109/FOCS.2017.20)

We prove new lower bounds on the sizes of proofs in the Cutting Planes proof system, using a concept that we call an unsatisfiability certificate. This approach is essentially equivalent to the well-known feasible interpolation method, but is applicable to CNF formulas that do not seem suitable for interpolation. Specifically, we prove exponential lower bounds for random k-CNFs, where k is the logarithm of the number of variables, and for the Weak Bit Pigeonhole Principle. Furthermore, we prove a monotone variant of a hypothesis of Feige [12]. We give a superpolynomial lower bound on monotone real circuits that approximately decide the satisfiability of k-CNFs, where k = ω(1). For k ≈ log n, the lower bound is exponential.
{"title":"Random Formulas, Monotone Circuits, and Interpolation","authors":"P. Hrubes, P. Pudlák","doi":"10.1109/FOCS.2017.20","DOIUrl":"https://doi.org/10.1109/FOCS.2017.20","url":null,"abstract":"We prove new lower bounds on the sizes of proofs in the Cutting Plane proof system, using a concept that we call unsatisfiability certificate. This approach is, essentially, equivalent to the well-known feasible interpolation method, but is applicable to CNF formulas that do not seem suitable for interpolation. Specifically, we prove exponential lower bounds for random k-CNFs, where k is the logarithm of the number of variables, and for the Weak Bit Pigeon Hole Principle. Furthermore, we prove a monotone variant of a hypothesis of Feige [12]. We give a superpolynomial lower bound on monotone real circuits that approximately decide the satisfiability of k-CNFs, where k = ω(1). For k ≈ logn, the lower bound is exponential.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128755289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing
Ilan Komargodski, M. Naor, E. Yogev (doi:10.1145/3341106)

Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given by a black box, which must be queried to learn whether a given edge exists, then a large number of queries must be issued. But what if one is given a program or circuit for computing the existence of an edge? This problem was raised by Buss and by Goldberg and Papadimitriou in the context of TFNP, the class of search problems with a guaranteed solution.

We examine the relationship between black-box complexity and white-box complexity for search problems with a guaranteed solution, such as the above Ramsey problem. We show that, under the assumption that collision-resistant hash functions exist (which follows from the hardness of problems such as factoring, discrete log, and learning with errors), the white-box Ramsey problem is hard; this is true even if one is looking for a much smaller clique or independent set than the theorem guarantees.

In general, one cannot hope to translate all black-box hardness for TFNP into white-box hardness: we show this by adapting results concerning the random oracle methodology and the impossibility of instantiating it.

Another model we consider is the succinct black box, where there is a known upper bound on the size of the black box (but no limit on the computation time). In this case we show that for all TFNP problems there is an upper bound on the number of queries proportional to the description size of the box times the solution size. On the other hand, for promise problems this is not the case.

Finally, we consider the complexity of graph property testing in the white-box model. We show a property that is hard to test even when one is given the program for computing the graph. The hard property is whether the graph is a two-source extractor.
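The claim about explicit graphs refers to the classical constructive argument, which finds a clique or independent set of size about ½ log₂ n while probing fewer than 2n edges. A sketch, where the predicate `edge(u, v)` plays the role of the explicit input (or the black box):

```python
def clique_or_independent_set(n, edge):
    """Repeatedly pick a pivot, probe its edges to the surviving vertices
    (|rest| probes per round, < 2n in total since the survivor set at
    least halves), and keep the larger of its neighbors/non-neighbors."""
    alive = list(range(n))
    chosen = []                                   # (vertex, kept_neighbors?)
    while alive:
        v, rest = alive[0], alive[1:]
        nbrs, non = [], []
        for u in rest:                            # |rest| edge probes
            (nbrs if edge(v, u) else non).append(u)
        keep = len(nbrs) >= len(non)              # larger side survives, so
        chosen.append((v, keep))                  # there are >= log2 n rounds
        alive = nbrs if keep else non
    clique = [v for v, k in chosen if k]          # pairwise adjacent: each later
    indep = [v for v, k in chosen if not k]       # pivot came from v's kept side
    return max(clique, indep, key=len)            # size >= (1/2) log2 n
```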
{"title":"White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing","authors":"Ilan Komargodski, M. Naor, E. Yogev","doi":"10.1145/3341106","DOIUrl":"https://doi.org/10.1145/3341106","url":null,"abstract":"Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given by a black-box, where to figure out whether a certain edge exists the box should be queried, then a large number of queries must be issued. But what if one is given a program or circuit for computing the existence of an edge? This problem was raised by Buss and Goldberg and Papadimitriou in the context of TFNP, search problems with a guaranteed solution.We examine the relationship between black-box complexity and white-box complexity for search problems with guaranteed solution such as the above Ramsey problem. We show that under the assumption that collision resistant hash function exist (which follows from the hardness of problems such as factoring, discrete-log and learning with errors) the white-box Ramsey problem is hard and this is true even if one is looking for a much smaller clique or independent set than the theorem guarantees.In general, one cannot hope to translate all black-box hardness for TFNP into white-box hardness: we show this by adapting results concerning the random oracle methodology and the impossibility of instantiating it.Another model we consider is the succinct black-box, where there is a known upper bound on the size of the black-box (but no limit on the computation time). In this case we show that for all TFNP problems there is an upper bound on the number of queries proportional to the description size of the box times the solution size. On the other hand, for promise problems this is not the case.Finally, we consider the complexity of graph property testing in the white-box model. We show a property which is hard to test even when one is given the program for computing the graph. The hard property is whether the graph is a two-source extractor.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128354122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lockable Obfuscation
Rishab Goyal, Venkata Koppula, Brent Waters (doi:10.1109/FOCS.2017.62)

In this paper we introduce the notion of lockable obfuscation. In a lockable obfuscation scheme there exists an obfuscation algorithm Obf that takes as input a security parameter, a program P, a message msg, and a lock value lck, and outputs an obfuscated program oP. One can evaluate the obfuscated program oP on any input x; the output of evaluation is the message msg if P(x) = lck, and a rejecting symbol otherwise.

We proceed to provide a construction of lockable obfuscation and prove it secure under the Learning with Errors (LWE) assumption. Notably, our proof only requires LWE with polynomial hardness and does not require complexity leveraging.

We follow this by describing multiple applications of lockable obfuscation. First, we show how to transform any attribute-based encryption (ABE) scheme into one in which the attributes used to encrypt the message are hidden from any user that is not authorized to decrypt the message. (Such a system is also known as predicate encryption with one-sided security.) The only previous construction, due to Gorbunov, Vaikuntanathan and Wee, is based on a specific ABE scheme of Boneh. By enabling the transformation of any ABE scheme we can inherit different forms and features of the underlying scheme, such as multi-authority support, adaptive security from polynomial hardness, regular language policies, etc.

We also show applications of lockable obfuscation to separation and uninstantiability results. We first show how to create new separation results in circular encryption that were previously based on indistinguishability obfuscation. This yields new separations from learning with errors, including a public-key bit-encryption scheme that is IND-CPA secure but not circular secure. The tool of lockable obfuscation allows these constructions to be almost immediately realized by translation from previous indistinguishability-obfuscation-based constructions.

In a similar vein, we provide random oracle uninstantiability results for the Fujisaki-Okamoto transformation (and related transformations) from lockable obfuscation combined with fully homomorphic encryption. Again, we take advantage of the fact that previous work used indistinguishability obfuscation to obfuscate programs in a form that can easily be translated to lockable obfuscation.
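To pin down the syntax, here is a functionality-only reference in Python. It hides nothing about P and is emphatically not the paper's LWE-based construction; it only demonstrates the input/output behaviour of Obf and of evaluation:

```python
import hashlib, os

def Obf(P, msg, lck):
    """Functionality only: the returned program outputs msg iff
    P(x) == lck, and a rejecting symbol otherwise. A real lockable
    obfuscator must also hide P; this toy provides none of that."""
    salt = os.urandom(16)
    tag = hashlib.sha256(salt + repr(lck).encode()).digest()

    def oP(x):
        if hashlib.sha256(salt + repr(P(x)).encode()).digest() == tag:
            return msg
        return None                      # the rejecting symbol
    return oP

# Usage: the evaluator learns msg exactly on inputs where P(x) == lck.
oP = Obf(P=lambda x: x * x, msg="secret", lck=49)
assert oP(7) == "secret"
assert oP(6) is None
```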
{"title":"Lockable Obfuscation","authors":"Rishab Goyal, Venkata Koppula, Brent Waters","doi":"10.1109/FOCS.2017.62","DOIUrl":"https://doi.org/10.1109/FOCS.2017.62","url":null,"abstract":"In this paper we introduce the notion of lockable obfuscation. In a lockable obfuscation scheme there exists an obfuscation algorithm Obf that takes as input a security parameter, a program P, a message msg and lock value lck and outputs an obfuscated program oP. One can evaluate the obfuscated program oP on any input x where the output of evaluation is the message msg if P(x) = lck and otherwise receives a rejecting symbol.We proceed to provide a construction of lockable obfuscation and prove it secure under the Learning with Errors (LWE) assumption. Notably, our proof only requires LWE with polynomial hardness and does not require complexity leveraging.We follow this by describing multiple applications of lockable obfuscation. First, we show how to transform any attribute-based encryption (ABE) scheme into one in which the attributes used to encrypt the message are hidden from any user that is not authorized to decrypt the message. (Such a system is also know as predicate encryption with one-sided security.) The only previous construction due to Gorbunov, Vaikuntanathan and Wee is based off of a specific ABE scheme of Boneh. By enabling the transformation of any ABE scheme we can inherent different forms and features of the underlying scheme such as: multi-authority, adaptive security from polynomial hardness, regular language policies, etc.We also show applications of lockable obfuscation to separation and uninstantiability results. We first show how to create new separation results in circular encryption that were previously based on indistinguishability obfuscation. This results in new separation results from learning with error including a public key bit encryption scheme that it IND-CPA secure and not circular secure. The tool of lockable obfuscation allows these constructions to be almost immediately realized by translation from previous indistinguishability obfuscation based constructions.In a similar vein we provide random oracle uninstantiability results of the Fujisaki-Okamoto transformation (and related transformations) from the lockable obfuscation combined with fully homomorphic encryption. Again, we take advantage that previous work used indistinguishability obfuscation that obfuscated programs in a form that could easily be translated to lockable obfuscation.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115083380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Random Θ(log n)-CNFs Are Hard for Cutting Planes
Noah Fleming, D. Pankratov, T. Pitassi, Robert Robere (doi:10.1109/FOCS.2017.19)
The random k-SAT model is the most important and well-studied distribution over k-SAT instances. It is closely connected to statistical physics and is a benchmark for satisfiability algorithms. We show that when k = Θ(log n), any Cutting Planes refutation for random k-SAT requires exponential size in the interesting regime where the number of clauses guarantees that the formula is unsatisfiable with high probability.
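For reference, the random k-SAT model fixes n variables and samples each of m clauses independently. A minimal sampler (literal and density conventions are ours):

```python
import random

def random_ksat(n, m, k, seed=None):
    """Sample a random k-SAT instance: each clause picks k distinct
    variables uniformly from n and negates each with probability 1/2.
    Literal +i is variable i, -i its negation. The abstract's regime is
    k = Theta(log n) with m large enough that the formula is
    unsatisfiable with high probability."""
    rng = random.Random(seed)
    return [
        [var if rng.random() < 0.5 else -var
         for var in rng.sample(range(1, n + 1), k)]
        for _ in range(m)
    ]

# e.g. clause density near the unsatisfiability threshold ~ 2^k * ln 2:
formula = random_ksat(n=100, m=int((2 ** 7) * 0.7 * 100), k=7, seed=0)
```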
{"title":"Random Θ(log n)-CNFs Are Hard for Cutting Planes","authors":"Noah Fleming, D. Pankratov, T. Pitassi, Robert Robere","doi":"10.1109/FOCS.2017.19","DOIUrl":"https://doi.org/10.1109/FOCS.2017.19","url":null,"abstract":"The random k-SAT model is the most important and well-studied distribution over k-SAT instances. It is closely connected to statistical physics and is a benchmark for satisfiability algorithms. We show that when k = Θ(log n), any Cutting Planes refutation for random k-SAT requires exponential size in the interesting regime where the number of clauses guarantees that the formula is unsatisfiable with high probability.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131059134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High Dimensional Expanders Imply Agreement Expanders
Irit Dinur, T. Kaufman (doi:10.1109/FOCS.2017.94)

We show that high dimensional expanders imply derandomized direct product tests, with a number of subsets that is linear in the size of the universe. Direct product tests belong to a family of tests called agreement tests, which are important components in PCP constructions and include, for example, low-degree tests such as line vs. line and plane vs. plane. For a generic hypergraph, we introduce the notion of agreement expansion, which captures the usefulness of the hypergraph for an agreement test. We show that explicit bounded-degree agreement expanders exist, based on Ramanujan complexes.
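A direct product test in its simplest form, for a generic collection of subsets (in the paper the collection comes from a high-dimensional expander; here it is just a list): each subset S carries a local assignment F[S], and the test checks two random subsets for agreement on their intersection. A sketch with our own toy data:

```python
import random

def agreement_test(F, sets, trials=1000, seed=0):
    """F maps each frozenset S to a dict {element: value}. Accept iff
    every sampled pair of subsets agrees on its intersection; the
    restrictions of a single global function g always pass."""
    rng = random.Random(seed)
    for _ in range(trials):
        S, T = rng.sample(sets, 2)
        if any(F[S][x] != F[T][x] for x in (S & T)):
            return False
    return True

# A genuine direct product of g(x) = x mod 3 passes the test.
rng = random.Random(1)
sets = [frozenset(rng.sample(range(30), 5)) for _ in range(20)]
F = {S: {x: x % 3 for x in S} for S in sets}
assert agreement_test(F, sets)
```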
{"title":"High Dimensional Expanders Imply Agreement Expanders","authors":"Irit Dinur, T. Kaufman","doi":"10.1109/FOCS.2017.94","DOIUrl":"https://doi.org/10.1109/FOCS.2017.94","url":null,"abstract":"We show that high dimensional expanders imply derandomized direct product tests, with a number of subsets that is linear in the size of the universe.Direct product tests belong to a family of tests called agreement tests that are important components in PCP constructions and include, for example, low degree tests such as line vs. line and plane vs. plane.For a generic hypergraph, we introduce the notion of agreement expansion, which captures the usefulness of the hypergraph for an agreement test. We show that explicit bounded degree agreement expanders exist, based on Ramanujan complexes.","PeriodicalId":311592,"journal":{"name":"2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131456291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}