Fine-grained complexity for sparse graphs. U. Agarwal, V. Ramachandran. STOC 2018. https://doi.org/10.1145/3188745.3188888

We consider the fine-grained complexity of sparse graph problems that currently have Õ(mn) time algorithms, where m is the number of edges and n is the number of vertices in the input graph. This class includes several important path problems on both directed and undirected graphs, including APSP, MWC (Minimum Weight Cycle), Radius, Eccentricities, and BC (Betweenness Centrality). We introduce the notion of a sparse reduction, which preserves the sparsity of graphs, and we present near linear-time sparse reductions between various pairs of graph problems in the Õ(mn) class. There are many sub-cubic reductions between graph problems in the Õ(mn) class, but surprisingly few of these preserve sparsity. In the directed case, our results give a partial order on a large collection of problems in the Õ(mn) class (along with some equivalences), and many of our reductions are highly nontrivial. In the undirected case we give two nontrivial sparse reductions: from MWC to APSP, and from unweighted ANSC (all nodes shortest cycles) to unweighted APSP. We develop a new ‘bit-sampling’ method for these sparse reductions on undirected graphs, which also gives rise to improved or simpler algorithms for cycle-finding problems in undirected graphs. We formulate the notion of MWC hardness, which is based on the assumption that a minimum weight cycle in a directed graph cannot be computed in time polynomially smaller than mn. Our sparse reductions for directed path problems in the Õ(mn) class establish that several problems in this class, including 2-SiSP (second simple shortest path), s-t Replacement Paths, Radius, Eccentricities, and BC, are MWC hard. Our sparse reductions give MWC hardness a status for the Õ(mn) class similar to that of 3SUM hardness for the quadratic class, since they show sub-mn hardness for a large collection of fundamental and well-studied graph problems whose Õ(mn) time bounds have stood for over half a century. We also identify Eccentricities and BC as key problems in the Õ(mn) class that are simultaneously MWC-hard, SETH-hard, and k-DSH-hard, where SETH is the Strong Exponential Time Hypothesis and k-DSH is the hypothesis that a dominating set of size k cannot be computed in time polynomially smaller than n^k. Our framework of sparse reductions is especially relevant to real-world graphs, which tend to be sparse; for such graphs the Õ(mn) time algorithms, not the Õ(n^3) time algorithms, are the ones typically used in practice.
Collusion resistant traitor tracing from learning with errors. Rishab Goyal, Venkata Koppula, Brent Waters. STOC 2018. https://doi.org/10.1145/3188745.3188844

In this work we provide a traitor tracing construction whose ciphertexts grow polynomially in log(n), where n is the number of users, and prove it secure under the Learning with Errors (LWE) assumption. This is the first traitor tracing scheme with such parameters that is provably secure from a standard assumption. In addition to achieving new traitor tracing results, we believe our techniques push forward the broader area of computing on encrypted data under standard assumptions. Notably, traitor tracing is a substantially different problem from the other cryptographic primitives that have seen recent progress in LWE solutions. We achieve our results by first conceiving a novel approach to building traitor tracing that starts with a new form of Functional Encryption that we call Mixed FE. In a Mixed FE system the encryption algorithm is bimodal and works with either a public key or a master secret key. Ciphertexts encrypted using the public key can only encrypt one type of functionality. On the other hand, the secret-key encryption can be used to encode many different types of programs, but is only secure as long as the attacker sees a bounded number of such ciphertexts. We first show how to combine Mixed FE with Attribute-Based Encryption to achieve traitor tracing. Second, we build Mixed FE systems for polynomial-sized branching programs (which correspond to the complexity class logspace) by relying on the polynomial hardness of the LWE assumption with a super-polynomial modulus-to-noise ratio.
Lifting Nullstellensatz to monotone span programs over any field. T. Pitassi, Robert Robere. STOC 2018. https://doi.org/10.1145/3188745.3188914

We characterize the size of monotone span programs computing certain “structured” boolean functions by the Nullstellensatz degree of a related unsatisfiable Boolean formula. This yields the first exponential lower bounds for monotone span programs over arbitrary fields, the first exponential separations between monotone span programs over fields of different characteristic, and the first exponential separation between monotone span programs over arbitrary fields and monotone circuits. We also show tight quasipolynomial lower bounds on monotone span programs computing directed st-connectivity over arbitrary fields, separating monotone span programs from non-deterministic logspace and also separating monotone and non-monotone span programs over GF(2). Our results yield the same lower bounds for linear secret sharing schemes due to the previously known relationship between monotone span programs and linear secret sharing. To prove our characterization we introduce a new and general tool for lifting polynomial degree to rank over arbitrary fields.
Non-malleable secret sharing. Vipul Goyal, Ashutosh Kumar. STOC 2018. https://doi.org/10.1145/3188745.3188872

A number of works have focused on the setting where an adversary tampers with the shares of a secret sharing scheme. This includes literature on verifiable secret sharing, algebraic manipulation detection (AMD) codes, and error correcting or detecting codes in general. In this work, we initiate a systematic study of what we call non-malleable secret sharing. Very roughly, the guarantee we seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or the original secret is “destroyed” and the reconstruction outputs a string that is completely “unrelated” to the original secret. Recent exciting work on non-malleable codes in the split-state model led to constructions which can be seen as 2-out-of-2 non-malleable secret sharing schemes. These constructions have already found a number of applications in cryptography. We investigate the natural question of constructing t-out-of-n non-malleable secret sharing schemes. Such a secret sharing scheme ensures that only a set consisting of t or more shares can reconstruct the secret and, additionally, guarantees non-malleability under an attack where potentially every share may be tampered with. Techniques used for obtaining split-state non-malleable codes (or 2-out-of-2 non-malleable secret sharing) are (in some form) based on two-source extractors and seem not to generalize to our setting. Our first result is the construction of a t-out-of-n non-malleable secret sharing scheme against an adversary who arbitrarily tampers with each of the shares independently. Our construction is unconditional and features statistical non-malleability. As our main technical result, we present a t-out-of-n non-malleable secret sharing scheme in a stronger adversarial model where an adversary may jointly tamper with multiple shares. Our construction is unconditional and the adversary is allowed to jointly tamper with subsets of up to t−1 shares. We believe that the techniques introduced in our construction may be of independent interest. Inspired by the well-studied problem of perfectly secure message transmission introduced in the seminal work of Dolev et al. (J. ACM, 1993), we also initiate the study of non-malleable message transmission. Non-malleable message transmission can be seen as a natural generalization in which the goal is to ensure that the receiver either receives the original message, or the original message is essentially destroyed and the receiver receives an “unrelated” message, when the network is under the influence of an adversary who can byzantinely corrupt all the nodes in the network. As natural applications of our non-malleable secret sharing schemes, we propose constructions for non-malleable message transmission.
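To see why non-malleability is a nontrivial guarantee, note that the most basic 2-out-of-2 scheme, XOR sharing, is completely malleable: XORing a chosen difference into one share shifts the reconstructed secret by exactly that difference. A few illustrative lines of Python make the related-secret attack concrete:

```python
import os

def share(secret: bytes):
    """2-out-of-2 XOR secret sharing."""
    s1 = os.urandom(len(secret))
    s2 = bytes(a ^ b for a, b in zip(secret, s1))
    return s1, s2

def reconstruct(s1: bytes, s2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(s1, s2))

secret = b"pay $100"
s1, s2 = share(secret)

# Tampering: XOR a chosen delta into one share. The reconstruction is no
# longer the original secret, yet it is closely *related* to it -- exactly
# the outcome that non-malleable secret sharing rules out.
delta = bytes(a ^ b for a, b in zip(b"pay $100", b"pay $900"))
s1_tampered = bytes(a ^ d for a, d in zip(s1, delta))
print(reconstruct(s1_tampered, s2))  # b'pay $900'
```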
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk). Tengyu Ma. STOC 2018. https://doi.org/10.1145/3188745.3232194

Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and have even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the basics of GANs and then discuss the fundamental statistical question about GANs: assuming the training can succeed with polynomially many samples, can we have any statistical guarantees for the estimated distributions? In work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing a discriminator class with strong distinguishing power against the particular generator class (instead of against all possible generators).
Interactive coding over the noisy broadcast channel. K. Efremenko, Gillat Kol, Raghuvansh R. Saxena. STOC 2018. https://doi.org/10.1145/3188745.3188884

A set of n players, each holding a private input bit, communicate over a noisy broadcast channel. Their mutual goal is for all players to learn all inputs. In each round one of the players broadcasts a bit to all the other players, and the bit received by each player is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed? This problem was first suggested by El Gamal in 1984. In 1988, Gallager gave an elegant noise-resistant protocol requiring only O(n log log n) rounds. The problem was resolved in 2005 by a seminal paper of Goyal, Kindler, and Saks, proving that Gallager’s protocol is essentially optimal. We revisit the above noisy broadcast problem and show that O(n) rounds suffice. This is possible due to a relaxation of the model assumed by the previous works. We no longer demand that exactly one player broadcast in every round, but rather allow any number of players to broadcast. However, if it is not the case that exactly one player chooses to broadcast, each of the other players receives an adversarially chosen bit. We generalize the above result and initiate the study of interactive coding over the noisy broadcast channel. We show that any interactive protocol that works over the noiseless broadcast channel can be simulated over our restrictive noisy broadcast model with only a constant blowup in communication. Our results also establish that modern techniques for interactive coding can help us make progress on classical problems.
Clique is hard on average for regular resolution. Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, A. Razborov. STOC 2018. https://doi.org/10.1145/3188745.3188856

We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős–Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs.
Multi-collision resistance: a paradigm for keyless hash functions. Nir Bitansky, Y. Kalai, Omer Paneth. STOC 2018. https://doi.org/10.1145/3188745.3188870

We introduce a new notion of multi-collision resistance for keyless hash functions. This is a natural relaxation of collision resistance where it is hard to find multiple inputs with the same hash in the following sense: the number of colliding inputs that a polynomial-time non-uniform adversary can find is not much larger than its advice. We discuss potential candidates for this notion and study its applications. Assuming the existence of such hash functions, we resolve the long-standing question of the round complexity of zero-knowledge protocols: we construct a 3-message zero-knowledge argument against arbitrary polynomial-size non-uniform adversaries. We also improve the round complexity in several other central applications, including a 3-message succinct argument of knowledge for NP, a 4-message zero-knowledge proof, and a 5-message public-coin zero-knowledge argument. Our techniques can also be applied in the keyed setting, where we match the round complexity of known protocols while relaxing the underlying assumption from collision-resistance to keyed multi-collision resistance. The core technical contribution behind our results is a domain extension transformation from multi-collision-resistant hash functions for a fixed input length to ones with an arbitrary input length and a local opening property. The transformation is based on a combination of classical domain extension techniques, together with new information-theoretic tools. In particular, we define and construct a new variant of list-recoverable codes, which may be of independent interest.
Generalized matrix completion and algebraic natural proofs. M. Bläser, Christian Ikenmeyer, Gorav Jindal, Vladimir Lysikov. STOC 2018. https://doi.org/10.1145/3188745.3188832

Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity program. Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃ BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃ BPP is known to be a subset of the better-known complexity class MA. Thus ∃ BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial-size natural proofs for the variety of matrices with permanent zero, unless P^{#P} ⊆ ∃ BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case.
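To make the central object concrete: each matrix entry is an affine linear form, and the completion rank is the minimum rank over all assignments to the variables. Over GF(2) it can be computed by brute force over all 2^{#vars} assignments, exponential as one would expect for an NP-hard problem. A toy sketch (the entry representation is an illustrative choice):

```python
from itertools import product

def rank_gf2(mat):
    """Rank over GF(2) by Gaussian elimination; rows are lists of 0/1."""
    mat = [row[:] for row in mat]
    rank = 0
    for c in range(len(mat[0]) if mat else 0):
        piv = next((r for r in range(rank, len(mat)) if mat[r][c]), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        for r in range(len(mat)):
            if r != rank and mat[r][c]:
                mat[r] = [a ^ b for a, b in zip(mat[r], mat[rank])]
        rank += 1
    return rank

def completion_rank_gf2(entries, num_vars):
    """entries[i][j] = (c, vs): the affine form c + sum of x_v (v in vs) over
    GF(2). Minimize the rank over all 2^num_vars assignments -- brute force."""
    return min(
        rank_gf2([[(c + sum(x[v] for v in vs)) % 2 for (c, vs) in row]
                  for row in entries])
        for x in product((0, 1), repeat=num_vars)
    )

# Matrix [[x0, 1], [1, x0]]: setting x0 = 1 gives the all-ones matrix, rank 1.
entries = [[(0, (0,)), (1, ())], [(1, ()), (0, (0,))]]
print(completion_rank_gf2(entries, 1))  # 1
```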
Towards tight approximation bounds for graph diameter and eccentricities. A. Backurs, L. Roditty, Gilad Segal, V. V. Williams, Nicole Wein. STOC 2018. https://doi.org/10.1145/3188745.3188950

Among the most important graph parameters is the Diameter, the largest distance between any two vertices. There are no known very efficient algorithms for computing the Diameter exactly. Thus, much research has been devoted to how fast this parameter can be approximated. Chechik et al. [SODA 2014] showed that the Diameter can be approximated within a multiplicative factor of 3/2 in Õ(m^{3/2}) time. Furthermore, Roditty and Vassilevska W. [STOC 2013] showed that unless the Strong Exponential Time Hypothesis (SETH) fails, no O(n^{2−ε}) time algorithm can achieve an approximation factor better than 3/2 in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than 3/2. It was, however, completely plausible that a 3/2-approximation is possible in linear time. In this work we conditionally rule out such a possibility by showing that unless SETH fails, no O(m^{3/2−ε}) time algorithm can achieve an approximation factor better than 5/3. Another fundamental set of graph parameters is the Eccentricities. The Eccentricity of a vertex v is the distance between v and the farthest vertex from v. Chechik et al. [SODA 2014] showed that the Eccentricities of all vertices can be approximated within a factor of 5/3 in Õ(m^{3/2}) time, and Abboud et al. [SODA 2016] showed that no O(n^{2−ε}) algorithm can achieve better than a 5/3 approximation in sparse graphs. We show that the runtime of the 5/3-approximation algorithm is also optimal by proving that under SETH, there is no O(m^{3/2−ε}) algorithm that achieves a better than 9/5 approximation. We also show that no near-linear time algorithm can achieve a better than 2 approximation for the Eccentricities. This is the first lower bound in fine-grained complexity that addresses near-linear time computation. We show that our lower bound for near-linear time algorithms is essentially tight by giving an algorithm that approximates Eccentricities within a 2+δ factor in Õ(m/δ) time for any 0 < δ < 1. This beats all Eccentricity algorithms in Cairo et al. [SODA 2016] and is the first constant-factor approximation for Eccentricities in directed graphs. To establish the above lower bounds we study the S-T Diameter problem: given a graph and two subsets S and T of vertices, output the largest distance between a vertex in S and a vertex in T. We give new algorithms and show tight lower bounds that serve as a starting point for all other hardness results. Our lower bounds apply only to sparse graphs. We show that for dense graphs, there are near-linear time algorithms for S-T Diameter, Diameter, and Eccentricities with almost the same approximation guarantees as their Õ(m^{3/2}) counterparts, improving upon the best known algorithms for dense graphs.
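The cheapest point in this trade-off space is a single BFS: for any start vertex v, ecc(v) ≤ Diameter ≤ 2·ecc(v) by the triangle inequality, so one O(m+n) search already gives a 2-approximation to the Diameter; the Õ(m^{3/2}) algorithms above buy the improvement from 2 down to 3/2. A sketch of the one-BFS bound (illustrative, for unweighted connected undirected graphs):

```python
from collections import deque

def eccentricity(adj, s):
    """BFS eccentricity of s in an unweighted connected graph: O(m + n)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter_2_approx(adj):
    """ecc(v) <= Diameter <= 2*ecc(v) for any v, by the triangle inequality."""
    v = next(iter(adj))
    e = eccentricity(adj, v)
    return e, 2 * e  # lower and upper bounds on the true Diameter

# Path 0-1-2-3-4: true diameter 4; BFS from endpoint 0 reports bounds (4, 8).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(diameter_2_approx(adj))
```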