
Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing: Latest Publications

Fine-grained complexity for sparse graphs
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188888
U. Agarwal, V. Ramachandran
We consider the fine-grained complexity of sparse graph problems that currently have Õ(mn) time algorithms, where m is the number of edges and n is the number of vertices in the input graph. This class includes several important path problems on both directed and undirected graphs, including APSP, MWC (Minimum Weight Cycle), Radius, Eccentricities, BC (Betweenness Centrality), etc. We introduce the notion of a sparse reduction which preserves the sparsity of graphs, and we present near linear-time sparse reductions between various pairs of graph problems in the Õ(mn) class. There are many sub-cubic reductions between graph problems in the Õ(mn) class, but surprisingly few of these preserve sparsity. In the directed case, our results give a partial order on a large collection of problems in the Õ(mn) class (along with some equivalences), and many of our reductions are very nontrivial. In the undirected case we give two nontrivial sparse reductions: from MWC to APSP, and from unweighted ANSC (all nodes shortest cycles) to unweighted APSP. We develop a new ‘bit-sampling’ method for these sparse reductions on undirected graphs, which also gives rise to improved or simpler algorithms for cycle finding problems in undirected graphs. We formulate the notion of MWC hardness, which is based on the assumption that a minimum weight cycle in a directed graph cannot be computed in time polynomially smaller than mn. Our sparse reductions for directed path problems in the Õ(mn) class establish that several problems in this class, including 2-SiSP (second simple shortest path), s-t Replacement Paths, Radius, Eccentricities and BC, are MWC hard. Our sparse reductions give MWC hardness a status for the Õ(mn) class similar to 3SUM hardness for the quadratic class, since they show sub-mn hardness for a large collection of fundamental and well-studied graph problems that have maintained an Õ(mn) time bound for over half a century. We also identify Eccentricities and BC as key problems in the Õ(mn) class which are simultaneously MWC-hard, SETH-hard and k-DSH-hard, where SETH is the Strong Exponential Time Hypothesis, and k-DSH is the hypothesis that a dominating set of size k cannot be computed in time polynomially smaller than n^k. Our framework using sparse reductions is very relevant to real-world graphs, which tend to be sparse and for which the Õ(mn) time algorithms are the ones typically used in practice, and not the Õ(n^3) time algorithms.
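For intuition (not part of the paper's contribution): the Õ(mn) bound for unweighted Eccentricities, Radius, and Diameter comes from the textbook approach of running one BFS per vertex; the question studied above is whether anything polynomially faster is possible on sparse graphs. A minimal Python sketch of that baseline:

```python
from collections import deque

def bfs_distances(adj, src):
    """Unweighted single-source shortest paths in O(m + n) via BFS."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def eccentricities(adj):
    """All eccentricities of a connected unweighted graph in O(mn): one BFS per vertex."""
    return {v: max(bfs_distances(adj, v).values()) for v in adj}

# Tiny example (path graph a-b-c): radius 1, diameter 2.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
ecc = eccentricities(adj)
print(ecc, "radius =", min(ecc.values()), "diameter =", max(ecc.values()))
```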
Citations: 20
Collusion resistant traitor tracing from learning with errors
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188844
Rishab Goyal, Venkata Koppula, Brent Waters
In this work we provide a traitor tracing construction with ciphertexts that grow polynomially in log(n), where n is the number of users, and prove it secure under the Learning with Errors (LWE) assumption. This is the first traitor tracing scheme with such parameters provably secure from a standard assumption. In addition to achieving new traitor tracing results, we believe our techniques push forward the broader area of computing on encrypted data under standard assumptions. Notably, traitor tracing is a substantially different problem from other cryptographic primitives that have seen recent progress in LWE solutions. We achieve our results by first conceiving a novel approach to building traitor tracing that starts with a new form of Functional Encryption that we call Mixed FE. In a Mixed FE system the encryption algorithm is bimodal and works with either a public key or master secret key. Ciphertexts encrypted using the public key can only encrypt one type of functionality. On the other hand the secret key encryption can be used to encode many different types of programs, but is only secure as long as the attacker sees a bounded number of such ciphertexts. We first show how to combine Mixed FE with Attribute-Based Encryption to achieve traitor tracing. Second, we build Mixed FE systems for polynomial sized branching programs (which corresponds to the complexity class logspace) by relying on the polynomial hardness of the LWE assumption with super-polynomial modulus-to-noise ratio.
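As a reading aid only, the bimodal syntax of a Mixed FE scheme described above can be summarized as an interface; the method names and signatures below are illustrative assumptions, not the paper's notation:

```python
from abc import ABC, abstractmethod

class MixedFE(ABC):
    """Interface sketch of the bimodal ('mixed') functional encryption syntax
    described above. Names, arguments, and types are illustrative only."""

    @abstractmethod
    def setup(self, security_param: int):
        """Return (public_key, master_secret_key)."""

    @abstractmethod
    def pk_encrypt(self, public_key, message):
        """Public-key mode: can only encrypt the single fixed functionality."""

    @abstractmethod
    def sk_encrypt(self, master_secret_key, program):
        """Secret-key mode: encodes an arbitrary program, but security only
        holds while the attacker sees a bounded number of such ciphertexts."""

    @abstractmethod
    def key_gen(self, master_secret_key, attribute):
        """Derive a decryption key associated with an input/attribute."""

    @abstractmethod
    def decrypt(self, decryption_key, ciphertext) -> bool:
        """Evaluate the encoded functionality on the key's attribute."""
```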
Citations: 41
Lifting Nullstellensatz to monotone span programs over any field
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188914
T. Pitassi, Robert Robere
We characterize the size of monotone span programs computing certain “structured” boolean functions by the Nullstellensatz degree of a related unsatisfiable Boolean formula. This yields the first exponential lower bounds for monotone span programs over arbitrary fields, the first exponential separations between monotone span programs over fields of different characteristic, and the first exponential separation between monotone span programs over arbitrary fields and monotone circuits. We also show tight quasipolynomial lower bounds on monotone span programs computing directed st-connectivity over arbitrary fields, separating monotone span programs from non-deterministic logspace and also separating monotone and non-monotone span programs over GF(2). Our results yield the same lower bounds for linear secret sharing schemes due to the previously known relationship between monotone span programs and linear secret sharing. To prove our characterization we introduce a new and general tool for lifting polynomial degree to rank over arbitrary fields.
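For readers unfamiliar with the model, a standard definition of a monotone span program (general background, not specific to this paper):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard definition of a monotone span program (MSP), for background.
% An MSP over a field F computing f : {0,1}^n -> {0,1} is a matrix
% M in F^(m x d) with rows labeled by variables x_1, ..., x_n (positive
% literals only) and a fixed nonzero target vector t.  It accepts x iff
\[
  t \in \operatorname{span}\{\, M_r : \text{row } r \text{ is labeled by some } x_i \text{ with } x_i = 1 \,\}.
\]
% The size of the MSP is m, its number of rows; this is the measure for which
% the abstract above proves lower bounds via Nullstellensatz degree.
\end{document}
```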
Citations: 46
Non-malleable secret sharing
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188872
Vipul Goyal, Ashutosh Kumar
A number of works have focused on the setting where an adversary tampers with the shares of a secret sharing scheme. This includes literature on verifiable secret sharing, algebraic manipulation detection (AMD) codes, and, error correcting or detecting codes in general. In this work, we initiate a systematic study of what we call non-malleable secret sharing. Very roughly, the guarantee we seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or, the original secret is “destroyed” and the reconstruction outputs a string which is completely “unrelated” to the original secret. Recent exciting work on non-malleable codes in the split-state model led to constructions which can be seen as 2-out-of-2 non-malleable secret sharing schemes. These constructions have already found a number of applications in cryptography. We investigate the natural question of constructing t-out-of-n non-malleable secret sharing schemes. Such a secret sharing scheme ensures that only a set consisting of t or more shares can reconstruct the secret, and additionally guarantees non-malleability under an attack where potentially every share may be tampered with. Techniques used for obtaining split-state non-malleable codes (or 2-out-of-2 non-malleable secret sharing) are (in some form) based on two-source extractors and seem not to generalize to our setting. Our first result is the construction of a t-out-of-n non-malleable secret sharing scheme against an adversary who arbitrarily tampers each of the shares independently. Our construction is unconditional and features statistical non-malleability. As our main technical result, we present a t-out-of-n non-malleable secret sharing scheme in a stronger adversarial model where an adversary may jointly tamper multiple shares. Our construction is unconditional and the adversary is allowed to jointly tamper subsets of up to (t−1) shares. We believe that the techniques introduced in our construction may be of independent interest. Inspired by the well studied problem of perfectly secure message transmission introduced in the seminal work of Dolev et al. (J. of ACM’93), we also initiate the study of non-malleable message transmission. Non-malleable message transmission can be seen as a natural generalization in which the goal is to ensure that the receiver either receives the original message, or, the original message is essentially destroyed and the receiver receives an “unrelated” message, when the network is under the influence of an adversary who can byzantinely corrupt all the nodes in the network. As natural applications of our non-malleable secret sharing schemes, we propose constructions for non-malleable message transmission.
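For context, the baseline object being strengthened here is an ordinary t-out-of-n (Shamir) secret sharing scheme, which is private but highly malleable; the sketch below is the standard textbook scheme, not the paper's construction, and its last lines show how flipping one bit of a share silently shifts the reconstructed secret, which is exactly what non-malleability rules out:

```python
import random

P = 2**61 - 1  # a prime modulus; any prime larger than the secret works

def share(secret, t, n):
    """Standard Shamir t-out-of-n sharing: private, but offers no tamper resistance."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
# Malleability: flipping a single bit of one share changes the reconstructed
# value in a predictable way, without being detected.
x, y = shares[0]
assert reconstruct([(x, y ^ 1)] + shares[1:3]) != 123456789
```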
Citations: 75
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3232194
Tengyu Ma
Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the basics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing a discriminator class with strong distinguishing power against the particular generator class (instead of against all possible generators).
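For reference, the standard GAN training objective (due to Goodfellow et al., not specific to this talk) is the two-player minimax game:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Standard GAN minimax objective (Goodfellow et al., 2014), for reference.
\[
  \min_{G}\max_{D}\;
  \mathbb{E}_{x\sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
  +
  \mathbb{E}_{z\sim p_{z}}\bigl[\log\bigl(1-D(G(z))\bigr)\bigr]
\]
% G is the generator, D the discriminator, p_z the noise distribution.
% The generalization question above asks what this equilibrium guarantees
% about the learned distribution given only polynomially many samples.
\end{document}
```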
Citations: 7
Interactive coding over the noisy broadcast channel
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188884
K. Efremenko, Gillat Kol, Raghuvansh R. Saxena
A set of n players, each holding a private input bit, communicate over a noisy broadcast channel. Their mutual goal is for all players to learn all inputs. At each round one of the players broadcasts a bit to all the other players, and the bit received by each player is flipped with a fixed constant probability (independently for each recipient). How many rounds are needed? This problem was first suggested by El Gamal in 1984. In 1988, Gallager gave an elegant noise-resistant protocol requiring only O(n log log n) rounds. The problem got resolved in 2005 by a seminal paper of Goyal, Kindler, and Saks, proving that Gallager’s protocol is essentially optimal. We revisit the above noisy broadcast problem and show that O(n) rounds suffice. This is possible due to a relaxation of the model assumed by the previous works. We no longer demand that exactly one player broadcasts in every round, but rather allow any number of players to broadcast. However, if it is not the case that exactly one player chooses to broadcast, each of the other players gets an adversarially chosen bit. We generalize the above result and initiate the study of interactive coding over the noisy broadcast channel. We show that any interactive protocol that works over the noiseless broadcast channel can be simulated over our restrictive noisy broadcast model with only a constant blowup in communication. Our results also establish that modern techniques for interactive coding can help us make progress on the classical problems.
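To make the channel model concrete (an illustration of the model only, not the paper's protocol), one round can be simulated as follows; the flip probability ε is an arbitrary illustrative constant:

```python
import random

EPSILON = 0.1  # fixed constant flip probability (illustrative value)

def noisy_broadcast_round(sender, bit, n):
    """One round of the noisy broadcast channel: player `sender` broadcasts
    `bit`; every other player receives it flipped independently w.p. EPSILON."""
    received = {}
    for player in range(n):
        if player == sender:
            continue
        flip = random.random() < EPSILON
        received[player] = bit ^ flip
    return received

# Naive protocol: each player broadcasts its input bit exactly once.
# Each reception is wrong with probability EPSILON, so with no redundancy
# many players end up with corrupted copies; the results above show a
# cleverer O(n)-round protocol can still let everyone learn all inputs.
inputs = [random.randint(0, 1) for _ in range(8)]
views = [noisy_broadcast_round(i, inputs[i], 8) for i in range(8)]
errors = sum(views[i][j] != inputs[i] for i in range(8) for j in views[i])
print("corrupted receptions:", errors, "out of", 8 * 7)
```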
Citations: 15
Clique is hard on average for regular resolution
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188856
Albert Atserias, Ilario Bonacina, Susanna F. de Rezende, Massimo Lauria, Jakob Nordström, A. Razborov
We prove that for k ≪ n^{1/4}, regular resolution requires length n^{Ω(k)} to establish that an Erdős-Rényi graph with appropriately chosen edge density does not contain a k-clique. This lower bound is optimal up to the multiplicative constant in the exponent, and also implies unconditional n^{Ω(k)} lower bounds on running time for several state-of-the-art algorithms for finding maximum cliques in graphs.
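As background, one common propositional encoding of the statement "the graph contains a k-clique" (not necessarily the exact formula family analyzed in the paper) uses variables x_{i,v} meaning "slot i of the clique is vertex v"; the lower bound above concerns refuting such formulas on graphs that have no k-clique. A sketch of the encoding:

```python
from itertools import combinations

def clique_cnf(vertices, edges, k):
    """One common CNF encoding of 'the graph has a k-clique'.
    Variables x[(i, v)] mean 'slot i of the clique is vertex v'.
    Returns clauses as lists of signed variable ids (DIMACS-style)."""
    var = {(i, v): idx + 1 for idx, (i, v) in
           enumerate((i, v) for i in range(k) for v in vertices)}
    edge_set = {frozenset(e) for e in edges}
    clauses = []
    # Each slot is assigned at least one vertex.
    for i in range(k):
        clauses.append([var[(i, v)] for v in vertices])
    # Distinct slots cannot pick the same vertex or two non-adjacent vertices.
    for i, j in combinations(range(k), 2):
        for u in vertices:
            for v in vertices:
                if u == v or frozenset((u, v)) not in edge_set:
                    clauses.append([-var[(i, u)], -var[(j, v)]])
    return clauses

# The triangle graph has a 3-clique, so this CNF is satisfiable; on a random
# graph below the k-clique threshold it is unsatisfiable and must be refuted.
print(len(clique_cnf([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 3)), "clauses")
```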
Citations: 20
Multi-collision resistance: a paradigm for keyless hash functions
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188870
Nir Bitansky, Y. Kalai, Omer Paneth
We introduce a new notion of multi-collision resistance for keyless hash functions. This is a natural relaxation of collision resistance where it is hard to find multiple inputs with the same hash in the following sense. The number of colliding inputs that a polynomial-time non-uniform adversary can find is not much larger than its advice. We discuss potential candidates for this notion and study its applications. Assuming the existence of such hash functions, we resolve the long-standing question of the round complexity of zero knowledge protocols: we construct a 3-message zero knowledge argument against arbitrary polynomial-size non-uniform adversaries. We also improve the round complexity in several other central applications, including a 3-message succinct argument of knowledge for NP, a 4-message zero-knowledge proof, and a 5-message public-coin zero-knowledge argument. Our techniques can also be applied in the keyed setting, where we match the round complexity of known protocols while relaxing the underlying assumption from collision-resistance to keyed multi-collision resistance. The core technical contribution behind our results is a domain extension transformation from multi-collision-resistant hash functions for a fixed input length to ones with an arbitrary input length and a local opening property. The transformation is based on a combination of classical domain extension techniques, together with new information-theoretic tools. In particular, we define and construct a new variant of list-recoverable codes, which may be of independent interest.
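For reference, the basic combinatorial object behind the definition: a k-way collision for a keyless hash function H is a set of k distinct inputs with a common hash value, and the notion above asks that efficient non-uniform adversaries cannot find k-way collisions for k much larger than their advice:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A k-way collision for a keyless hash function H (basic notion, for reference).
\[
  x_1, \dots, x_k \ \text{pairwise distinct}, \qquad
  H(x_1) = H(x_2) = \cdots = H(x_k).
\]
% Ordinary collision resistance is the case k = 2; the relaxation above lets
% k grow with, but not much beyond, the adversary's non-uniform advice.
\end{document}
```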
Citations: 55
Generalized matrix completion and algebraic natural proofs
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188832
M. Bläser, Christian Ikenmeyer, Gorav Jindal, Vladimir Lysikov
Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 653–664, 2017) and independently by Grochow, Kumar, Saks and Saraf (CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich’s famous barrier result (J. Comput. Syst. Sci., 55(1): 24–35, 1997) for Boolean circuit complexity to algebraic complexity theory. Razborov and Rudich’s barrier result relies on a widely believed assumption, namely, the existence of pseudo-random generators. Unfortunately, there is no known analogous theory of pseudo-randomness in the algebraic setting. Therefore, Forbes et al. use a concept called succinct hitting sets instead. This assumption is related to polynomial identity testing, but it is currently not clear how plausible this assumption is. Forbes et al. are only able to construct succinct hitting sets against rather weak models of arithmetic circuits. Generalized matrix completion is the following problem: Given a matrix with affine linear forms as entries, find an assignment to the variables in the linear forms such that the rank of the resulting matrix is minimal. We call this rank the completion rank. Computing the completion rank is an NP-hard problem. As our first main result, we prove that it is also NP-hard to determine whether a given matrix can be approximated by matrices of completion rank ≤ b. The minimum quantity b for which this is possible is called border completion rank (similar to the border rank of tensors). Naturally, algebraic natural proofs can only prove lower bounds for such border complexity measures. Furthermore, these border complexity measures play an important role in the geometric complexity program. Using our hardness result above, we can prove the following barrier: We construct a small family of matrices with affine linear forms as entries and a bound b, such that at least one of these matrices does not have an algebraic natural proof of polynomial size against all matrices of border completion rank b, unless coNP ⊆ ∃ BPP. This is an algebraic barrier result that is based on a well-established and widely believed conjecture. The complexity class ∃ BPP is known to be a subset of the more well known complexity class MA in the literature. Thus ∃ BPP can be replaced by MA in the statements of all our results. With similar techniques, we can also prove that tensor rank is hard to approximate. Furthermore, we prove a similar result for the variety of matrices with permanent zero. There are no algebraic polynomial size natural proofs for the variety of matrices with permanent zero, unless P^{#P} ⊆ ∃ BPP. On the other hand, we are able to prove that the geometric complexity theory approach initiated by Mulmuley and Sohoni (SIAM J. Comput. 31(2): 496–526, 2001) yields proofs of polynomial size for this variety, therefore overcoming the natural proofs barrier in this case.
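A tiny worked example of completion rank (illustration only, not taken from the paper):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A 2x2 instance of generalized matrix completion, for illustration only.
\[
  M(x) =
  \begin{pmatrix}
    x & 1 \\
    1 & x
  \end{pmatrix},
  \qquad
  \det M(x) = x^2 - 1 .
\]
% For generic x the rank is 2, but the assignment x = 1 gives the all-ones
% matrix of rank 1, so the completion rank of M is 1.  Border completion rank
% would additionally allow the rank bound to be attained only in the limit of
% a sequence of assignments.
\end{document}
```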
Citations: 16
Towards tight approximation bounds for graph diameter and eccentricities
Pub Date : 2018-06-20 DOI: 10.1145/3188745.3188950
A. Backurs, L. Roditty, Gilad Segal, V. V. Williams, Nicole Wein
Among the most important graph parameters is the Diameter, the largest distance between any two vertices. There are no known very efficient algorithms for computing the Diameter exactly. Thus, much research has been devoted to how fast this parameter can be approximated. Chechik et al. [SODA 2014] showed that the diameter can be approximated within a multiplicative factor of 3/2 in Õ(m^{3/2}) time. Furthermore, Roditty and Vassilevska W. [STOC 13] showed that unless the Strong Exponential Time Hypothesis (SETH) fails, no O(n^{2−ε}) time algorithm can achieve an approximation factor better than 3/2 in sparse graphs. Thus the above algorithm is essentially optimal for sparse graphs for approximation factors less than 3/2. It was, however, completely plausible that a 3/2-approximation is possible in linear time. In this work we conditionally rule out such a possibility by showing that unless SETH fails no O(m^{3/2−ε}) time algorithm can achieve an approximation factor better than 5/3. Another fundamental set of graph parameters are the Eccentricities. The Eccentricity of a vertex v is the distance between v and the farthest vertex from v. Chechik et al. [SODA 2014] showed that the Eccentricities of all vertices can be approximated within a factor of 5/3 in Õ(m^{3/2}) time and Abboud et al. [SODA 2016] showed that no O(n^{2−ε}) algorithm can achieve better than 5/3 approximation in sparse graphs. We show that the runtime of the 5/3 approximation algorithm is also optimal by proving that under SETH, there is no O(m^{3/2−ε}) algorithm that achieves a better than 9/5 approximation. We also show that no near-linear time algorithm can achieve a better than 2 approximation for the Eccentricities. This is the first lower bound in fine-grained complexity that addresses near-linear time computation. We show that our lower bound for near-linear time algorithms is essentially tight by giving an algorithm that approximates Eccentricities within a 2+δ factor in Õ(m/δ) time for any 0 < δ < 1. This beats all Eccentricity algorithms in Cairo et al. [SODA 2016] and is the first constant factor approximation for Eccentricities in directed graphs. To establish the above lower bounds we study the S-T Diameter problem: Given a graph and two subsets S and T of vertices, output the largest distance between a vertex in S and a vertex in T. We give new algorithms and show tight lower bounds that serve as a starting point for all other hardness results. Our lower bounds apply only to sparse graphs. We show that for dense graphs, there are near-linear time algorithms for S-T Diameter, Diameter and Eccentricities, with almost the same approximation guarantees as their Õ(m^{3/2}) counterparts, improving upon the best known algorithms for dense graphs.
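For contrast with the 3/2- and 5/3-approximation algorithms discussed above, the folklore near-linear-time baseline is a single BFS: for any vertex v of a connected undirected graph, ecc(v) ≤ Diameter ≤ 2·ecc(v) by the triangle inequality, giving a 2-approximation in O(m + n) time. A minimal sketch:

```python
from collections import deque

def eccentricity(adj, src):
    """Eccentricity of `src` in a connected unweighted graph via one BFS."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter_2_approx(adj):
    """Folklore 2-approximation: ecc(v) <= Diameter <= 2 * ecc(v) by the
    triangle inequality, for any start vertex v."""
    v = next(iter(adj))
    e = eccentricity(adj, v)
    return e, 2 * e  # (lower bound, upper bound) on the true Diameter

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(diameter_2_approx(adj))  # true Diameter of this path graph is 3
```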
Citations: 51