In this paper we consider the classic scheduling problem of minimizing total weighted completion time on unrelated machines when jobs have release times, i.e., R|r_ij| Σ_j w_j C_j in the three-field notation. For this problem, a 2-approximation is known, based on a novel convex programming relaxation (Skutella, J. ACM 2001). It has been a long-standing open problem whether one can improve upon this 2-approximation (Open Problem 8 in Schuurman and Woeginger, J. of Sched. 1999). We answer this question in the affirmative by giving a 1.8786-approximation. We achieve this via a surprisingly simple linear program, combined with a novel rounding algorithm and analysis. A key ingredient of our algorithm is the use of random offsets sampled from non-uniform distributions. We also consider the preemptive version of the problem, i.e., R|r_ij, pmtn| Σ_j w_j C_j. We again use the idea of sampling offsets from non-uniform distributions to give the first better-than-2 approximation for this problem. This improvement also requires the use of a configuration LP, with a variable for each complete schedule of a job, along with a more careful analysis. For both the non-preemptive and preemptive versions, we break the approximation barrier of 2 for the first time.
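To make the objective concrete, here is a minimal sketch (Python, on a hypothetical instance) that evaluates Σ_j w_j C_j for a fixed nonpreemptive assignment of jobs to machines; the paper's actual contribution, the LP rounding via random offsets, is not reproduced here.

```python
# Evaluate the objective of R|r_ij| sum_j w_j C_j for a fixed assignment:
# each machine processes its jobs in the given order, starting each job no
# earlier than its (machine-dependent) release time r_ij. The instance below
# is hypothetical and only illustrates the objective being approximated.

def weighted_completion_time(schedule, release, proc, weight):
    """schedule: machine -> list of jobs in processing order;
    release[(i, j)] = r_ij, proc[(i, j)] = p_ij, weight[j] = w_j."""
    total = 0.0
    for i, jobs in schedule.items():
        t = 0.0
        for j in jobs:
            t = max(t, release[(i, j)]) + proc[(i, j)]  # completion time C_j
            total += weight[j] * t
    return total

release = {("m1", 1): 0.0, ("m1", 3): 0.0, ("m2", 2): 1.0}
proc = {("m1", 1): 2.0, ("m1", 3): 4.0, ("m2", 2): 1.0}
weight = {1: 3.0, 2: 1.0, 3: 2.0}
print(weighted_completion_time({"m1": [1, 3], "m2": [2]}, release, proc, weight))
# job 1 completes at 2, job 3 at 6, job 2 at 2: objective = 3*2 + 2*6 + 1*2 = 20
```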
{"title":"Better Unrelated Machine Scheduling for Weighted Completion Time via Random Offsets from Non-uniform Distributions","authors":"Sungjin Im, Shi Li","doi":"10.1109/FOCS.2016.23","DOIUrl":"https://doi.org/10.1109/FOCS.2016.23","url":null,"abstract":"In this paper we consider the classic scheduling problem of minimizing total weighted completion time on unrelated machines when jobs have release times, i.e, R|rij| Σj wjCj using the three-field notation. For this problem, a 2-approximation is known based on a novel convex programming (J. ACM 2001 by Skutella). It has been a long standing open problem if one can improve upon this 2-approximation (Open Problem 8 in J. of Sched. 1999 by Schuurman and Woeginger). We answer this question in the affirmative by giving a 1.8786-approximation. We achieve this via a surprisingly simple linear programming, but a novel rounding algorithm and analysis. A key ingredient of our algorithm is the use of random offsets sampled from non-uniform distributions. We also consider the preemptive version of the problem, i.e, R|rij, pmtn|ΣjwjCj. We again use the idea of sampling offsets from non-uniform distributions to give the first better than 2-approximation for this problem. This improvement also requires use of a configuration LP with variables for each job's complete schedules along with more careful analysis. For both non-preemptive and preemptive versions, we break the approximation barrier of 2 for the first time.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131392742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a natural reverse Minkowski-type inequality for lattices, which gives upper bounds on the number of lattice points in a Euclidean ball in terms of sublattice determinants, and conjecture its optimal form. The conjecture exhibits a surprising wealth of connections to various areas in mathematics and computer science, including a conjecture motivated by integer programming by Kannan and Lovasz (Annals of Math. 1988), a question from additive combinatorics asked by Green, a question on Brownian motions asked by Saloff-Coste (Colloq. Math. 2010), a theorem by Milman and Pisier from convex geometry (Ann. Probab. 1987), worst-case to average-case reductions in lattice-based cryptography, and more. We present these connections, provide evidence for the conjecture, and discuss possible approaches towards a proof. Our main technical contribution is proving that our conjecture implies the ℓ_2 case of the Kannan and Lovasz conjecture. The proof relies on a novel convex relaxation for the covering radius, and a rounding procedure based on "uncrossing" lattice subspaces.
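The quantity the inequality bounds is elementary to state; the following brute-force sketch (Python, hypothetical basis and radius) counts the points of a lattice L = B·Z^n inside a Euclidean ball, i.e., the quantity the conjectured inequality upper-bounds via sublattice determinants.

```python
import itertools
import numpy as np

# Brute-force count of points of the lattice L = {B z : z in Z^n} inside a
# Euclidean ball of radius r. Basis B, radius, and enumeration box are
# hypothetical; the box must be large enough to cover the ball for the
# count to be exact.

def lattice_points_in_ball(B, r, box=8):
    n = B.shape[1]
    count = 0
    for z in itertools.product(range(-box, box + 1), repeat=n):
        if np.linalg.norm(B @ np.array(z, dtype=float)) <= r:
            count += 1
    return count

B = np.array([[1.0, 0.5], [0.0, 1.0]])  # a basis of a rank-2 lattice
print(lattice_points_in_ball(B, 2.0))   # includes the origin
```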
{"title":"Towards Strong Reverse Minkowski-Type Inequalities for Lattices","authors":"D. Dadush, O. Regev","doi":"10.1109/FOCS.2016.55","DOIUrl":"https://doi.org/10.1109/FOCS.2016.55","url":null,"abstract":"We present a natural reverse Minkowski-type inequality for lattices, which gives upper bounds on the number of lattice points in a Euclidean ball in terms of sublattice determinants, and conjecture its optimal form. The conjecture exhibits a surprising wealth of connections to various areas in mathematics and computer science, including a conjecture motivated by integer programming by Kannan and Lovasz (Annals of Math. 1988), a question from additive combinatorics asked by Green, a question on Brownian motions asked by Saloff-Coste (Colloq. Math. 2010), a theorem by Milman and Pisier from convex geometry (Ann. Probab. 1987), worst-case to average-case reductions in lattice-based cryptography, and more. We present these connections, provide evidence for the conjecture, and discuss possible approaches towards a proof. Our main technical contribution is in proving that our conjecture implies the l2 case of the Kannan and Lovasz conjecture. The proof relies on a novel convex relaxation for the covering radius, and a rounding procedure based on \"uncrossing\" lattice subspaces.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116056188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove that there exists a constant ε > 0 such that, assuming the Exponential Time Hypothesis for PPAD, computing an ε-approximate Nash equilibrium in a two-player (n × n) game requires quasi-polynomial time, n^{log^{1-o(1)} n}. This matches (up to the o(1) term) the algorithm of Lipton, Markakis, and Mehta [54]. Our proof relies on a variety of techniques from the study of probabilistically checkable proofs (PCP); this is the first time that such ideas are used for a reduction between problems inside PPAD. En route, we also prove new hardness results for computing Nash equilibria in games with many players. In particular, we show that computing an ε-approximate Nash equilibrium in a game with n players requires 2^{Ω(n)} oracle queries to the payoff tensors. This resolves an open problem posed by Hart and Nisan [43], Babichenko [13], and Chen et al. [28]. In fact, our results for n-player games are stronger: they hold with respect to the (ε,δ)-WeakNash relaxation recently introduced by Babichenko et al. [15].
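While the hardness is about finding an equilibrium, the verification side is straightforward, and seeing it spelled out clarifies what the Lipton-Markakis-Mehta algorithm enumerates over sparse supports; a minimal sketch (Python/NumPy, hypothetical game):

```python
import numpy as np

# Check whether mixed strategies (x, y) form an eps-approximate Nash
# equilibrium of a bimatrix game (A, B): neither player can gain more than
# eps by deviating to any pure strategy.

def is_eps_nash(A, B, x, y, eps):
    """A[i, j], B[i, j]: row/column player payoffs; x, y: mixed strategies."""
    row_value = x @ A @ y
    col_value = x @ B @ y
    row_best = np.max(A @ y)      # best pure deviation for the row player
    col_best = np.max(x @ B)      # best pure deviation for the column player
    return row_best - row_value <= eps and col_best - col_value <= eps

A = np.array([[1.0, 0.0], [0.0, 1.0]])   # coordination game
B = A.copy()
x = y = np.array([0.5, 0.5])
print(is_eps_nash(A, B, x, y, eps=0.1))  # True: (x, y) is an exact equilibrium
```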
{"title":"Settling the Complexity of Computing Approximate Two-Player Nash Equilibria","authors":"A. Rubinstein","doi":"10.1145/3055589.3055596","DOIUrl":"https://doi.org/10.1145/3055589.3055596","url":null,"abstract":"We prove that there exists a constant ε > 0 such that, assuming the Exponential Time Hypothesis for PPAD, computing an ε-approximate Nash equilibrium in a two-player (n × n) game requires quasi-polynomial time, nlog1-o(1) n. This matches (up to the o(1) term) the algorithm of Lipton, Markakis, and Mehta [54]. Our proof relies on a variety of techniques from the study of probabilistically checkable proofs (PCP), this is the first time that such ideas are used for a reduction between problems inside PPAD. En route, we also prove new hardness results for computing Nash equilibria in games with many players. In particular, we show that computing an ε-approximate Nash equilibrium in a game with n players requires 2Ω(n) oracle queries to the payoff tensors. This resolves an open problem posed by Hart and Nisan [43], Babichenko [13], and Chen et al. [28]. In fact, our results for n-player games are stronger: they hold with respect to the (ε,δ)-WeakNash relaxation recently introduced by Babichenko et al. [15].","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116068383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A (β, ε)-hopset for a weighted undirected n-vertex graph G = (V, E) is a set of edges whose addition to the graph guarantees that every pair of vertices has a path between them that contains at most β edges and whose length is within a factor 1 + ε of the shortest path. In her seminal paper, Cohen [8, JACM 2000] introduced the notion of hopsets in the context of parallel computation of approximate shortest paths, and since then it has found numerous applications in various other settings, such as dynamic graph algorithms, distributed computing, and the streaming model. Cohen [8] devised efficient algorithms for constructing hopsets with a number of hops polylogarithmic in n. Her constructions have remained the state of the art since the publication of her paper at STOC'94, i.e., for more than two decades. In this paper we exhibit the first construction of sparse hopsets with a constant number of hops. We also give efficient algorithms for hopsets in various computational settings, improving the best known constructions. Generally, our hopsets strictly outperform the hopsets of [8], both in terms of their parameters and in terms of the resources required to construct them. We demonstrate the applicability of our results to the fundamental problem of computing approximate shortest paths from s sources. Our results improve the running time for this problem in the parallel, distributed, and streaming models, for a vast range of s.
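The defining property is directly checkable on small instances. The sketch below (Python, hypothetical edge-list inputs) verifies that H is a (β, ε)-hopset by comparing β-hop-limited distances in G ∪ H against exact distances in G:

```python
# Edge sets G and H are dicts {(u, v): weight} over undirected edges.

def to_adj(E):
    adj = {}
    for (u, v), w in E.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    return adj

def hop_limited_dists(adj, s, beta, nodes):
    # Bellman-Ford cut off after beta rounds = shortest paths using <= beta edges.
    dist = {v: float("inf") for v in nodes}
    dist[s] = 0.0
    for _ in range(beta):
        prev = dict(dist)
        for u in nodes:
            for v, w in adj.get(u, []):
                dist[v] = min(dist[v], prev[u] + w)
    return dist

def is_hopset(G, H, beta, eps, nodes):
    adj_G, adj_GH = to_adj(G), to_adj({**G, **H})
    for s in nodes:
        # n-1 hops suffice for exact shortest paths.
        exact = hop_limited_dists(adj_G, s, len(nodes) - 1, nodes)
        short = hop_limited_dists(adj_GH, s, beta, nodes)
        if any(short[v] > (1 + eps) * exact[v]
               for v in nodes if exact[v] < float("inf")):
            return False
    return True
```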
{"title":"Hopsets with Constant Hopbound, and Applications to Approximate Shortest Paths","authors":"Michael Elkin, Ofer Neiman","doi":"10.1109/FOCS.2016.22","DOIUrl":"https://doi.org/10.1109/FOCS.2016.22","url":null,"abstract":"A (β, ∈)-hopset for a weighted undirected n-vertex graph G = (V, E) is a set of edges, whose addition to the graph guarantees that every pair of vertices has a path between them that contains at most β edges, whose length is within 1 + ∈ of the shortest path. In her seminal paper, Cohen [8, JACM 2000] introduced the notion of hopsets in the context of parallel computation of approximate shortest paths, and since then it has found numerous applications in various other settings, such as dynamic graph algorithms, distributed computing, and the streaming model. Cohen [8] devised efficient algorithms for constructing hopsets with polylogarithmic in n number of hops. Her constructions remain the state-of-the-art since the publication of her paper in STOC'94, i.e., for more than two decades. In this paper we exhibit the first construction of sparse hopsets with a constant number of hops. We also find efficient algorithms for hopsets in various computational settings, improving the best known constructions. Generally, our hopsets strictly outperform the hopsets of [8], both in terms of their parameters, and in terms of the resources required to construct them. We demonstrate the applicability of our results for the fundamental problem of computing approximate shortest paths from s sources. Our results improve the running time for this problem in the parallel, distributed and streaming models, for a vast range of s.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121474791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The dynamic shortest paths problem on planar graphs asks us to preprocess a planar graph G such that we may support insertions and deletions of edges in G, as well as distance queries between any two nodes u, v, subject to the constraint that the graph remains planar at all times. This problem has been extensively studied in both the theory and experimental communities over the past decades. The best known algorithm performs queries and updates in Õ(n^{2/3}) time, based on ideas of a seminal paper by Fakcharoenphol and Rao [FOCS'01]. A (1+ε)-approximation algorithm of Abraham et al. [STOC'12] performs updates and queries in Õ(√n) time. An algorithm with a more practical O(polylog(n)) runtime would be a major breakthrough. However, such runtimes are only known for a (1+ε)-approximation in a model where only restricted weight updates are allowed, due to Abraham et al. [SODA'16], or for easier problems like connectivity. In this paper, we follow a recent and very active line of work on showing lower bounds for polynomial-time problems based on popular conjectures, obtaining the first such results for natural problems in planar graphs. Such results were previously out of reach due to the highly non-planar nature of known reductions and the impossibility of "planarizing gadgets". We introduce a new framework which is inspired by techniques from the literature on distance labelling schemes and on parameterized complexity. Using our framework, we show that no algorithm for dynamic shortest paths or maximum weight bipartite matching in planar graphs can support both updates and queries in amortized O(n^{1/2-ε}) time, for any ε > 0, unless the classical all-pairs shortest paths problem can be solved in truly subcubic time, which is widely believed to be impossible. We extend these results to obtain strong lower bounds for other related problems, as well as for possible trade-offs between query and update time. Interestingly, our lower bounds hold even in very restrictive models where only weight updates are allowed.
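For calibration, the trivial baseline against which these bounds are measured stores the graph explicitly, makes updates O(1), and answers each query with a fresh Dijkstra run, which is O(n log n) on a planar graph since it has O(n) edges; a sketch (Python, with planarity maintenance left to the caller):

```python
import heapq

# Naive dynamic graph: O(1) updates, Dijkstra-from-scratch queries. The lower
# bound says no data structure can make *both* operations ~n^(1/2 - eps),
# under the APSP conjecture.

class NaiveDynamicGraph:
    def __init__(self):
        self.adj = {}   # vertex -> {neighbor: weight}

    def insert(self, u, v, w):
        self.adj.setdefault(u, {})[v] = w
        self.adj.setdefault(v, {})[u] = w

    def delete(self, u, v):
        self.adj.get(u, {}).pop(v, None)
        self.adj.get(v, {}).pop(u, None)

    def distance(self, s, t):
        dist, pq = {s: 0.0}, [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, w in self.adj.get(u, {}).items():
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return float("inf")
```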
{"title":"Popular Conjectures as a Barrier for Dynamic Planar Graph Algorithms","authors":"Amir Abboud, Søren Dahlgaard","doi":"10.1109/FOCS.2016.58","DOIUrl":"https://doi.org/10.1109/FOCS.2016.58","url":null,"abstract":"The dynamic shortest paths problem on planar graphs asks us to preprocess a planar graph G such that we may support insertions and deletions of edges in G as well as distance queries between any two nodes u, v subject to the constraint that the graph remains planar at all times. This problem has been extensively studied in both the theory and experimental communities over the past decades. The best known algorithm performs queries and updates in Õ(n2/3) time, based on ideas of a seminal paper by Fakcharoenphol and Rao [FOCS'01]. A (1+ε)-approximation algorithm of Abraham et al. [STOC'12] performs updates and queries in Õ(√n) time. An algorithm with a more practical O(polylog(n)) runtime would be a major breakthrough. However, such runtimes are only known for a (1+ε)-approximation in a model where only restricted weight updates are allowed due to Abraham et al. [SODA'16], or for easier problems like connectivity. In this paper, we follow a recent and very active line of work on showing lower bounds for polynomial time problems based on popular conjectures, obtaining the first such results for natural problems in planar graphs. Such results were previously out of reach due to the highly non-planar nature of known reductions and the impossibility of \"planarizing gadgets\". We introduce a new framework which is inspired by techniques from the literatures on distance labelling schemes and on parameterized complexity. Using our framework, we show that no algorithm for dynamic shortest paths or maximum weight bipartite matching in planar graphs can support both updates and queries in amortized O(n1/2-ε) time, for any ε>0, unless the classical all-pairs-shortest-paths problem can be solved in truly subcubic time, which is widely believed to be impossible. We extend these results to obtain strong lower bounds for other related problems as well as for possible trade-offs between query and update time. Interestingly, our lower bounds hold even in very restrictive models where only weight updates are allowed.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128508419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the problem of finding a low-discrepancy coloring for sparse set systems where each element lies in at most t sets. We give an efficient algorithm that finds a coloring with discrepancy O((t log n)^{1/2}), matching the best known non-constructive bound for the problem due to Banaszczyk. The previous algorithms only achieved an O(t^{1/2} log n) bound. Our result also extends to the more general Komlós setting and gives an algorithmic O(log^{1/2} n) bound.
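The discrepancy of a coloring is simple to evaluate, which the following sketch does on a hypothetical sparse set system with t = 2 (evaluation only; the algorithmic content of the paper is in finding the coloring):

```python
# Discrepancy of a +/-1 coloring chi of a set system: the maximum over sets S
# of |sum_{i in S} chi(i)|. Sparsity t means each element lies in at most t sets.

def discrepancy(sets, chi):
    return max(abs(sum(chi[i] for i in S)) for S in sets)

sets = [{0, 1, 2}, {1, 3}, {0, 2, 3}]       # each element lies in exactly 2 sets
chi = {0: +1, 1: -1, 2: -1, 3: +1}
print(discrepancy(sets, chi))               # 1
```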
{"title":"An Algorithm for Komlós Conjecture Matching Banaszczyk's Bound","authors":"N. Bansal, D. Dadush, S. Garg","doi":"10.1109/FOCS.2016.89","DOIUrl":"https://doi.org/10.1109/FOCS.2016.89","url":null,"abstract":"We consider the problem of finding a low discrepancy coloring for sparse set systems where each element lies in at most t sets. We give an efficient algorithm that finds a coloring with discrepancy O((t log n)1/2), matching the best known non-constructive bound for the problem due to Banaszczyk. The previous algorithms only achieved an O(t1/2 log n) bound. Our result also extends to the more general Komlós setting and gives an algorithmic O(log1/2 n) bound.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"177 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114337422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.
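In graph language, one step of exact Gaussian elimination on a Laplacian replaces the star of the eliminated vertex with a weighted clique on its neighbors, and the resulting Schur complement is again a Laplacian; the paper's algorithm keeps the factor sparse by sampling a few edges from this clique rather than adding it wholesale. A sketch of the exact step (Python, hypothetical graph; the sampling is not reproduced):

```python
# One exact elimination step on a graph Laplacian, in graph form: eliminating
# vertex v adds, for each pair of neighbors (a, b), an edge of weight
# w(v,a) * w(v,b) / deg_w(v), where deg_w(v) is v's total incident weight.

def eliminate(graph, v):
    """graph: dict vertex -> dict neighbor -> weight (kept symmetric).
    Returns the Schur complement graph on the remaining vertices."""
    nbrs = graph.pop(v)
    for u in nbrs:
        del graph[u][v]
    W = sum(nbrs.values())
    items = list(nbrs.items())
    for i, (a, wa) in enumerate(items):
        for b, wb in items[i + 1:]:
            graph[a][b] = graph[a].get(b, 0.0) + wa * wb / W
            graph[b][a] = graph[a][b]
    return graph

# Path a - v - b with unit weights: eliminating v yields edge (a, b) of weight 1/2.
g = {"a": {"v": 1.0}, "b": {"v": 1.0}, "v": {"a": 1.0, "b": 1.0}}
print(eliminate(g, "v"))  # {'a': {'b': 0.5}, 'b': {'a': 0.5}}
```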
{"title":"Approximate Gaussian Elimination for Laplacians - Fast, Sparse, and Simple","authors":"Rasmus Kyng, Sushant Sachdeva","doi":"10.1109/FOCS.2016.68","DOIUrl":"https://doi.org/10.1109/FOCS.2016.68","url":null,"abstract":"We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117335217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While exponential separations are known between quantum and randomized communication complexity for partial functions (Raz, STOC 1999), the best known separation between these measures for a total function is quadratic, witnessed by the disjointness function. We give the first super-quadratic separation between quantum and randomized communication complexity for a total function, exhibiting a power-2.5 gap. We further present a power-1.5 separation between exact quantum and randomized communication complexity, improving on the previous ≈ 1.15 separation by Ambainis (STOC 2013). Finally, we present a nearly optimal quadratic separation between randomized communication complexity and the logarithm of the partition number, improving upon the previous best power-1.5 separation due to Göös, Jayram, Pitassi, and Watson. Our results are the communication analogues of separations in query complexity proved using the recent cheat sheet framework of Aaronson, Ben-David, and Kothari (STOC 2016). Our main technical results are randomized communication and information complexity lower bounds for a family of functions, called lookup functions, that generalize and port the cheat sheet framework to communication complexity.
{"title":"Separations in Communication Complexity Using Cheat Sheets and Information Complexity","authors":"Anurag Anshu, Aleksandrs Belovs, S. Ben-David, Mika Göös, Rahul Jain, Robin Kothari, Troy Lee, M. Santha","doi":"10.1109/FOCS.2016.66","DOIUrl":"https://doi.org/10.1109/FOCS.2016.66","url":null,"abstract":"While exponential separations are known between quantum and randomized communication complexity for partial functions (Raz, STOC 1999), the best known separation between these measures for a total function is quadratic, witnessed by the disjointness function. We give the first super-quadratic separation between quantum and randomized communication complexity for a total function, giving an example exhibiting a power 2.5 gap. We further present a 1.5 power separation between exact quantum and randomized communication complexity, improving on the previous ≈ 1.15 separation by Ambainis (STOC 2013). Finally, we present a nearly optimal quadratic separation between randomized communication complexity and the logarithm of the partition number, improving upon the previous best power 1.5 separation due to Goos, Jayram, Pitassi, and Watson. Our results are the communication analogues of separations in query complexity proved using the recent cheat sheet framework of Aaronson, Ben-David, and Kothari (STOC 2016). Our main technical results are randomized communication and information complexity lower bounds for a family of functions, called lookup functions, that generalize and port the cheat sheet framework to communication complexity.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124028656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Baswana, Gupta and Sen [FOCS'11] showed that fully dynamic maximal matching can be maintained in general graphs with logarithmic amortized update time. More specifically, starting from an empty graph on n fixed vertices, they devised a randomized algorithm for maintaining maximal matching over any sequence of t edge insertions and deletions with a total runtime of O(t log n) in expectation and O(t log n + n log^2 n) with high probability. Whether or not this runtime bound can be improved towards O(t) has remained an important open problem. Despite significant research efforts, this question has resisted numerous attempts at resolution even for basic graph families such as forests. In this paper, we resolve the question in the affirmative, by presenting a randomized algorithm for maintaining maximal matching in general graphs with constant amortized update time. The optimal runtime bound O(t) of our algorithm holds both in expectation and with high probability. As an immediate corollary, we can maintain 2-approximate vertex cover with constant amortized update time. This result is essentially the best one can hope for (under the unique games conjecture) in the context of dynamic approximate vertex cover, culminating a long line of research. Our algorithm builds on Baswana et al.'s algorithm, but is inherently different and arguably simpler. As an implication of our simplified approach, the space usage of our algorithm is linear in the (dynamic) graph size, while the space usage of Baswana et al.'s algorithm is always at least Ω(n log n). Finally, we present applications to approximate weighted matchings and to distributed networks.
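For contrast, the obvious deterministic baseline maintains a maximal matching with O(deg) worst-case work per update by greedy rematching; the point of the paper is to beat both this and the logarithmic amortized bound of Baswana et al. A baseline sketch (Python), not Solomon's randomized algorithm:

```python
# Maintain a maximal matching under edge insertions and deletions.
# Invariant: no edge has both endpoints free.

class NaiveMaximalMatching:
    def __init__(self):
        self.adj = {}     # vertex -> set of neighbors
        self.mate = {}    # matched vertex -> its partner

    def _try_match(self, u):
        if u in self.mate:
            return
        for v in self.adj.get(u, ()):
            if v not in self.mate:          # v is free: match u with v
                self.mate[u], self.mate[v] = v, u
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u], self.mate[v] = v, u

    def delete(self, u, v):
        self.adj.get(u, set()).discard(v)
        self.adj.get(v, set()).discard(u)
        if self.mate.get(u) == v:           # a matched edge was removed:
            del self.mate[u], self.mate[v]
            self._try_match(u)              # rematch both endpoints greedily,
            self._try_match(v)              # O(deg) work, restoring maximality
```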
{"title":"Fully Dynamic Maximal Matching in Constant Update Time","authors":"Shay Solomon","doi":"10.1109/FOCS.2016.43","DOIUrl":"https://doi.org/10.1109/FOCS.2016.43","url":null,"abstract":"Baswana, Gupta and Sen [FOCS'11] showed that fully dynamic maximal matching can be maintained in general graphs with logarithmic amortized update time. More specifically, starting from an empty graph on n fixed vertices, they devised a randomized algorithm for maintaining maximal matching over any sequence of t edge insertions and deletions with a total runtime of O(t log n) in expectation and O(t log n + n log2 n) with high probability. Whether or not this runtime bound can be improved towards O(t) has remained an important open problem. Despite significant research efforts, this question has resisted numerous attempts at resolution even for basic graph families such as forests. In this paper, we resolve the question in the affirmative, by presenting a randomized algorithm for maintaining maximal matching in general graphs with constant amortized update time. The optimal runtime bound O(t) of our algorithm holds both in expectation and with high probability. As an immediate corollary, we can maintain 2-approximate vertex cover with constant amortized update time. This result is essentially the best one can hope for (under the unique games conjecture) in the context of dynamic approximate vertex cover, culminating a long line of research. Our algorithm builds on Baswana et al.'s algorithm, but is inherently different and arguably simpler. As an implication of our simplified approach, the space usage of our algorithm is linear in the (dynamic) graph size, while the space usage of Baswana et al.'s algorithm is always at least Ω(n log n). Finally, we present applications to approximate weighted matchings and to distributed networks.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"CE-27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126544399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the problem of estimating the mean and covariance of a distribution from i.i.d. samples in the presence of a fraction of malicious noise. This is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when a fraction of data is adversarially corrupted, agnostically learning mixtures, agnostic ICA, etc. We present polynomial-time algorithms to estimate the mean and covariance with error guarantees in terms of information-theoretic lower bounds. As a corollary, we also obtain an agnostic algorithm for Singular Value Decomposition.
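A quick illustration of why the agnostic setting is delicate: with an ε fraction of adversarial samples the empirical mean can be shifted arbitrarily, while a classical fallback such as the coordinate-wise median resists the corruption (though, unlike the paper's estimators, it does not match the information-theoretic error bounds in high dimensions). A hypothetical simulation (Python/NumPy), not the paper's algorithm:

```python
import numpy as np

# 90% clean Gaussian samples (true mean 0), 10% adversarial points at 50.
rng = np.random.default_rng(0)
n, d, eps = 1000, 10, 0.1
clean = rng.normal(0.0, 1.0, size=(int(n * (1 - eps)), d))
outliers = np.full((int(n * eps), d), 50.0)
X = np.vstack([clean, outliers])

print(np.abs(X.mean(axis=0)).max())        # empirical mean: biased by ~eps*50 = 5
print(np.abs(np.median(X, axis=0)).max())  # coordinate-wise median: near 0
```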
{"title":"Agnostic Estimation of Mean and Covariance","authors":"Kevin A. Lai, Anup B. Rao, S. Vempala","doi":"10.1109/FOCS.2016.76","DOIUrl":"https://doi.org/10.1109/FOCS.2016.76","url":null,"abstract":"We consider the problem of estimating the mean and covariance of a distribution from i.i.d. samples in the presence of a fraction of malicious noise. This is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when a fraction of data is adversarially corrupted, agnostically learning mixtures, agnostic ICA, etc. We present polynomial-time algorithms to estimate the mean and covariance with error guarantees in terms of information-theoretic lower bounds. As a corollary, we also obtain an agnostic algorithm for Singular Value Decomposition.","PeriodicalId":414001,"journal":{"name":"2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123861068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}