
Latest publications: 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)

Better Unrelated Machine Scheduling for Weighted Completion Time via Random Offsets from Non-uniform Distributions
Pub Date: 2016-06-28 DOI: 10.1109/FOCS.2016.23
Sungjin Im, Shi Li
In this paper we consider the classic scheduling problem of minimizing total weighted completion time on unrelated machines when jobs have release times, i.e., R|r_ij| Σ_j w_j C_j in the three-field notation. For this problem, a 2-approximation is known, based on a novel convex-programming relaxation due to Skutella (J. ACM 2001). Whether one can improve upon this 2-approximation has been a long-standing open problem (Open Problem 8 in J. of Sched. 1999, by Schuurman and Woeginger). We answer this question in the affirmative by giving a 1.8786-approximation. We achieve this via a surprisingly simple linear program, combined with a novel rounding algorithm and analysis. A key ingredient of our algorithm is the use of random offsets sampled from non-uniform distributions. We also consider the preemptive version of the problem, i.e., R|r_ij, pmtn| Σ_j w_j C_j. We again use the idea of sampling offsets from non-uniform distributions to give the first better-than-2 approximation for this problem. This improvement also requires a configuration LP, with a variable for each job's complete schedule, along with a more careful analysis. For both the non-preemptive and preemptive versions, we break the approximation barrier of 2 for the first time.
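The key ingredient, random offsets drawn from a non-uniform distribution, is in the spirit of classical α-point rounding for completion-time scheduling. The toy sketch below is not the paper's algorithm: the density `exp_offset` and the two-job instance are our own illustrative assumptions. It rounds a fractional (LP) schedule by placing each job at the first time step where the LP has completed a randomly drawn fraction of it:

```python
import math
import random

def alpha_point_schedule(cum_frac, offset_dist, rng):
    """Order jobs by their alpha-points: the first time step at which the
    fractional (LP) schedule has completed a randomly drawn fraction of the job.

    cum_frac[j][t] = fraction of job j finished by time t in the LP solution."""
    points = []
    for j, cf in enumerate(cum_frac):
        a = offset_dist(rng)  # random offset in [0, 1), non-uniformly distributed
        t_j = next(t for t, f in enumerate(cf) if f >= a)
        points.append((t_j, j))
    points.sort()
    return [j for _, j in points]

def exp_offset(rng):
    """Sample from the (hypothetical, illustrative) density f(a) ~ e^a on (0, 1)
    by inversion: F(a) = (e^a - 1)/(e - 1)."""
    u = rng.random()
    return math.log(1.0 + u * (math.e - 1.0))

# Two toy jobs: job 0 finishes early in the LP, job 1 late.
cum = [[0.0, 0.5, 1.0, 1.0, 1.0],
       [0.0, 0.0, 0.2, 0.6, 1.0]]
print(alpha_point_schedule(cum, exp_offset, random.Random(0)))  # → [0, 1]
```

On this instance job 0's α-point always precedes (or ties with) job 1's, so the order is the same for every sampled offset; the paper's analysis chooses the offset density to control the expected weighted completion time.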
Citations: 16
Towards Strong Reverse Minkowski-Type Inequalities for Lattices
Pub Date: 2016-06-22 DOI: 10.1109/FOCS.2016.55
D. Dadush, O. Regev
We present a natural reverse Minkowski-type inequality for lattices, which gives upper bounds on the number of lattice points in a Euclidean ball in terms of sublattice determinants, and conjecture its optimal form. The conjecture exhibits a surprising wealth of connections to various areas in mathematics and computer science, including a conjecture motivated by integer programming by Kannan and Lovasz (Annals of Math. 1988), a question from additive combinatorics asked by Green, a question on Brownian motions asked by Saloff-Coste (Colloq. Math. 2010), a theorem by Milman and Pisier from convex geometry (Ann. Probab. 1987), worst-case to average-case reductions in lattice-based cryptography, and more. We present these connections, provide evidence for the conjecture, and discuss possible approaches towards a proof. Our main technical contribution is proving that our conjecture implies the ℓ2 case of the Kannan and Lovasz conjecture. The proof relies on a novel convex relaxation for the covering radius, and a rounding procedure based on "uncrossing" lattice subspaces.
Citations: 19
Settling the Complexity of Computing Approximate Two-Player Nash Equilibria
Pub Date: 2016-06-14 DOI: 10.1145/3055589.3055596
A. Rubinstein
We prove that there exists a constant ε > 0 such that, assuming the Exponential Time Hypothesis for PPAD, computing an ε-approximate Nash equilibrium in a two-player (n × n) game requires quasi-polynomial time, n^(log^(1−o(1)) n). This matches (up to the o(1) term) the algorithm of Lipton, Markakis, and Mehta [54]. Our proof relies on a variety of techniques from the study of probabilistically checkable proofs (PCP); this is the first time such ideas are used for a reduction between problems inside PPAD. En route, we also prove new hardness results for computing Nash equilibria in games with many players. In particular, we show that computing an ε-approximate Nash equilibrium in a game with n players requires 2^Ω(n) oracle queries to the payoff tensors. This resolves an open problem posed by Hart and Nisan [43], Babichenko [13], and Chen et al. [28]. In fact, our results for n-player games are stronger: they hold with respect to the (ε,δ)-WeakNash relaxation recently introduced by Babichenko et al. [15].
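For context, the Lipton-Markakis-Mehta upper bound that this lower bound matches enumerates uniform mixtures over small multisets of pure strategies (support size k = O(log n / ε²)). A brute-force sketch of that search, on a toy game (the function name and instance are ours, for illustration):

```python
import itertools
import numpy as np

def lmm_search(A, B, k, eps):
    """Brute-force search over uniform mixtures on size-k multisets of pure
    strategies (the sparse-support family of Lipton, Markakis, and Mehta)."""
    n, m = A.shape
    for S in itertools.combinations_with_replacement(range(n), k):
        x = np.bincount(S, minlength=n) / k
        for T in itertools.combinations_with_replacement(range(m), k):
            y = np.bincount(T, minlength=m) / k
            # eps-Nash: neither player gains more than eps by deviating.
            if (A @ y).max() - x @ A @ y <= eps and (x @ B).max() - x @ B @ y <= eps:
                return x, y
    return None

# Matching pennies: the unique equilibrium is uniform play by both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = lmm_search(A, -A, k=2, eps=1e-9)
print(x, y)  # both uniform: [0.5 0.5] [0.5 0.5]
```

There are n^O(k) candidate supports per player, so with k = O(log n / ε²) this search runs in quasi-polynomial time, exactly the regime the paper's lower bound shows is necessary.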
Citations: 123
Hopsets with Constant Hopbound, and Applications to Approximate Shortest Paths
Pub Date: 2016-05-15 DOI: 10.1109/FOCS.2016.22
Michael Elkin, Ofer Neiman
A (β, ε)-hopset for a weighted undirected n-vertex graph G = (V, E) is a set of edges whose addition to the graph guarantees that every pair of vertices has a path between them that contains at most β edges and whose length is within a factor 1 + ε of the shortest-path distance. In her seminal paper, Cohen [8, JACM 2000] introduced the notion of hopsets in the context of parallel computation of approximate shortest paths, and since then it has found numerous applications in various other settings, such as dynamic graph algorithms, distributed computing, and the streaming model. Cohen [8] devised efficient algorithms for constructing hopsets with a polylogarithmic (in n) number of hops. Her constructions have remained the state of the art since the publication of her paper in STOC'94, i.e., for more than two decades. In this paper we exhibit the first construction of sparse hopsets with a constant number of hops. We also devise efficient algorithms for hopsets in various computational settings, improving the best known constructions. Generally, our hopsets strictly outperform the hopsets of [8], both in terms of their parameters and in terms of the resources required to construct them. We demonstrate the applicability of our results to the fundamental problem of computing approximate shortest paths from s sources. Our results improve the running time for this problem in the parallel, distributed, and streaming models, for a vast range of s.
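The hopset definition can be checked mechanically: add the candidate hopset edges, recompute distances restricted to β hops, and compare with the true distances. A small self-contained checker (function names and the path example are ours, for illustration):

```python
import itertools

def hop_limited_dists(n, edges, beta):
    """All-pairs distances using walks of at most beta edges (Bellman-Ford DP).
    edges is a list of undirected weighted edges (a, b, w)."""
    INF = float("inf")
    d = []
    for u in range(n):
        cur = [0.0 if v == u else INF for v in range(n)]
        for _ in range(beta):
            nxt = cur[:]
            for a, b, w in edges:
                if cur[a] + w < nxt[b]: nxt[b] = cur[a] + w
                if cur[b] + w < nxt[a]: nxt[a] = cur[b] + w
            cur = nxt
        d.append(cur)
    return d

def is_hopset(n, edges, hopset, beta, eps):
    """True iff beta-hop distances in G + hopset are within (1+eps) of exact."""
    exact = hop_limited_dists(n, edges, n)  # n hops suffice for true distances
    approx = hop_limited_dists(n, edges + hopset, beta)
    return all(approx[u][v] <= (1 + eps) * exact[u][v]
               for u, v in itertools.combinations(range(n), 2))

# Path on 5 vertices: a single exact-length shortcut edge (0,4) makes every
# pair reachable in at most 3 hops with no stretch.
path = [(i, i + 1, 1.0) for i in range(4)]
print(is_hopset(5, path, [(0, 4, 4.0)], beta=3, eps=0.0))  # → True
```

Without the shortcut, the pair (0, 3) needs 3 hops, so the empty hopset fails the same check at β = 2.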
Citations: 77
Popular Conjectures as a Barrier for Dynamic Planar Graph Algorithms
Pub Date: 2016-05-12 DOI: 10.1109/FOCS.2016.58
Amir Abboud, Søren Dahlgaard
The dynamic shortest paths problem on planar graphs asks us to preprocess a planar graph G so that we may support insertions and deletions of edges in G, as well as distance queries between any two nodes u, v, subject to the constraint that the graph remains planar at all times. This problem has been extensively studied in both the theory and experimental communities over the past decades. The best known algorithm performs queries and updates in Õ(n^(2/3)) time, based on ideas of a seminal paper by Fakcharoenphol and Rao [FOCS'01]. A (1+ε)-approximation algorithm of Abraham et al. [STOC'12] performs updates and queries in Õ(√n) time. An algorithm with a more practical O(polylog(n)) runtime would be a major breakthrough. However, such runtimes are only known for a (1+ε)-approximation in a model where only restricted weight updates are allowed (Abraham et al. [SODA'16]), or for easier problems like connectivity. In this paper, we follow a recent and very active line of work on showing lower bounds for polynomial-time problems based on popular conjectures, obtaining the first such results for natural problems in planar graphs. Such results were previously out of reach due to the highly non-planar nature of known reductions and the impossibility of "planarizing gadgets". We introduce a new framework inspired by techniques from the literature on distance labelling schemes and on parameterized complexity. Using our framework, we show that no algorithm for dynamic shortest paths or maximum-weight bipartite matching in planar graphs can support both updates and queries in amortized O(n^(1/2−ε)) time, for any ε > 0, unless the classical all-pairs shortest paths problem can be solved in truly subcubic time, which is widely believed to be impossible. We extend these results to obtain strong lower bounds for other related problems, as well as for possible trade-offs between query and update time. Interestingly, our lower bounds hold even in very restrictive models where only weight updates are allowed.
Citations: 68
An Algorithm for Komlós Conjecture Matching Banaszczyk's Bound
Pub Date: 2016-05-10 DOI: 10.1109/FOCS.2016.89
N. Bansal, D. Dadush, S. Garg
We consider the problem of finding a low-discrepancy coloring for sparse set systems where each element lies in at most t sets. We give an efficient algorithm that finds a coloring with discrepancy O((t log n)^(1/2)), matching the best known non-constructive bound for the problem due to Banaszczyk. The previous algorithms only achieved an O(t^(1/2) log n) bound. Our result also extends to the more general Komlós setting and gives an algorithmic O(log^(1/2) n) bound.
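Discrepancy here is the maximum imbalance of a ±1 coloring over the sets of the system. A tiny checker on a toy sparse system (the instance and the hand-crafted coloring are ours; each element lies in at most t = 2 sets, and a perfectly balanced coloring happens to exist):

```python
def discrepancy(sets, coloring):
    """Largest |sum of +/-1 colors| over any set in the system."""
    return max(abs(sum(coloring[e] for e in s)) for s in sets)

# Toy sparse system on 8 elements: each element appears in at most t = 2 sets.
sets = [{0, 1, 2, 3}, {4, 5, 6, 7}, {0, 4}, {1, 5}, {2, 6}, {3, 7}]

balanced = [1, 1, -1, -1, -1, -1, 1, 1]  # hand-crafted balanced coloring
print(discrepancy(sets, balanced))       # → 0

all_plus = [1] * 8                       # the trivial coloring is much worse
print(discrepancy(sets, all_plus))       # → 4
```

The algorithmic question the paper settles is how to find colorings with discrepancy O((t log n)^(1/2)) efficiently on every such instance, not just verify one.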
Citations: 57
Approximate Gaussian Elimination for Laplacians - Fast, Sparse, and Simple
Pub Date: 2016-05-08 DOI: 10.1109/FOCS.2016.68
Rasmus Kyng, Sushant Sachdeva
We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.
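The exact step being subsampled is classical: pivoting a vertex out of a graph Laplacian replaces its star by a weighted clique on its neighbors (the Schur complement). The sketch below shows only this exact elimination step and verifies it against the algebraic Schur complement; the paper's contribution, replacing the dense clique by a random sample, is not shown (function names and the 4-vertex instance are ours):

```python
import numpy as np
from itertools import combinations

def laplacian(n, edges):
    """Graph Laplacian of an undirected weighted graph given as {(a, b): w}."""
    L = np.zeros((n, n))
    for (a, b), w in edges.items():
        L[a, a] += w; L[b, b] += w
        L[a, b] -= w; L[b, a] -= w
    return L

def eliminate(edges, u):
    """One exact Gaussian-elimination step on a Laplacian: remove vertex u and
    add, for each pair of its neighbors i, j, an edge of weight
    w_ui * w_uj / deg_w(u)."""
    star, rest = {}, {}
    for (a, b), w in edges.items():
        if u in (a, b):
            v = b if a == u else a
            star[v] = star.get(v, 0.0) + w
        else:
            rest[(a, b)] = rest.get((a, b), 0.0) + w
    W = sum(star.values())  # weighted degree of u
    for i, j in combinations(sorted(star), 2):
        rest[(i, j)] = rest.get((i, j), 0.0) + star[i] * star[j] / W
    return rest

edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 3.0, (0, 3): 4.0}
L = laplacian(4, edges)
# Schur complement onto vertices {0, 1, 2} must equal the clique-based graph.
schur = L[:3, :3] - np.outer(L[:3, 3], L[3, :3]) / L[3, 3]
print(np.allclose(laplacian(3, eliminate(edges, 3)), schur))  # → True
```

Eliminating a degree-d vertex exactly adds Θ(d²) clique edges, which is why the paper's sparsified, sampled version is needed for a nearly linear time solver.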
Citations: 164
Separations in Communication Complexity Using Cheat Sheets and Information Complexity
Pub Date: 2016-05-04 DOI: 10.1109/FOCS.2016.66
Anurag Anshu, Aleksandrs Belovs, S. Ben-David, Mika Göös, Rahul Jain, Robin Kothari, Troy Lee, M. Santha
While exponential separations are known between quantum and randomized communication complexity for partial functions (Raz, STOC 1999), the best known separation between these measures for a total function is quadratic, witnessed by the disjointness function. We give the first super-quadratic separation between quantum and randomized communication complexity for a total function, giving an example exhibiting a power 2.5 gap. We further present a 1.5 power separation between exact quantum and randomized communication complexity, improving on the previous ≈ 1.15 separation by Ambainis (STOC 2013). Finally, we present a nearly optimal quadratic separation between randomized communication complexity and the logarithm of the partition number, improving upon the previous best power 1.5 separation due to Göös, Jayram, Pitassi, and Watson. Our results are the communication analogues of separations in query complexity proved using the recent cheat sheet framework of Aaronson, Ben-David, and Kothari (STOC 2016). Our main technical results are randomized communication and information complexity lower bounds for a family of functions, called lookup functions, that generalize and port the cheat sheet framework to communication complexity.
Citations: 22
Fully Dynamic Maximal Matching in Constant Update Time
Pub Date: 2016-04-28 DOI: 10.1109/FOCS.2016.43
Shay Solomon
Baswana, Gupta and Sen [FOCS'11] showed that fully dynamic maximal matching can be maintained in general graphs with logarithmic amortized update time. More specifically, starting from an empty graph on n fixed vertices, they devised a randomized algorithm for maintaining maximal matching over any sequence of t edge insertions and deletions with a total runtime of O(t log n) in expectation and O(t log n + n log^2 n) with high probability. Whether or not this runtime bound can be improved towards O(t) has remained an important open problem. Despite significant research efforts, this question has resisted numerous attempts at resolution even for basic graph families such as forests. In this paper, we resolve the question in the affirmative, by presenting a randomized algorithm for maintaining maximal matching in general graphs with constant amortized update time. The optimal runtime bound O(t) of our algorithm holds both in expectation and with high probability. As an immediate corollary, we can maintain 2-approximate vertex cover with constant amortized update time. This result is essentially the best one can hope for (under the unique games conjecture) in the context of dynamic approximate vertex cover, culminating a long line of research. Our algorithm builds on Baswana et al.'s algorithm, but is inherently different and arguably simpler. As an implication of our simplified approach, the space usage of our algorithm is linear in the (dynamic) graph size, while the space usage of Baswana et al.'s algorithm is always at least Ω(n log n). Finally, we present applications to approximate weighted matchings and to distributed networks.
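For contrast with the deletion case that the paper solves, the insertion-only case is elementary: greedily match a newly inserted edge whenever both endpoints are free. That is O(1) worst-case per insertion and keeps the matching maximal. A toy sketch of only this easy half (the class and instance are ours; handling deletions in constant amortized time is the paper's actual contribution and is omitted):

```python
class GreedyMatcher:
    """Insertion-only maximal matching in O(1) per edge insertion."""

    def __init__(self):
        self.mate = {}  # vertex -> its matched partner

    def insert(self, u, v):
        # Match the new edge only if both endpoints are currently free;
        # this invariant keeps the matching maximal over all inserted edges.
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def is_matched(self, u):
        return u in self.mate

m = GreedyMatcher()
for e in [(1, 2), (2, 3), (3, 4)]:
    m.insert(*e)
print(sorted(m.mate.items()))  # → [(1, 2), (2, 1), (3, 4), (4, 3)]
```

Edge (2, 3) is correctly skipped because vertex 2 is already matched, and the result is maximal: every inserted edge has at least one matched endpoint. A deletion can unmatch a vertex whose other potential partners were skipped long ago, which is exactly where the difficulty lies.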
Shay Solomon, "Fully Dynamic Maximal Matching in Constant Update Time," 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS). Pub Date: 2016-04-28, DOI: 10.1109/FOCS.2016.43.
Citations: 124
Agnostic Estimation of Mean and Covariance
Pub Date : 2016-04-24 DOI: 10.1109/FOCS.2016.76
Kevin A. Lai, Anup B. Rao, S. Vempala
We consider the problem of estimating the mean and covariance of a distribution from i.i.d. samples in the presence of a fraction of malicious noise. This is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when a fraction of data is adversarially corrupted, agnostically learning mixtures, agnostic ICA, etc. We present polynomial-time algorithms to estimate the mean and covariance with error guarantees in terms of information-theoretic lower bounds. As a corollary, we also obtain an agnostic algorithm for Singular Value Decomposition.
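The contrast between a naive estimator and a noise-tolerant one can be seen with a simple baseline. The sketch below uses a coordinate-wise trimmed mean — a much weaker estimator than the paper's (its error grows with the dimension) and not the authors' algorithm; the 10% corruption setup and all names are illustrative.

```python
import random

def trimmed_mean(samples, trim=0.15):
    """Coordinate-wise trimmed mean: drop the `trim` fraction of smallest
    and largest values in each coordinate, then average the rest. A simple
    baseline robust to a small adversarial fraction -- NOT the agnostic
    estimator of Lai, Rao and Vempala."""
    d = len(samples[0])
    est = []
    for j in range(d):
        col = sorted(s[j] for s in samples)
        k = int(trim * len(col))
        core = col[k:len(col) - k] if k else col
        est.append(sum(core) / len(core))
    return est

random.seed(0)
clean = [[random.gauss(5.0, 1.0), random.gauss(-2.0, 1.0)] for _ in range(900)]
noise = [[1000.0, 1000.0]] * 100                  # 10% malicious samples
data = clean + noise

naive = [sum(s[j] for s in data) / len(data) for j in range(2)]
robust = trimmed_mean(data)
# the naive mean is dragged far from (5, -2); the trimmed mean stays close
```

Trimming makes the one-dimensional marginals robust, but in high dimensions an adversary can hide corruption in a direction where no single coordinate looks extreme — closing that gap with only an information-theoretic loss is what the paper's algorithms achieve.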
Citations: 309
Journal
2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)