
Proceedings of the forty-seventh annual ACM symposium on Theory of Computing: Latest Publications

Forrelation: A Problem that Optimally Separates Quantum from Classical Computing
Pub Date : 2014-11-20 DOI: 10.1145/2746539.2746547
S. Aaronson, A. Ambainis
We achieve essentially the largest possible separation between quantum and classical query complexities. We do so using a property-testing problem called Forrelation, where one needs to decide whether one Boolean function is highly correlated with the Fourier transform of a second function. This problem can be solved using 1 quantum query, yet we show that any randomized algorithm needs Ω(√N/log N) queries (improving an Ω(N^{1/4}) lower bound of Aaronson). Conversely, we show that this 1 versus Ω(√N) separation is optimal: indeed, any t-query quantum algorithm whatsoever can be simulated by an O(N^{1-1/2t})-query randomized algorithm. Thus, resolving an open question of Buhrman et al. from 2002, there is no partial Boolean function whose quantum query complexity is constant and whose randomized query complexity is linear. We conjecture that a natural generalization of Forrelation achieves the optimal t versus Ω(N^{1-1/2t}) separation for all t. As a bonus, we show that this generalization is BQP-complete. This yields what's arguably the simplest BQP-complete problem yet known, and gives a second sense in which Forrelation "captures the maximum power of quantum computation."
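As a concrete illustration (not the paper's algorithm), the forrelation quantity Φ_{f,g} = 2^{-3n/2} Σ_{x,y} f(x) (-1)^{x·y} g(y) can be evaluated by brute force for tiny n; the function names below are our own:

```python
import itertools

def forrelation(f, g, n):
    """Brute-force evaluation of Phi_{f,g} = 2^(-3n/2) * sum_{x,y} f(x) * (-1)^(x.y) * g(y)
    for f, g: {0,1}^n -> {+1,-1}. Exponential in n; illustration only."""
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        for y in itertools.product((0, 1), repeat=n):
            dot = sum(a * b for a, b in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (1.5 * n)

# f(x) = (-1)^(x1*x2) has a +/-1-valued Fourier transform equal to f itself,
# so the pair (f, f) is maximally forrelated: Phi = 1.
inner = lambda x: -1 if x[0] * x[1] else 1
print(forrelation(inner, inner, 2))  # -> 1.0
```

For contrast, two constant-1 functions give Φ = 2^{-n/2} (0.5 at n = 2), well below the "forrelated" regime.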
{"title":"Forrelation: A Problem that Optimally Separates Quantum from Classical Computing","authors":"S. Aaronson, A. Ambainis","doi":"10.1145/2746539.2746547","DOIUrl":"https://doi.org/10.1145/2746539.2746547","url":null,"abstract":"We achieve essentially the largest possible separation between quantum and classical query complexities. We do so using a property-testing problem called Forrelation, where one needs to decide whether one Boolean function is highly correlated with the Fourier transform of a second function. This problem can be solved using 1 quantum query, yet we show that any randomized algorithm needs Ω(√(N)log(N)) queries (improving an Ω(N1/4) lower bound of Aaronson). Conversely, we show that this 1 versus Ω(√(N)) separation is optimal: indeed, any t-query quantum algorithm whatsoever can be simulated by an O(N1-1/2t)-query randomized algorithm. Thus, resolving an open question of Buhrman et al. from 2002, there is no partial Boolean function whose quantum query complexity is constant and whose randomized query complexity is linear. We conjecture that a natural generalization of Forrelation achieves the optimal t versus Ω(N1-1/2t) separation for all t. As a bonus, we show that this generalization is BQP-complete. 
This yields what's arguably the simplest BQP-complete problem yet known, and gives a second sense in which Forrelation \"captures the maximum power of quantum computation.\"","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83718763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 115
The Directed Grid Theorem
Pub Date : 2014-11-20 DOI: 10.1145/2746539.2746586
K. Kawarabayashi, S. Kreutzer
The grid theorem, originally proved in 1986 by Robertson and Seymour in Graph Minors V, is one of the most central results in the study of graph minors. It has found numerous applications in algorithmic graph structure theory, for instance in bidimensionality theory, and it is the basis for several other structure theorems developed in the graph minors project. In the mid-90s, Reed and, independently, Johnson, Robertson, Seymour and Thomas conjectured an analogous theorem for directed graphs, i.e., the existence of a function f : N -> N such that every digraph of directed tree-width at least f(k) contains a directed grid of order k. In an unpublished manuscript from 2001, Johnson, Robertson, Seymour and Thomas gave a proof of this conjecture for planar digraphs. But for over a decade, this remained the most general case in which the conjecture was proved. Only very recently has this result been extended by Kawarabayashi and Kreutzer to all classes of digraphs excluding a fixed undirected graph as a minor. In this paper, nearly two decades after the conjecture was made, we are finally able to confirm the Reed, Johnson, Robertson, Seymour and Thomas conjecture in full generality. As a consequence of our results we are able to improve results by Reed (1996) on disjoint cycles of length at least l, and by Kawarabayashi, Kobayashi and Kreutzer on quarter-integral disjoint paths. We expect many more algorithmic results to follow from the grid theorem.
{"title":"The Directed Grid Theorem","authors":"K. Kawarabayashi, S. Kreutzer","doi":"10.1145/2746539.2746586","DOIUrl":"https://doi.org/10.1145/2746539.2746586","url":null,"abstract":"The grid theorem, originally proved in 1986 by Robertson and Seymour in Graph Minors V, is one of the most central results in the study of graph minors. It has found numerous applications in algorithmic graph structure theory, for instance in bidimensionality theory, and it is the basis for several other structure theorems developed in the graph minors project. In the mid-90s, Reed and Johnson, Robertson, Seymour and Thomas, independently, conjectured an analogous theorem for directed graphs, i.e. the existence of a function f : N-> N such that every digraph of directed tree width at least f(k) contains a directed grid of order k. In an unpublished manuscript from 2001, Johnson, Robertson, Seymour and Thomas give a proof of this conjecture for planar digraphs. But for over a decade, this was the most general case proved for the conjecture. Only very recently, this result has been extended by Kawarabayashi and Kreutzer to all classes of digraphs excluding a fixed undirected graph as a minor. In this paper, nearly two decades after the conjecture was made, we are finally able to confirm the Reed, Johnson, Robertson, Seymour and Thomas conjecture in full generality. As consequence of our results we are able to improve results by Reed 1996 on disjoint cycles of length at least l and by Kawarabayashi, Kobayashi, Kreutzer on quarter-integral disjoint paths. 
We expect many more algorithmic results to follow from the grid theorem.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"7 6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78504577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
Deterministic Global Minimum Cut of a Simple Graph in Near-Linear Time
Pub Date : 2014-11-18 DOI: 10.1145/2746539.2746588
K. Kawarabayashi, M. Thorup
We present a deterministic near-linear time algorithm that computes the edge-connectivity and finds a minimum cut for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. In near-linear time we can also construct the classic cactus representation of all minimum cuts. The previous fastest deterministic algorithm, by Gabow from STOC'91, took O(m + λ^2 n) time, where λ is the edge connectivity, but λ could be Ω(n). At STOC'96, Karger presented a randomized near-linear time Monte Carlo algorithm for the minimum cut problem. As he points out, there is no better way of certifying the minimality of the returned cut than to use Gabow's slower deterministic algorithm and compare sizes. Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree δ, producing a multigraph with Õ(m/δ) edges which preserves all minimum cuts of G having at least two vertices on each side. In our deterministic near-linear time algorithm, we decompose the problem via low-conductance cuts found using PageRank à la Brin and Page (1998), as analyzed by Andersen, Chung, and Lang at FOCS'06. Normally such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed.
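The randomized baseline the abstract contrasts against can be illustrated with a minimal sketch of Karger's classical contraction algorithm (not this paper's deterministic algorithm); the edge-list representation, trial count, and seed are illustrative assumptions:

```python
import random

def karger_min_cut(edges, n_vertices, trials=200, seed=0):
    """Karger's randomized contraction: repeatedly contract random edges
    (via union-find) until two super-vertices remain; the edges crossing
    between them form a cut. Return the smallest cut over many trials."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n_vertices))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v

        remaining = n_vertices
        while remaining > 2:
            u, v = edges[rng.randrange(len(edges))]
            ru, rv = find(u), find(v)
            if ru != rv:          # skip self-loops of contracted vertices
                parent[ru] = rv
                remaining -= 1
        best = min(best, sum(1 for u, v in edges if find(u) != find(v)))
    return best

# Two triangles joined by a single bridge edge: the minimum cut is 1.
two_triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(karger_min_cut(two_triangles, 6))
```

A single contraction run finds a minimum cut only with probability Ω(1/n^2), which is why the sketch repeats trials; certifying the answer deterministically is exactly the gap the paper closes.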
{"title":"Deterministic Global Minimum Cut of a Simple Graph in Near-Linear Time","authors":"K. Kawarabayashi, M. Thorup","doi":"10.1145/2746539.2746588","DOIUrl":"https://doi.org/10.1145/2746539.2746588","url":null,"abstract":"We present a deterministic near-linear time algorithm that computes the edge-connectivity and finds a minimum cut for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. In near-linear time we can also construct the classic cactus representation of all minimum cuts. The previous fastest deterministic algorithm by Gabow from STOC'91 took O(m+λ2 n), where λ is the edge connectivity, but λ could be Ω(n). At STOC'96 Karger presented a randomized near linear time Monte Carlo algorithm for the minimum cut problem. As he points out, there is no better way of certifying the minimality of the returned cut than to use Gabow's slower deterministic algorithm and compare sizes. Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree δ, producing a multigraph G with ~O(m/δ) edges which preserves all minimum cuts of G with at least two vertices on each side. In our deterministic near-linear time algorithm, we will decompose the problem via low-conductance cuts found using PageRank a la Brin and Page (1998), as analyzed by Andersson, Chung, and Lang at FOCS'06. Normally such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. 
However, in our case, we have so much structure that no guessing is needed.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74145285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 64
Preserving Statistical Validity in Adaptive Data Analysis
Pub Date : 2014-11-10 DOI: 10.1145/2746539.2746580
C. Dwork, V. Feldman, Moritz Hardt, T. Pitassi, Omer Reingold, Aaron Roth
A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of sophisticated validation techniques to deep statistical methods for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between these theoretical results and the practice of data analysis: the theory of statistical inference assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered. In practice, data are shared and reused, with hypotheses and new analyses generated on the basis of data exploration and the outcomes of previous analyses. In this work we initiate a principled study of how to guarantee the validity of statistical inference in adaptive data analysis. As an instance of this problem, we propose and investigate the question of estimating the expectations of m adaptively chosen functions on an unknown distribution, given n random samples. We show that, surprisingly, there is a way to accurately estimate exponentially many (in n) expectations even if the functions are chosen adaptively. This gives an exponential improvement over standard empirical estimators, which are limited to a linear number of estimates. Our result follows from a general technique that, counter-intuitively, involves actively perturbing and coordinating the estimates, using techniques developed for privacy preservation. We give additional applications of this technique to our question.
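The disconnect described above can be made concrete with a toy simulation under assumed parameters: m candidate functions, each with true mean 0, are evaluated on one sample, and adaptively keeping only the positive-looking ones manufactures a spurious effect. (The paper's remedy, perturbing the answers, is not implemented here.)

```python
import random
import statistics

def adaptive_bias_demo(n=100, m=2000, seed=1):
    """Each of m functions takes i.i.d. +/-1 values, so every true mean is 0.
    We compute empirical means on ONE sample of size n, adaptively select
    the functions whose empirical mean is positive, and report the average
    of the selected empirical means: a spurious positive 'effect' of about
    sqrt(2/(pi*n)), even though nothing in the data is real."""
    rng = random.Random(seed)
    emp = [sum(rng.choice((-1, 1)) for _ in range(n)) / n for _ in range(m)]
    selected = [e for e in emp if e > 0]   # adaptive selection step
    return statistics.mean(selected)       # reported "effect size"

print(adaptive_bias_demo())  # roughly 0.08 for n = 100, far from the true 0
```

On fresh data the selected functions would average out to roughly 0; it is reusing the same sample for both selection and estimation that creates the bias.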
{"title":"Preserving Statistical Validity in Adaptive Data Analysis","authors":"C. Dwork, V. Feldman, Moritz Hardt, T. Pitassi, Omer Reingold, Aaron Roth","doi":"10.1145/2746539.2746580","DOIUrl":"https://doi.org/10.1145/2746539.2746580","url":null,"abstract":"A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of sophisticated validation techniques, to deep statistical methods for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of data analysis: the theory of statistical inference assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered, whereas in practice data is shared and reused with hypotheses and new analyses being generated on the basis of data exploration and the outcomes of previous analyses. In this work we initiate a principled study of how to guarantee the validity of statistical inference in adaptive data analysis. As an instance of this problem, we propose and investigate the question of estimating the expectations of m adaptively chosen functions on an unknown distribution given n random samples. We show that, surprisingly, there is a way to estimate an exponential in n number of expectations accurately even if the functions are chosen adaptively. This gives an exponential improvement over standard empirical estimators that are limited to a linear number of estimates. Our result follows from a general technique that counter-intuitively involves actively perturbing and coordinating the estimates, using techniques developed for privacy preservation. 
We give additional applications of this technique to our question.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84028293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 347
Sketching and Embedding are Equivalent for Norms
Pub Date : 2014-11-10 DOI: 10.1145/2746539.2746552
Alexandr Andoni, Robert Krauthgamer, Ilya P. Razenshteyn
An outstanding open question (http://sublinear.info, Question #5) asks to characterize metric spaces in which distances can be estimated using efficient sketches. Specifically, we say that a sketching algorithm is efficient if it achieves constant approximation using constant sketch size. A well-known result of Indyk (J. ACM, 2006) implies that a metric that admits a constant-distortion embedding into l_p for p ∈ (0,2] also admits an efficient sketching scheme. But is the converse true, i.e., is embedding into l_p the only way to achieve efficient sketching? We address these questions for the important special case of normed spaces, by providing an almost complete characterization of sketching in terms of embeddings. In particular, we prove that a finite-dimensional normed space allows efficient sketches if and only if it embeds (linearly) into l_{1-ε} with constant distortion. We further prove that for norms that are closed under sum-product, efficient sketching is equivalent to embedding into l_1 with constant distortion. Examples of such norms include the Earth Mover's Distance (specifically its norm variant, called the Kantorovich-Rubinstein norm), and the trace norm (a.k.a. the Schatten 1-norm or the nuclear norm). Using known non-embeddability theorems for these norms by Naor and Schechtman (SICOMP, 2007) and by Pisier (Compositio Math., 1978), we then conclude that these spaces do not admit efficient sketches either, making progress towards answering another open question (http://sublinear.info, Question #7). Finally, we observe that resolving whether "sketching is equivalent to embedding into l_1 for general norms" (i.e., without the above restriction) is equivalent to resolving a well-known open problem in Functional Analysis posed by Kwapien in 1969.
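The l_p sketching result of Indyk cited above can be illustrated for p = 1 with a minimal sketch using Cauchy (1-stable) projections; the sketch size, seed, and function names are our own illustrative choices:

```python
import numpy as np

def l1_sketch(x, k, seed):
    """Linear sketch for l1: project onto k rows of i.i.d. standard Cauchy
    entries. Each coordinate of S(x - y) is then ||x - y||_1 times a
    standard Cauchy variable, by 1-stability."""
    rng = np.random.default_rng(seed)
    S = rng.standard_cauchy((k, len(x)))
    return S @ x

def estimate_l1(sx, sy):
    """Median estimator: the median of |Cauchy| is 1, so the median of
    |S(x) - S(y)| concentrates around ||x - y||_1."""
    return np.median(np.abs(sx - sy))

x = np.arange(10.0)
y = np.zeros(10)
sx = l1_sketch(x, 500, seed=7)
sy = l1_sketch(y, 500, seed=7)   # same seed: both sides must share S
print(estimate_l1(sx, sy), np.sum(np.abs(x - y)))  # estimate vs true 45.0
```

Note that both vectors must be sketched with the same matrix (here, the same seed) for the difference of sketches to estimate anything; this linearity is what makes such sketches composable in streams.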
{"title":"Sketching and Embedding are Equivalent for Norms","authors":"Alexandr Andoni, Robert Krauthgamer, Ilya P. Razenshteyn","doi":"10.1145/2746539.2746552","DOIUrl":"https://doi.org/10.1145/2746539.2746552","url":null,"abstract":"An outstanding open question (http://sublinear.info, Question #5) asks to characterize metric spaces in which distances can be estimated using efficient sketches. Specifically, we say that a sketching algorithm is efficient if it achieves constant approximation using constant sketch size. A well-known result of Indyk (J. ACM, 2006) implies that a metric that admits a constant-distortion embedding into lp for p∈(0,2] also admits an efficient sketching scheme. But is the converse true, i.e., is embedding into lp the only way to achieve efficient sketching? We address these questions for the important special case of normed spaces, by providing an almost complete characterization of sketching in terms of embeddings. In particular, we prove that a finite-dimensional normed space allows efficient sketches if and only if it embeds (linearly) into l1-ε with constant distortion. We further prove that for norms that are closed under sum-product, efficient sketching is equivalent to embedding into l1 with constant distortion. Examples of such norms include the Earth Mover's Distance (specifically its norm variant, called Kantorovich-Rubinstein norm), and the trace norm (a.k.a. Schatten 1-norm or the nuclear norm). Using known non-embeddability theorems for these norms by Naor and Schechtman (SICOMP, 2007) and by Pisier (Compositio. Math., 1978), we then conclude that these spaces do not admit efficient sketches either, making progress towards answering another open question (http://sublinear.info, Question #7). 
Finally, we observe that resolving whether \"sketching is equivalent to embedding into l1 for general norms\" (i.e., without the above restriction) is equivalent to resolving a well-known open problem in Functional Analysis posed by Kwapien in 1969.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"129 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76610687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
How Well Can Graphs Represent Wireless Interference?
Pub Date : 2014-11-05 DOI: 10.1145/2746539.2746585
M. Halldórsson, Tigran Tonoyan
Efficient use of a wireless network requires that transmissions be grouped into feasible sets, where feasibility means that each transmission can be successfully decoded in spite of the interference caused by simultaneous transmissions. Feasibility is most closely modeled by a signal-to-interference-plus-noise ratio (SINR) formula, which unfortunately is conceptually complicated, being an asymmetric, cumulative, many-to-one relationship. We re-examine how well graphs can capture wireless receptions as encoded in SINR relationships, placing them in a framework in order to understand the limits of such modelling. For each wireless instance we seek a pair of graphs that provide upper and lower bounds on the feasibility relation, while aiming to minimize the gap between the two graphs. The cost of a graph formulation is the worst gap over all instances, and the price of (graph) abstraction is the smallest cost of a graph formulation. We propose a family of conflict graphs that is parameterized by a non-decreasing sub-linear function, and show that with a judicious choice of functions, the graphs can capture feasibility with a cost of O(log* Δ), where Δ is the ratio between the longest and the shortest link length. This holds on the plane and more generally in doubling metrics. We use this to give a greatly improved O(log* Δ)-approximation for fundamental link scheduling problems with arbitrary power control. We also explore the limits of graph representations and find that our upper bound is tight: the price of graph abstraction is Ω(log* Δ). In addition, we give strong impossibility results for general metrics, and for approximations in terms of the number of links.
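The SINR feasibility test described above can be sketched directly; the geometric path-loss model with uniform power is standard, but the specific power, noise, path-loss exponent, and threshold values below are illustrative assumptions:

```python
import math

def sinr_feasible(links, power=1.0, noise=0.1, alpha=2.0, beta=2.0):
    """Check feasibility of a set of links in the physical (SINR) model:
    every receiver must hear its own sender at signal-to-interference-
    plus-noise ratio at least beta, where received power decays as
    distance^-alpha. links: list of ((sx, sy), (rx, ry)) pairs."""
    def gain(p, q):
        return power / math.dist(p, q) ** alpha

    for i, (si, ri) in enumerate(links):
        signal = gain(si, ri)
        interference = sum(gain(sj, ri)
                           for j, (sj, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

far_apart = [((0, 0), (0, 1)), ((100, 0), (100, 1))]
crossing = [((0, 0), (0, 1)), ((1, 1), (1, 0))]
print(sinr_feasible(far_apart), sinr_feasible(crossing))
```

The many-to-one, cumulative nature the abstract emphasizes is visible in the interference sum: no single pairwise "conflict edge" determines feasibility, which is exactly what makes graph abstractions lossy.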
{"title":"How Well Can Graphs Represent Wireless Interference?","authors":"M. Halldórsson, Tigran Tonoyan","doi":"10.1145/2746539.2746585","DOIUrl":"https://doi.org/10.1145/2746539.2746585","url":null,"abstract":"Efficient use of a wireless network requires that transmissions be grouped into feasible sets, where feasibility means that each transmission can be successfully decoded in spite of the interference caused by simultaneous transmissions. Feasibility is most closely modeled by a signal-to-interference-plus-noise (SINR) formula, which unfortunately is conceptually complicated, being an asymmetric, cumulative, many-to-one relationship. We re-examine how well graphs can capture wireless receptions as encoded in SINR relationships, placing them in a framework in order to understand the limits of such modelling. We seek for each wireless instance a pair of graphs that provide upper and lower bounds on the feasibility relation, while aiming to minimize the gap between the two graphs. The cost of a graph formulation is the worst gap over all instances, and the price of (graph) abstraction is the smallest cost of a graph formulation. We propose a family of conflict graphs that is parameterized by a non-decreasing sub-linear function, and show that with a judicious choice of functions, the graphs can capture feasibility with a cost of O(log* Δ), where Δ is the ratio between the longest and the shortest link length. This holds on the plane and more generally in doubling metrics. We use this to give greatly improved O(log* Δ)-approximation for fundamental link scheduling problems with arbitrary power control. We also explore the limits of graph representations and find that our upper bound is tight: the price of graph abstraction is Ω(log* Δ). 
In addition, we give strong impossibility results for general metrics, and for approximations in terms of the number of links.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76521603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 39
Proof of the Satisfiability Conjecture for Large k
Pub Date : 2014-11-03 DOI: 10.1145/2746539.2746619
Jian Ding, A. Sly, Nike Sun
We establish the satisfiability threshold for random k-SAT for all k ≥ k0. That is, there exists a limiting density αs(k) such that a random k-SAT formula of clause density α is with high probability satisfiable for α < αs, and unsatisfiable for α > αs. The satisfiability threshold αs is given explicitly by the one-step replica symmetry breaking (1RSB) prediction from statistical physics. We believe that our methods may apply to a range of random constraint satisfaction problems in the 1RSB class.
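The threshold phenomenon can be experimented with at toy scale: generate random k-SAT at a chosen clause density and check satisfiability by brute force. This is only an illustration (the theorem's k0 is large and the sketch below handles only tiny n):

```python
import itertools
import random

def random_ksat(n, m, k, seed):
    """m random k-clauses over n variables: each clause picks k distinct
    variables and random signs (True = positive literal)."""
    rng = random.Random(seed)
    return [tuple((v, rng.random() < 0.5) for v in rng.sample(range(n), k))
            for _ in range(m)]

def brute_force_sat(n, clauses):
    """Exhaustive search over all 2^n assignments; returns a satisfying
    assignment (tuple of bools) or None."""
    for bits in itertools.product((False, True), repeat=n):
        if all(any(bits[v] == sign for v, sign in clause)
               for clause in clauses):
            return bits
    return None

# Density m/n = 1 (well below the k=3 threshold of about 4.27):
# such instances are satisfiable with overwhelming probability.
formula = random_ksat(10, 10, 3, seed=0)
print(brute_force_sat(10, formula) is not None)
```

Sweeping the density m/n upward for fixed small n already shows the empirical probability of satisfiability dropping sharply near the conjectured threshold; the paper proves this sharp threshold exists and identifies its location for all large k.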
{"title":"Proof of the Satisfiability Conjecture for Large k","authors":"Jian Ding, A. Sly, Nike Sun","doi":"10.1145/2746539.2746619","DOIUrl":"https://doi.org/10.1145/2746539.2746619","url":null,"abstract":"We establish the satisfiability threshold for random k-SAT for all k ≥ k0. That is, there exists a limiting density αs(k) such that a random k-SAT formula of clause density α is with high probability satisfiable for α < αs, and unsatisfiable for α > αs. The satisfiability threshold αs is given explicitly by the one-step replica symmetry breaking (1SRB) prediction from statistical physics. We believe that our methods may apply to a range of random constraint satisfaction problems in the 1RSB class.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"73 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87250717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 176
Fast Matrix Multiplication: Limitations of the Coppersmith-Winograd Method
Pub Date : 2014-11-01 DOI: 10.1145/2746539.2746554
A. Ambainis, Yuval Filmus, F. Gall
Until a few years ago, the fastest known matrix multiplication algorithm, due to Coppersmith and Winograd (1990), ran in time O(n^{2.3755}). Recently, a surge of activity by Stothers, Vassilevska-Williams, and Le Gall has led to an improved algorithm running in time O(n^{2.3729}). These algorithms are obtained by analyzing higher and higher tensor powers of a certain identity of Coppersmith and Winograd. We show that this exact approach cannot result in an algorithm with running time O(n^{2.3725}), and identify a wide class of variants of this approach which cannot result in an algorithm with running time O(n^{2.3078}); in particular, this approach cannot prove the conjecture that for every ε > 0, two n x n matrices can be multiplied in time O(n^{2+ε}). We describe a new framework extending the original laser method, which is the method underlying the previously mentioned algorithms. Our framework accommodates the algorithms by Coppersmith and Winograd, Stothers, Vassilevska-Williams and Le Gall. We obtain our main result by analyzing this framework. The framework also explains why taking tensor powers of the Coppersmith-Winograd identity results in faster algorithms.
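For context on how sub-cubic exponents arise at all, here is a minimal sketch of Strassen's classical 7-multiplication recursion, the O(n^{2.81}) ancestor of the Coppersmith-Winograd line (not the Coppersmith-Winograd algorithm itself); it assumes square matrices whose side is a power of two:

```python
import numpy as np

def strassen(A, B):
    """Strassen's recursion: 7 half-size multiplications instead of 8,
    giving running time O(n^{log2 7}) ~ O(n^{2.81})."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])
```

The Coppersmith-Winograd method replaces this hand-crafted 2x2 identity with tensor powers of a more intricate identity; the paper shows intrinsic limits on how far that particular family of identities can push the exponent.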
Fast Matrix Multiplication: Limitations of the Coppersmith-Winograd Method
A. Ambainis, Yuval Filmus, F. Gall
Pub Date : 2014-11-01 DOI: 10.1145/2746539.2746554
Citations: 70
Inapproximability of Combinatorial Problems via Small LPs and SDPs
Pub Date : 2014-10-31 DOI: 10.1145/2746539.2746550
Gábor Braun, S. Pokutta, Daniel Zink
Motivated by [12], we provide a framework for studying the size of linear programming formulations, as well as semidefinite programming formulations, of combinatorial optimization problems without first encoding them as linear programs. This is done via a factorization theorem for the optimization problem itself (and not for a specific encoding of it). As a result, we define a consistent reduction mechanism that degrades approximation factors in a controlled fashion and which, at the same time, is compatible with approximate linear and semidefinite programming formulations. Moreover, our reduction mechanism is a minor restriction of the classical reductions establishing inapproximability in the context of PCP theorems. As a consequence, we establish strong linear programming inapproximability (for LPs with a polynomial number of constraints) for several problems that are not 0/1-CSPs: we obtain a (3/2 − ε) inapproximability for Vertex Cover (which is not of the CSP type), answering an open question in [12]; we answer a weak version of our sparse graph conjecture posed in [6], showing an inapproximability factor of 1/2 + ε for bounded-degree IndependentSet; and we establish inapproximability of MaxMULTICUT (a non-binary CSP). In the case of SDPs, we obtain relative inapproximability results for these problems.
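For intuition on the object being studied — LP formulations of Vertex Cover — here is a toy sketch (not the paper's framework) of the standard LP relaxation and its factor-2 integrality gap on the triangle. It exploits the known fact that this LP always has a half-integral optimal solution, so brute force over {0, 1/2, 1}^n is exact on small instances:

```python
from itertools import product

def vc_lp_opt(n, edges):
    """Optimum of the Vertex Cover LP relaxation
        min sum_v x_v  s.t.  x_u + x_v >= 1 for each edge, 0 <= x_v <= 1,
    found by brute force over half-integral points {0, 1/2, 1}^n.
    (The LP always admits a half-integral optimum, so this is exact.)"""
    best = None
    for x in product((0.0, 0.5, 1.0), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):
            val = sum(x)
            if best is None or val < best[0]:
                best = (val, x)
    return best

def min_integral_cover(n, edges):
    """Size of a smallest integral vertex cover, by brute force."""
    best = n
    for x in product((0, 1), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):
            best = min(best, sum(x))
    return best
```

On the triangle K3 the LP optimum is 3/2 (all x_v = 1/2) while any integral cover needs 2 vertices — the gap-2 phenomenon underlying the (3/2 − ε) LP inapproximability result above.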
Citations: 28
Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
Pub Date : 2014-10-24 DOI: 10.1145/2746539.2746569
Michael B. Cohen, Sam Elder, Cameron Musco, C. Musco, Madalina Persu
We show how to approximate a data matrix A with a much smaller sketch Ã that can be used to solve a general class of constrained k-rank approximation problems to within (1+ε) error. Importantly, this class includes k-means clustering and unconstrained low rank approximation (i.e. principal component analysis). By reducing data points to just O(k) dimensions, we generically accelerate any exact, approximate, or heuristic algorithm for these ubiquitous problems. For k-means dimensionality reduction, we provide (1+ε) relative error results for many common sketching techniques, including random row projection, column selection, and approximate SVD. For approximate principal component analysis, we give a simple alternative to known algorithms that has applications in the streaming setting. Additionally, we extend recent work on column-based matrix reconstruction, giving column subsets that not only 'cover' a good subspace for A, but can be used directly to compute this subspace. Finally, for k-means clustering, we show how to achieve a (9+ε) approximation by Johnson-Lindenstrauss projecting data to just O(log(k)/ε²) dimensions. This is the first result that leverages the specific structure of k-means to achieve dimension independent of input size and sublinear in k.
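A minimal sketch of the Johnson-Lindenstrauss projection step mentioned in the abstract — a random Gaussian map that approximately preserves pairwise distances — together with the k-means cost one would then evaluate. The target dimension `d` is left as a free parameter here; the abstract's result is that d = O(log(k)/ε²) suffices for a (9+ε)-approximate k-means cost:

```python
import numpy as np

def jl_project(X, d, seed=0):
    """Project n points in R^D down to d dimensions with a random
    Gaussian map, scaled so squared distances are preserved in
    expectation (Johnson-Lindenstrauss sketch)."""
    D = X.shape[1]
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((D, d)) / np.sqrt(d)
    return X @ G

def kmeans_cost(X, labels, k):
    """k-means objective: sum of squared distances of each point to
    its cluster centroid."""
    return sum(
        float(np.sum((X[labels == j] - X[labels == j].mean(axis=0)) ** 2))
        for j in range(k) if np.any(labels == j)
    )
```

In use, one would cluster the low-dimensional sketch `jl_project(X, d)` with any k-means heuristic, then evaluate the resulting labels on the original X via `kmeans_cost`.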
Citations: 330
Journal
Proceedings of the forty-seventh annual ACM symposium on Theory of Computing