Alon, Seymour, and Thomas generalized Lipton and Tarjan's planar separator theorem and showed that a $K_h$-minor-free graph with $n$ vertices has a separator of size at most $h^{3/2}\sqrt{n}$. They gave an algorithm that, given a graph $G$ with $m$ edges and $n$ vertices and given an integer $h \geq 1$, outputs in $O(\sqrt{hn}\,m)$ time such a separator or a $K_h$-minor of $G$. Plotkin, Rao, and Smith gave an $O(hm\sqrt{n\log n})$-time algorithm to find a separator of size $O(h\sqrt{n\log n})$. Kawarabayashi and Reed improved the bound on the size of the separator to $h\sqrt{n}$ and gave an algorithm that finds such a separator in $O(n^{1+\epsilon})$ time for any constant $\epsilon > 0$, assuming $h$ is constant. This algorithm has an extremely large dependency on $h$ in the running time (some power tower of $h$ whose height is itself a function of $h$), making it impractical even for small $h$. We are interested in a small polynomial time dependency on $h$, and we show how to find an $O(h\sqrt{n\log n})$-size separator or report that $G$ has a $K_h$-minor in $O(\mathrm{poly}(h)\,n^{5/4+\epsilon})$ time for any constant $\epsilon > 0$. We also present the first $O(\mathrm{poly}(h)\,n)$-time algorithm to find a separator of size $O(n^c)$ for a constant $c < 1$.
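As a rough numerical illustration (ours, not part of the paper), the three separator-size bounds quoted above can be compared directly; the function names are ours, and natural log is used for the $\log n$ factor:

```python
import math

def ast_bound(h, n):
    # Alon-Seymour-Thomas separator-size bound: h^(3/2) * sqrt(n)
    return h ** 1.5 * math.sqrt(n)

def prs_bound(h, n):
    # Plotkin-Rao-Smith separator-size bound: h * sqrt(n * log n)
    return h * math.sqrt(n * math.log(n))

def kr_bound(h, n):
    # Kawarabayashi-Reed separator-size bound: h * sqrt(n)
    return h * math.sqrt(n)

h, n = 5, 10**6
print(ast_bound(h, n), prs_bound(h, n), kr_bound(h, n))
```

For small $h$ (here $h < \log n$) the Alon-Seymour-Thomas bound actually beats the Plotkin-Rao-Smith one; the Kawarabayashi-Reed bound dominates both.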
{"title":"Separator Theorems for Minor-Free and Shallow Minor-Free Graphs with Applications","authors":"Christian Wulff-Nilsen","doi":"10.1109/FOCS.2011.15","journal":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","publicationDate":"2011-07-06"}
For Bayesian combinatorial auctions, we present a general framework for approximately reducing the mechanism design problem for multiple buyers to the mechanism design problem for each individual buyer. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) the buyers' types must be distributed independently (not necessarily identically); (ii) the objective function must be linearly separable over the set of buyers; (iii) the supply constraints must be the only constraints involving more than one buyer. Our framework is general in the sense that it makes no explicit assumption about any of the following: (i) the buyers' valuations (e.g., submodular, additive, etc.); (ii) the distribution of types for each buyer; (iii) the other constraints involving individual buyers (e.g., budget constraints). We present two generic $n$-buyer mechanisms that use $1$-buyer mechanisms as black boxes. Assuming that we have an $\alpha$-approximate $1$-buyer mechanism for each buyer (note that we can use different $1$-buyer mechanisms to accommodate different classes of buyers) and assuming that no buyer ever needs more than $\frac{1}{k}$ of all copies of each item for some integer $k \ge 1$, our generic $n$-buyer mechanisms are a $\gamma_k\cdot\alpha$-approximation of the optimal $n$-buyer mechanism, in which $\gamma_k$ is a constant which is at least $1-\frac{1}{\sqrt{k+3}}$. Observe that $\gamma_k$ is at least $\frac{1}{2}$ (for $k=1$) and approaches $1$ as $k$ increases. As a byproduct of our construction, we improve a generalization of prophet inequalities. Furthermore, as applications of our main theorem, we improve several results from the literature.
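The stated lower bound $\gamma_k \ge 1-\frac{1}{\sqrt{k+3}}$ is easy to tabulate; a minimal sketch (ours, names hypothetical):

```python
import math

def gamma_lower_bound(k):
    """Lower bound 1 - 1/sqrt(k+3) on the constant gamma_k from the abstract."""
    return 1.0 - 1.0 / math.sqrt(k + 3)

# k = 1 gives 1 - 1/sqrt(4) = 1/2; the bound increases toward 1 as k grows.
for k in (1, 10, 100, 1000):
    print(k, gamma_lower_bound(k))
```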
{"title":"Bayesian Combinatorial Auctions: Expanding Single Buyer Mechanisms to Many Buyers","authors":"S. Alaei","doi":"10.1137/120878422","journal":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","publicationDate":"2011-06-05"}
Decomposition theorems in classical Fourier analysis enable us to express a bounded function in terms of a few linear phases with large Fourier coefficients plus a part that is pseudorandom with respect to linear phases. The Goldreich-Levin algorithm can be viewed as an algorithmic analogue of such a decomposition, as it gives a way to efficiently find the linear phases associated with large Fourier coefficients. In the study of "quadratic Fourier analysis", higher-degree analogues of such decompositions have been developed in which the pseudorandomness property is stronger but the structured part is correspondingly weaker. For example, it has previously been shown that it is possible to express a bounded function as a sum of a few quadratic phases plus a part that is small in the $U^3$ norm, defined by Gowers for the purpose of counting arithmetic progressions of length 4. We give a polynomial time algorithm for computing such a decomposition. A key part of the algorithm is a local self-correction procedure for Reed-Muller codes of order 2 (over $F_2^n$) for a function at distance $1/2-\epsilon$ from a codeword. Given a function $f:F_2^n \to \{-1,1\}$ at fractional Hamming distance $1/2-\epsilon$ from a quadratic phase (which is a codeword of the Reed-Muller code of order 2), we give an algorithm that runs in time polynomial in $n$ and finds a codeword at distance at most $1/2-\eta$ for $\eta = \eta(\epsilon)$. This is an algorithmic analogue of Samorodnitsky's result, which gave a tester for the above problem. To our knowledge, it represents the first instance of a correction procedure for any class of codes beyond the list-decoding radius. In the process, we give algorithmic versions of results from additive combinatorics used in Samorodnitsky's proof and a refined version of the inverse theorem for the Gowers $U^3$ norm over $F_2^n$.
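To make the notion of "distance to a quadratic phase" concrete, here is a toy brute-force search (ours; exponential in $n$, unlike the paper's polynomial-time self-correction) over the 128 quadratic phases of the order-2 Reed-Muller code for $n=3$:

```python
from itertools import combinations, product

def quadratic_phases(n=3):
    """All quadratic phases (-1)^q(x), q ranging over F_2-polynomials of degree <= 2."""
    points = list(product([0, 1], repeat=n))
    # Monomials of degree <= 2: constant, linear, and quadratic terms.
    monomials = [()] + [(i,) for i in range(n)] + list(combinations(range(n), 2))
    phases = []
    for coeffs in product([0, 1], repeat=len(monomials)):
        bits = [sum(c * all(x[i] for i in m)
                    for c, m in zip(coeffs, monomials)) % 2 for x in points]
        phases.append(tuple((-1) ** b for b in bits))
    return phases

def nearest_quadratic_phase(f):
    """Brute force: closest quadratic phase and its fractional Hamming distance."""
    best = min(quadratic_phases(), key=lambda p: sum(a != b for a, b in zip(p, f)))
    return best, sum(a != b for a, b in zip(best, f)) / len(f)

# (-1)^(x1*x2*x3) has degree 3, so it is not itself a quadratic phase.
points = list(product([0, 1], repeat=3))
f = tuple((-1) ** (x[0] * x[1] * x[2]) for x in points)
codeword, dist = nearest_quadratic_phase(f)
print(dist)  # 1/8: the cubic phase differs from the all-ones phase in one point
```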
{"title":"Quadratic Goldreich-Levin Theorems","authors":"Madhur Tulsiani, J. Wolf","doi":"10.1137/12086827X","journal":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","publicationDate":"2011-05-22"}
Any physical channel of communication offers two potential reasons why its capacity (the number of bits it can transmit in a unit of time) might be unbounded: (1) (uncountably) infinitely many choices of signal strength at any given instant of time, and (2) (uncountably) infinitely many instances of time at which signals may be sent. However, channel noise cancels out the potential unboundedness of the first aspect, leaving typical channels with only a finite capacity per instant of time. The latter source of infinity seems less extensively studied. A potential source of unreliability that might restrict the capacity from the second aspect as well is ``delay'': signals transmitted by the sender at a given point of time may not be received with a predictable delay at the receiving end. In this work we examine this source of uncertainty by considering a simple discrete model of delay errors. In our model the communicating parties get to subdivide time as microscopically finely as they wish, but still have to cope with communication delays that are macroscopic and variable. The continuous process becomes the limit of our process as the time subdivision becomes infinitesimal. We taxonomize this class of communication channels based on whether the delays and noise are stochastic or adversarial, and based on how much information each aspect has about the other when introducing its errors. We analyze the limits of such channels and reach somewhat surprising conclusions: the capacity of a physical channel is finitely bounded only if at least one of the two sources of error (signal noise or delay noise) is adversarial. In particular, the capacity is finitely bounded only if the delay is adversarial, or the noise is adversarial and acts with knowledge of the stochastic delay. If both error sources are stochastic, or if the noise is adversarial and independent of the stochastic delay, then the capacity of the associated physical channel is infinite!
{"title":"Delays and the Capacity of Continuous-Time Channels","authors":"S. Khanna, M. Sudan","doi":"10.1109/FOCS.2011.60","journal":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","publicationDate":"2011-05-17"}
G. Borradaile, P. Klein, S. Mozes, Yahav Nussbaum, Christian Wulff-Nilsen
We give an O(n log^3 n) algorithm that, given an n-node directed planar graph with arc capacities, a set of source nodes, and a set of sink nodes, finds a maximum flow from the sources to the sinks. Previously, the fastest algorithms known for this problem were those for general graphs.
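For context (a sketch, ours, not the paper's near-linear algorithm): the textbook reduction attaches a super-source and super-sink to the given sources and sinks and then runs any single-source single-sink max-flow routine; in a planar graph this reduction generally destroys planarity, which is why general-graph algorithms were previously the fastest choice. A minimal Edmonds-Karp version:

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp max flow; cap is a dict {(u, v): capacity}, mutated in place."""
    flow = 0
    adj = {u: set() for u in range(n)}
    for (u, v) in list(cap):
        cap.setdefault((v, u), 0)   # residual (reverse) edge
        adj[u].add(v)
        adj[v].add(u)
    while True:
        parent = {s: None}          # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)   # bottleneck capacity
        for (u, v) in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug

# Tiny example: sources {0, 1}, sinks {4, 5}; attach super-source 6 and super-sink 7.
cap = {(0, 2): 3, (1, 2): 2, (2, 3): 4, (3, 4): 2, (3, 5): 3}
S, T = 6, 7
for s in (0, 1):
    cap[(S, s)] = float('inf')
for t in (4, 5):
    cap[(t, T)] = float('inf')
total = max_flow(8, cap, S, T)
print(total)  # 4: limited by the capacity-4 edge (2, 3)
```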
{"title":"Multiple-Source Multiple-Sink Maximum Flow in Directed Planar Graphs in Near-Linear Time","authors":"G. Borradaile, P. Klein, S. Mozes, Yahav Nussbaum, Christian Wulff-Nilsen","doi":"10.1137/15M1042929","journal":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","publicationDate":"2011-05-11"}
We study algorithms for the {\sc Submodular Multiway Partition} problem (SubMP). An instance of SubMP consists of a finite ground set $V$, a subset $S = \{s_1,s_2,\ldots,s_k\} \subseteq V$ of $k$ elements called terminals, and a non-negative submodular set function $f:2^V\rightarrow \mathbb{R}_+$ on $V$ provided as a value oracle. The goal is to partition $V$ into $k$ sets $A_1,\ldots,A_k$ with $s_i \in A_i$ for $1 \le i \le k$, so as to minimize $\sum_{i=1}^k f(A_i)$. SubMP generalizes some well-known problems such as the {\sc Multiway Cut} problem in graphs and hypergraphs, and the {\sc Node-weighted Multiway Cut} problem in graphs. SubMP for arbitrary submodular functions (instead of just symmetric functions) was considered by Zhao, Nagamochi and Ibaraki \cite{ZhaoNI05}. Previous algorithms were based on greedy splitting and divide-and-conquer strategies. In recent work \cite{ChekuriE11} we proposed a convex-programming relaxation for SubMP based on the Lov\'asz extension of a submodular function and showed its applicability for some special cases. In this paper we obtain the following results for arbitrary submodular functions via this relaxation. \begin{itemize} \item A $2$-approximation for SubMP. This improves the $(k-1)$-approximation from \cite{ZhaoNI05}. \item A $(1.5-\frac{1}{k})$-approximation for SubMP when $f$ is {\em symmetric}. This improves the $2(1-\frac{1}{k})$-approximation from \cite{Queyranne99,ZhaoNI05}. \end{itemize}
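As a sanity check on the problem definition (ours, exponential-time brute force, not the paper's relaxation-based algorithm), here is SubMP solved exhaustively on a tiny instance where $f$ is the cut function of a graph, so SubMP specializes to Multiway Cut; note that the objective $\sum_i f(A_i)$ then counts each cut edge once per side:

```python
from itertools import product

def submp_brute_force(V, terminals, f):
    """Exhaustively minimize sum_i f(A_i) over partitions with terminal s_i in A_i."""
    k = len(terminals)
    free = [v for v in V if v not in terminals]
    best, best_parts = float('inf'), None
    for labels in product(range(k), repeat=len(free)):
        parts = [{terminals[i]} for i in range(k)]
        for v, i in zip(free, labels):
            parts[i].add(v)
        cost = sum(f(frozenset(p)) for p in parts)
        if cost < best:
            best, best_parts = cost, parts
    return best, best_parts

# Cut function of a 4-cycle with chord (1, 3): symmetric and submodular.
edges = {(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)}
def cut(A):
    return sum(1 for (u, v) in edges if (u in A) != (v in A))

cost, parts = submp_brute_force(range(4), [0, 2], cut)
print(cost)  # 4: isolating either terminal cuts two edges, counted from both sides
```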
{"title":"Approximation Algorithms for Submodular Multiway Partition","authors":"C. Chekuri, Alina Ene","doi":"10.1109/FOCS.2011.34","journal":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","publicationDate":"2011-05-10"}
We show a new way to round vector solutions of semidefinite programming (SDP) hierarchies into integral solutions, based on a connection between these hierarchies and the spectrum of the input graph. We demonstrate the utility of our method by providing a new SDP-hierarchy-based algorithm for constraint satisfaction problems with 2-variable constraints (2-CSPs). More concretely, we show for every $2$-CSP instance $Ins$ a rounding algorithm for $r$ rounds of the Lasserre SDP hierarchy for $Ins$ that obtains an integral solution which is at most $\epsilon$ worse than the relaxation's value (normalized to lie in $[0,1]$), as long as \[ r > k\cdot\mathrm{rank}_{\geq \theta}(Ins)/\mathrm{poly}(\epsilon), \] where $k$ is the alphabet size of $Ins$, $\theta=\mathrm{poly}(\epsilon/k)$, and $\mathrm{rank}_{\geq \theta}(Ins)$ denotes the number of eigenvalues larger than $\theta$ in the normalized adjacency matrix of the constraint graph of $Ins$. In the case that $Ins$ is a unique games instance, the threshold $\theta$ is only a polynomial in $\epsilon$, and is independent of the alphabet size. Also in this case, we can give a non-trivial bound on the number of rounds for \emph{every} instance. In particular, our result yields an SDP-hierarchy-based algorithm that matches the performance of the recent subexponential algorithm of Arora, Barak and Steurer (FOCS 2010) in the worst case, but runs faster on a natural family of instances, thus further restricting the set of possible hard instances for Khot's Unique Games Conjecture. Our algorithm actually requires less than the $n^{O(r)}$ constraints specified by the $r$-th level of the Lasserre hierarchy, and in some cases $r$ rounds of our program can be evaluated in time $2^{O(r)}\mathrm{poly}(n)$.
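The threshold rank $\mathrm{rank}_{\geq\theta}$ is easy to compute when the spectrum is known in closed form. For example, the normalized adjacency matrix of the $n$-cycle has eigenvalues $\cos(2\pi j/n)$, $j = 0,\ldots,n-1$, so a sketch (ours) is:

```python
import math

def cycle_threshold_rank(n, theta):
    """Number of eigenvalues >= theta of the normalized adjacency matrix of the
    n-cycle, whose eigenvalues are cos(2*pi*j/n) for j = 0..n-1."""
    eigs = [math.cos(2 * math.pi * j / n) for j in range(n)]
    return sum(1 for lam in eigs if lam >= theta)

# Cycles have many near-top eigenvalues, so their threshold rank grows with n;
# a good expander would have threshold rank 1 for the same theta.
print(cycle_threshold_rank(100, 0.9))  # 15
```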
{"title":"Rounding Semidefinite Programming Hierarchies via Global Correlation","authors":"B. Barak, P. Raghavendra, David Steurer","doi":"10.1109/FOCS.2011.95","journal":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","publicationDate":"2011-04-24"}
In this paper, we study the average case complexity of the Unique Games problem. We propose a semi-random model, in which a unique game instance is generated in several steps. First an adversary selects a completely satisfiable instance of Unique Games, then she chooses an epsilon-fraction of all edges, and finally replaces ("corrupts") the constraints corresponding to these edges with new constraints. If all steps are adversarial, the adversary can obtain any (1-epsilon)-satisfiable instance, so then the problem is as hard as in the worst case. We show, however, that we can find a solution satisfying a (1-delta) fraction of all constraints in polynomial time if at least one step is random (we require that the average degree of the graph is Omega(log k)). Our result holds only for epsilon less than some absolute constant. We prove that if epsilon >= 1/2, then the problem is hard in one of the models; that is, no polynomial-time algorithm can distinguish between the following two cases: (i) the instance is a (1-epsilon)-satisfiable semi-random instance and (ii) the instance is at most delta-satisfiable (for every delta > 0); the result assumes the 2-to-2 conjecture. Finally, we study semi-random instances of Unique Games that are at most (1-epsilon)-satisfiable. We present an algorithm that distinguishes between the case when the instance is a semi-random instance and the case when the instance is an (arbitrary) (1-delta)-satisfiable instance if epsilon > c*delta (for some absolute constant c).
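A minimal simulation of the generation process in this semi-random model (our sketch; function names and the planted-assignment construction are ours): start from an instance that a planted labeling fully satisfies, then corrupt the constraints on an epsilon-fraction of edges.

```python
import random

random.seed(0)

def planted_instance(edges, k, assignment):
    """Unique game fully satisfied by `assignment`: the permutation pi on edge
    (u, v) is arranged so that pi[assignment[u]] == assignment[v]."""
    constraints = {}
    for (u, v) in edges:
        pi = list(range(k))
        random.shuffle(pi)
        j = pi.index(assignment[v])
        # swap entries so the planted labels satisfy this constraint
        pi[j], pi[assignment[u]] = pi[assignment[u]], assignment[v]
        constraints[(u, v)] = pi
    return constraints

def corrupt(constraints, eps, k):
    """Final adversary-or-random step: replace an eps-fraction of constraints."""
    corrupted = dict(constraints)
    for e in random.sample(list(constraints), int(eps * len(constraints))):
        pi = list(range(k))
        random.shuffle(pi)
        corrupted[e] = pi
    return corrupted

def satisfied_fraction(constraints, assignment):
    good = sum(pi[assignment[u]] == assignment[v]
               for (u, v), pi in constraints.items())
    return good / len(constraints)

k = 5
vertices = range(10)
edges = [(u, v) for u in vertices for v in vertices if u < v]  # 45 edges
assignment = [random.randrange(k) for _ in vertices]
game = planted_instance(edges, k, assignment)
bad = corrupt(game, 0.2, k)
print(satisfied_fraction(game, assignment))  # 1.0 by construction
print(satisfied_fraction(bad, assignment))   # at least 0.8: only 9 of 45 edges touched
```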
{"title":"How to Play Unique Games Against a Semi-random Adversary: Study of Semi-random Models of Unique Games","authors":"A. Kolla, K. Makarychev, Yury Makarychev","doi":"10.1109/FOCS.2011.78","DOIUrl":"https://doi.org/10.1109/FOCS.2011.78","url":null,"abstract":"In this paper, we study the average case complexity of the Unique Games problem. We propose a semi-random model, in which a unique game instance is generated in several steps. First an adversary selects a completely satisfiable instance of Unique Games, then she chooses an epsilon-fraction of all edges, and finally replaces (& quot; corrupts'') the constraints corresponding to these edges with new constraints. If all steps are adversarial, the adversary can obtain any (1-epsilon)-satisfiable instance, so then the problem is as hard as in the worst case. We show however that we can find a solution satisfying a (1-delta) fraction of all constraints in polynomial-time if at least one step is random (we require that the average degree of the graph is Omeg(log k)). Our result holds only for epsilon less than some absolute constant. We prove that if epsilon >= 1/2, then the problem is hard in one of the models, that is, no polynomial-time algorithm can distinguish between the following two cases: (i) the instance is a (1-epsilon)-satisfiable semi-random instance and (ii) the instance is at most delta-satisfiable (for every delta >, 0); the result assumes the 2-to-2 conjecture. Finally, we study semi-random instances of Unique Games that are at most (1-epsilon)-satisfiable. 
We present an algorithm that distinguishes between the case when the instance is a semi-random instance and the case when the instance is an (arbitrary) (1-delta)-satisfiable instance if epsilon > c delta (for some absolute constant c).","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123802508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
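The generation process described in this abstract (plant a completely satisfiable instance, then corrupt an epsilon-fraction of the edge constraints) is easy to simulate. The sketch below is an illustrative toy generator, not the authors' construction: all names are ours, and the first step plants a random satisfying assignment rather than an adversarial instance.

```python
import random

def make_semirandom_ug(n, k, p_edge, eps, seed=0):
    """Toy semi-random Unique Games generator: plant an assignment,
    choose permutation constraints consistent with it, then corrupt
    an eps-fraction of the edges with fresh random permutations."""
    rng = random.Random(seed)
    # Planted assignment: one label in {0,...,k-1} per vertex.
    x = [rng.randrange(k) for _ in range(n)]
    edges, perms = [], []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p_edge:
                # Random permutation pi, then force pi[x[u]] == x[v]
                # so the planted assignment satisfies this edge.
                pi = list(range(k))
                rng.shuffle(pi)
                j = pi.index(x[v])
                pi[j], pi[x[u]] = pi[x[u]], pi[j]
                edges.append((u, v))
                perms.append(pi)
    # Corruption step: replace the constraints on a uniformly random
    # eps-fraction of edges with arbitrary new permutations.
    m = len(edges)
    for i in rng.sample(range(m), int(eps * m)):
        pi = list(range(k))
        rng.shuffle(pi)
        perms[i] = pi
    return edges, perms, x

def satisfied_fraction(edges, perms, x):
    """Fraction of constraints pi_uv(x_u) = x_v satisfied by x."""
    ok = sum(1 for (u, v), pi in zip(edges, perms) if pi[x[u]] == x[v])
    return ok / len(edges)
```

By construction the planted assignment satisfies at least a (1 - eps) fraction of the constraints, which is exactly the regime the abstract's algorithmic result addresses.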
We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges, leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree-three-bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.
{"title":"Approximating Graphic TSP by Matchings","authors":"Tobias Mömke, O. Svensson","doi":"10.1109/FOCS.2011.56","DOIUrl":"https://doi.org/10.1109/FOCS.2011.56","url":null,"abstract":"We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130394080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected n-node graph with edge weights in {1,...,M} or in a directed n-node graph with edge weights in {-M,...,M} and no negative cycles can be efficiently reduced to finding a minimum weight _triangle_ in a Theta(n)-node _undirected_ graph with weights in {1,...,O(M)}. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be ``encoded'' using only three edges within roughly the same weight interval! This resolves a longstanding open problem posed in a seminal work by Itai and Rodeh [SIAM J. Computing 1978] on minimum cycle in unweighted graphs. A direct consequence of our efficient reductions are tilde{O}(Mn^{omega})-time algorithms for minimum weight cycle; moreover, our reductions show that an O(n^{3-delta})-time algorithm (delta > 0) for minimum weight cycle immediately implies an O(n^{3-delta})-time algorithm for APSP.
{"title":"Minimum Weight Cycles and Triangles: Equivalences and Algorithms","authors":"L. Roditty, V. V. Williams","doi":"10.1109/FOCS.2011.27","DOIUrl":"https://doi.org/10.1109/FOCS.2011.27","url":null,"abstract":"We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected n-node graph with edge weights in {1,...,M} or in a directed n-node graph with edge weights in {-M,...,M} and no negative cycles can be efficiently reduced to finding a minimum weight _triangle_ in a Theta(n)-node _undirected_ graph with weights in {1,...,O(M)}. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be ``encoded'' using only three edges within roughly the same weight interval! This resolves a longstanding open problem posed in a seminal work by Itai and Rodeh [SIAM J. Computing 1978] on minimum cycle in unweighted graphs. A direct consequence of our efficient reductions are tilde{O}(Mn^{omega})-time algorithms for minimum weight cycle; moreover, our reductions show that an O(n^{3-delta})-time algorithm (delta > 0) for minimum weight cycle immediately implies an O(n^{3-delta})-time algorithm for APSP.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133320696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
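To make the two quantities related by the reduction concrete, here is a toy exact computation of both on a small undirected instance: the classical per-edge-Dijkstra baseline for minimum weight cycle, and a brute-force minimum weight triangle. This is illustrative only and does not implement the paper's reductions; all names are ours.

```python
import heapq
from itertools import combinations

def dijkstra(n, adj, s):
    """Single-source shortest paths with nonnegative edge weights."""
    dist = [float('inf')] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def min_weight_cycle(n, edges):
    """Classical exact baseline: for every edge (u, v, w), the cheapest
    cycle through it costs w plus the shortest u-v path avoiding that
    edge; take the minimum over all edges."""
    best = float('inf')
    for i, (u, v, w) in enumerate(edges):
        adj = [[] for _ in range(n)]
        for j, (a, b, c) in enumerate(edges):
            if j != i:
                adj[a].append((b, c))
                adj[b].append((a, c))
        best = min(best, w + dijkstra(n, adj, u)[v])
    return best

def min_weight_triangle(n, edges):
    """Brute-force minimum weight triangle in O(n^3)."""
    w = {}
    for u, v, c in edges:
        w[(u, v)] = w[(v, u)] = c
    best = float('inf')
    for a, b, c in combinations(range(n), 3):
        if (a, b) in w and (b, c) in w and (a, c) in w:
            best = min(best, w[(a, b)] + w[(b, c)] + w[(a, c)])
    return best
```

Note the asymmetry the paper eliminates: the triangle search is trivially cubic, while the naive cycle search runs a shortest-path computation per edge; the reduction shows the general cycle problem is no harder than the triangle problem at roughly the same weight scale.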