The Monotone Complexity of k-clique on Random Graphs
Benjamin Rossman. DOI: 10.1137/110839059

It is widely suspected that Erdős-Rényi random graphs are a source of hard instances for clique problems. Giving further evidence for this belief, we prove the first average-case hardness result for the $k$-clique problem on monotone circuits. Specifically, we show that no monotone circuit of size $O(n^{k/4})$ solves the $k$-clique problem with high probability on $ER(n,p)$ for two sufficiently far-apart threshold functions $p(n)$ (for instance $n^{-2/(k-1)}$ and $2n^{-2/(k-1)}$). Moreover, the exponent $k/4$ in this result is tight up to an additive constant. One technical contribution of this paper is the introduction of quasi-sunflowers, a new relaxation of sunflowers in which petals may overlap slightly on average. A "quasi-sunflower lemma" (à la the Erdős-Rado sunflower lemma) leads to our novel lower bounds within Razborov's method of approximations.
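
As a sanity check on the distributional setting (a simulation sketch of ours, not from the paper), brute-force search on small graphs already shows the jump between the two thresholds: at $p = n^{-2/(k-1)}$ the expected number of $k$-cliques is a small constant, while doubling $p$ multiplies it by $2^{\binom{k}{2}}$.

```python
# Simulation sketch (ours, not from the paper): brute-force k-clique checks
# on ER(n, p) at the two thresholds.  At c = 1 k-cliques are rare; at c = 2
# they appear in most samples.
import itertools, random

def has_k_clique(n, p, k, rng):
    adj = [[False] * n for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u][v] = adj[v][u] = True
    return any(all(adj[u][v] for u, v in itertools.combinations(c, 2))
               for c in itertools.combinations(range(n), k))

rng, n, k, trials = random.Random(0), 40, 4, 10
for c in (1.0, 2.0):
    p = c * n ** (-2 / (k - 1))
    hits = sum(has_k_clique(n, p, k, rng) for _ in range(trials))
    print(f"c = {c}: p = {p:.3f}, k-clique found in {hits}/{trials} samples")
```
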
{"title":"The Monotone Complexity of k-clique on Random Graphs","authors":"Benjamin Rossman","doi":"10.1137/110839059","DOIUrl":"https://doi.org/10.1137/110839059","url":null,"abstract":"It is widely suspected that ErdH{o}s-R'enyi random graphs are a source of hard instances for clique problems. Giving further evidence for this belief, we prove the first average-case hardness result for the $k$-clique problem on monotone circuits. Specifically, we show that no monotone circuit of size $O(n^{k/4})$ solves the $k$-clique problem with high probability on $ER(n,p)$ for two sufficiently far-apart threshold functions $p(n)$ (for instance $n^{-2/(k-1)}$ and $2n^{-2/(k-1)}$). Moreover, the exponent $k/4$ in this result is tight up to an additive constant. One technical contribution of this paper is the introduction of {em quasi-sunflowers}, a new relaxation of sunflowers in which petals may overlap slightly on average. A ``quasi-sunflower lemma'' (`a la the ErdH{o}s-Rado sunflower lemma) leads to our novel lower bounds within Razborov's method of approximations.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114252174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local List Decoding with a Constant Number of Queries
Avraham Ben-Aroya, K. Efremenko, A. Ta-Shma. DOI: 10.1109/FOCS.2010.88

Recently, Efremenko constructed locally decodable codes of sub-exponential length. That result showed that these codes can handle up to a $\frac{1}{3}$ fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate $\frac{1}{2}-\alpha$ for any $\alpha>0$ and locally list-decoded from error rate $1-\alpha$ for any $\alpha>0$, with only a constant number of queries and a constant alphabet size. This gives the first sub-exponential codes that can be locally list-decoded with a constant number of queries.
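
To make "local decoding with a constant number of queries" concrete, here is the textbook two-query decoder for the Hadamard code; this is our illustration of the paradigm only, not Efremenko's construction, whose codes and decoders are far more involved.

```python
# Two-query local decoding of the Hadamard code (illustrative only).  Each
# message bit x_i is recovered as word[r] XOR word[r XOR e_i] for random r,
# with a majority vote over independent trials.
import random

def hadamard_encode(x, m):
    # Entry at position a (an m-bit mask) is the inner product <x, a> mod 2.
    return [bin(x & a).count("1") % 2 for a in range(1 << m)]

def local_decode_bit(word, i, m, trials=101, rng=random):
    votes = 0
    for _ in range(trials):
        r = rng.randrange(1 << m)              # first query position
        votes += word[r] ^ word[r ^ (1 << i)]  # two queries decode one bit
    return int(votes > trials // 2)

m, x = 10, 0b1011001110
rng = random.Random(1)
word = hadamard_encode(x, m)
for pos in rng.sample(range(1 << m), (1 << m) // 10):  # corrupt 10% of entries
    word[pos] ^= 1
decoded = sum(local_decode_bit(word, i, m, rng=rng) << i for i in range(m))
print(decoded == x)  # True w.h.p.: error rate 0.1 is below the 1/4 radius
```
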
{"title":"Local List Decoding with a Constant Number of Queries","authors":"Avraham Ben-Aroya, K. Efremenko, A. Ta-Shma","doi":"10.1109/FOCS.2010.88","DOIUrl":"https://doi.org/10.1109/FOCS.2010.88","url":null,"abstract":"Recently Efremenko showed locally-decodable codes of sub-exponential length. That result showed that these codes can handle up to $frac{1}{3} $ fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate $half-alpha$ for any $alpha>0$ and locally list-decoded from error rate $1-alpha$ for any $alpha>0$, with only a constant number of queries and a constant alphabet size. This gives the first sub-exponential codes that can be locally list-decoded with a constant number of queries.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114507447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Sub-exponential Upper Bound for On-Line Chain Partitioning
B. Bosek, Tomasz Krawczyk. DOI: 10.1109/FOCS.2010.40

The main question in the on-line chain partitioning problem is to determine whether there exists an algorithm that partitions on-line posets of width at most $w$ into a polynomial number of chains – see Trotter's chapter "Partially ordered sets" in the Handbook of Combinatorics. So far the best known on-line algorithm, due to Kierstead, uses at most $(5^w-1)/4$ chains; on the other hand, Szemerédi proved that any on-line algorithm requires at least $\binom{w+1}{2}$ chains. These results were obtained in the early eighties, and since then no progress has been made in the general case. We provide an on-line algorithm that partitions orders of width $w$ into at most $w^{16\log w}$ chains. This yields the first sub-exponential upper bound for the on-line chain partitioning problem.
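
For contrast with the paper's sub-exponential bound, here is a minimal sketch of the naive first-fit on-line strategy (a baseline of ours, not the paper's algorithm), shown on points of the plane under the coordinatewise order.

```python
# First-fit on-line chain partitioning (baseline sketch): each arriving
# element joins the first existing chain with which it is comparable, or
# opens a new chain.  In general this uses far more chains than optimal.
def comparable(p, q):
    return (p[0] <= q[0] and p[1] <= q[1]) or (q[0] <= p[0] and q[1] <= p[1])

def first_fit(points):
    chains = []
    for p in points:
        for chain in chains:
            if all(comparable(p, q) for q in chain):
                chain.append(p)
                break
        else:
            chains.append([p])  # no comparable chain: open a new one
    return chains

import random
rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(200)]
print(f"first-fit used {len(first_fit(pts))} chains on 200 points")
```
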
{"title":"The Sub-exponential Upper Bound for On-Line Chain Partitioning","authors":"B. Bosek, Tomasz Krawczyk","doi":"10.1109/FOCS.2010.40","DOIUrl":"https://doi.org/10.1109/FOCS.2010.40","url":null,"abstract":"The main question in the on-line chain partitioning problem is to determine whether there exists an algorithm that partitions on-line posets of width at most $w$ into polynomial number of chains – see Trotter's chapter Partially ordered sets in the Handbook of Combinatorics. So far the best known on-line algorithm of Kier stead used at most $(5^w-1)/4$ chains, on the other hand Szemer'{e}di proved that any on-line algorithm requires at least $binom{w+1}{2}$ chains. These results were obtained in the early eighties and since then no progress in the general case has been done. We provide an on-line algorithm that partitions orders of width $w$ into at most $w^{16log{w}}$ chains. This yields the first sub-exponential upper bound for on-line chain partitioning problem.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128385628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Convex Concepts from Gaussian Distributions with PCA
S. Vempala. DOI: 10.1109/FOCS.2010.19

We present a new algorithm for learning a convex set in $n$-dimensional space given labeled examples drawn from any Gaussian distribution. The complexity of the algorithm is bounded by a fixed polynomial in $n$ times a function of $k$ and $\epsilon$, where $k$ is the dimension of the normal subspace (the span of normal vectors to supporting hyperplanes of the convex set) and the output is a hypothesis that correctly classifies at least a $1-\epsilon$ fraction of the unknown Gaussian distribution. For the important case when the convex set is the intersection of $k$ halfspaces, the complexity is $\mathrm{poly}(n,k,1/\epsilon) + n \cdot \min\{k^{O(\log k/\epsilon^4)}, (k/\epsilon)^{O(k)}\}$, improving substantially on the state of the art [Vem04, KOS08] for Gaussian distributions. The key step of the algorithm is a singular value decomposition after applying a normalization. The proof is based on a monotonicity property of Gaussian space under convex restrictions.
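
A rough numerical sketch of the flavor of the key step, under our own simplified setup rather than the paper's exact normalization: conditioned on lying inside an intersection of halfspaces, the Gaussian loses variance along the normal directions, so the bottom principal components of the positive examples approximate the normal subspace.

```python
# Sketch (assumptions ours): estimate the normal subspace of an intersection
# of k halfspaces via PCA of the positive examples.  Directions orthogonal to
# all normals keep variance 1; directions in the normal span shrink.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 10, 2, 200_000
W = rng.standard_normal((k, n))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # k unit halfspace normals

X = rng.standard_normal((m, n))
pos = X[(X @ W.T <= 0.5).all(axis=1)]           # inside all k halfspaces

eigvals, eigvecs = np.linalg.eigh(np.cov(pos, rowvar=False))  # ascending
normal_est = eigvecs[:, :k]                     # bottom-k principal directions

# Overlap between the true normal span and the estimate (all near 1 = good).
print(np.linalg.svd(normal_est.T @ W.T, compute_uv=False))
```
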
{"title":"Learning Convex Concepts from Gaussian Distributions with PCA","authors":"S. Vempala","doi":"10.1109/FOCS.2010.19","DOIUrl":"https://doi.org/10.1109/FOCS.2010.19","url":null,"abstract":"We present a new algorithm for learning a convex set in $n$-dimensional space given labeled examples drawn from any Gaussian distribution. The complexity of the algorithm is bounded by a fixed polynomial in $n$ times a function of $k$ and $eps$ where $k$ is the dimension of the {em normal subspace} (the span of normal vectors to supporting hyper planes of the convex set) and the output is a hypothesis that correctly classifies at least $1-eps$ of the unknown Gaussian distribution. For the important case when the convex set is the intersection of $k$ half spaces, the complexity is [ poly(n,k,1/eps) + n cdot min , k^{O(log k/eps^4)}, (k/eps)^{O(k)}, ] improving substantially on the state of the art cite{Vem04,KOS08} for Gaussian distributions. The key step of the algorithm is a Singular Value Decomposition after applying a normalization. The proof is based on a monotonicity property of Gaussian space under convex restrictions.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126253615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Replacement Paths via Fast Matrix Multiplication
O. Weimann, R. Yuster. DOI: 10.1109/FOCS.2010.68

Let G be a directed edge-weighted graph and let P be a shortest path from s to t in G. The replacement paths problem asks to compute, for every edge e on P, the shortest s-to-t path that avoids e. Apart from approximation algorithms and algorithms for special graph classes, the naive solution to this problem – removing each edge e on P one at a time and computing the shortest s-to-t path each time – is surprisingly the only known solution for directed weighted graphs, even when the weights are integers. In particular, although the related shortest paths problem has benefited from fast matrix multiplication, the replacement paths problem has not, and still requires cubic time. For an n-vertex graph with integral edge lengths between -M and M, we give a randomized algorithm that uses fast matrix multiplication and is sub-cubic for appropriate values of M. We also show how to construct a distance sensitivity oracle in the same time bounds. A query (u,v,e) to this oracle requires sub-quadratic time and returns the length of the shortest u-to-v path that avoids the edge e. In fact, for any constant number of edge failures, we construct a data structure in sub-cubic time that answers queries in sub-quadratic time. Our results also apply to avoiding vertices rather than edges.
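
The naive solution described above is easy to state in code; here is a sketch using networkx (our illustration, restricted to non-negative weights so Dijkstra applies; the paper handles negative integer weights and goes sub-cubic via fast matrix multiplication).

```python
# Naive replacement paths: delete each edge of the shortest s-t path in turn
# and recompute the shortest s-t distance without it.
import networkx as nx

def replacement_paths(G, s, t):
    path = nx.shortest_path(G, s, t, weight="weight")
    out = {}
    for u, v in zip(path, path[1:]):
        w = G[u][v]["weight"]
        G.remove_edge(u, v)
        try:
            out[(u, v)] = nx.shortest_path_length(G, s, t, weight="weight")
        except nx.NetworkXNoPath:
            out[(u, v)] = float("inf")
        G.add_edge(u, v, weight=w)  # restore the edge
    return out

G = nx.DiGraph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1),
                           ("s", "b", 2), ("b", "t", 2)])
print(replacement_paths(G, "s", "t"))  # avoiding (s,a) or (a,t) costs 4
```
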
{"title":"Replacement Paths via Fast Matrix Multiplication","authors":"O. Weimann, R. Yuster","doi":"10.1109/FOCS.2010.68","DOIUrl":"https://doi.org/10.1109/FOCS.2010.68","url":null,"abstract":"Let G be a directed edge-weighted graph and let P be a shortest path from s to t in G. The replacement paths problem asks to compute, for every edge e on P, the shortest s-to-t path that avoids e. Apart from approximation algorithms and algorithms for special graph classes, the naive solution to this problem – removing each edge e on P one at a time and computing the shortest s-to-t path each time – is surprisingly the only known solution for directed weighted graphs, even when the weights are integrals. In particular, although the related shortest paths problem has benefited from fast matrix multiplication, the replacement paths problem has not, and still required cubic time. For an n-vertex graph with integral edge-lengths between -M and M, we give a randomized algorithm that uses fast matrix multiplication and is sub-cubic for appropriate values of M. We also show how to construct a distance sensitivity oracle in the same time bounds. A query (u,v,e) to this oracle requires sub-quadratic time and returns the length of the shortest u-to-v path that avoids the edge e. In fact, for any constant number of edge failures, we construct a data structure in sub-cubic time, that answer queries in sub-quadratic time. Our results also apply for avoiding vertices rather than edges.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131091573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating the Longest Increasing Sequence in Polylogarithmic Time
M. Saks, C. Seshadhri. DOI: 10.1137/130942152

Finding the length of the longest increasing subsequence (LIS) is a classic algorithmic problem. Let $n$ denote the size of the array. Simple $O(n \log n)$ time algorithms are known that determine the LIS exactly. In this paper, we develop a randomized approximation algorithm that, for any constant $\delta > 0$, runs in time polylogarithmic in $n$ and estimates the length of the LIS of an array up to an additive error of $\delta n$. The algorithm presented in this extended abstract runs in time $(\log n)^{O(1/\delta)}$. In the full paper, we will give an improved version of the algorithm with running time $(\log n)^c (1/\delta)^{O(1/\delta)}$, where the exponent $c$ is independent of $\delta$. Previously, the best known polylogarithmic time algorithms could only achieve an additive $n/2$-approximation. Our techniques also yield a fast algorithm for estimating the distance to monotonicity to within a small multiplicative factor. The distance of $f$ to monotonicity, $\epsilon_f$, is equal to $1 - |LIS|/n$ (the fractional length of the complement of the LIS). For any $\delta > 0$, we give an algorithm with running time $O((\epsilon_f^{-1} \log n)^{O(1/\delta)})$ that outputs a $(1+\delta)$-multiplicative approximation to $\epsilon_f$. This can be improved so that the exponent is a fixed constant. The previously known polylogarithmic algorithms gave only a 2-approximation.
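
The exact $O(n \log n)$ algorithm referenced above is patience sorting with binary search; a minimal sketch for reference (the paper's point is to approximate $|LIS|$ while reading only polylogarithmically many entries).

```python
# Exact LIS in O(n log n) via patience sorting, plus the distance to
# monotonicity eps_f = 1 - |LIS|/n used in the abstract.
import bisect

def lis_length(a):
    tails = []  # tails[i] = smallest tail of an increasing subsequence of length i+1
    for x in a:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
print(lis_length(arr))                        # 4, e.g. the subsequence 1, 2, 3, 5
print(round(1 - lis_length(arr) / len(arr), 3))  # eps_f = 1 - |LIS|/n = 0.636
```
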
{"title":"Estimating the Longest Increasing Sequence in Polylogarithmic Time","authors":"M. Saks, C. Seshadhri","doi":"10.1137/130942152","DOIUrl":"https://doi.org/10.1137/130942152","url":null,"abstract":"Finding the length of the longest increasing subsequence (LIS) is a classic algorithmic problem. Let $n$ denote the size of the array. Simple O(n log n) time algorithms are known that determine the LIS exactly. In this paper, we develop a randomized approximation algorithm, that for any constant delta > 0, runs in time polylogarithmic in n and estimates the length of the LIS of an array up to an additive error of (delta n). The algorithm presented in this extended abstract runs in time (log n)^{O(1/delta)}. In the full paper, we will give an improved version of the algorithm with running time (log n)^c (1/delta)^{O(1/delta)} where the exponent c is independent of delta. Previously, the best known polylogarithmic time algorithms could only achieve an additive n/2-approximation. Our techniques also yield a fast algorithm for estimating the distance to monotonicity to within a small multiplicative factor. The distance of f to monotonicity, eps_f, is equal to 1 - |LIS|/n (the fractional length of the complement of the LIS). For any delta > 0, we give an algorithm with running time O((eps^{-1}_f log n)^{O(1/delta)}) that outputs a (1+delta)-multiplicative approximation to eps_f. This can be improved so that the exponent is a fixed constant. The previously known polylogarithmic algorithms gave only a 2-approximation.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132156299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testing Properties of Sparse Images
D. Ron, Gilad Tsur. DOI: 10.1145/2635806

We initiate the study of testing properties of images that correspond to sparse 0/1-valued matrices of size n × n. Our study is related to, but different from, the study initiated by Raskhodnikova (Proceedings of RANDOM, 2003), where the images correspond to dense 0/1-valued matrices. Specifically, while the distance between images in the model studied by Raskhodnikova is the fraction of entries on which the images differ taken with respect to all $n^2$ entries, the distance measure in our model is defined by the fraction of such entries taken with respect to the actual number of 1's in the matrix. We study several natural properties: connectivity, convexity, monotonicity, and being a line. In all cases we give testing algorithms with sublinear complexity, and in some of the cases we also provide corresponding lower bounds.
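
A small sketch of the distance measure that distinguishes this model from the dense one (one natural reading of the definition; the paper's exact normalization may differ slightly).

```python
# Distance between sparse 0/1 images stored as sets of 1-coordinates:
# symmetric difference normalized by the number of 1s (here, of the larger
# image) instead of by all n^2 entries as in the dense model.
def sparse_distance(A, B):
    """A, B: sets of (row, col) positions holding 1."""
    denom = max(len(A), len(B))
    return len(A ^ B) / denom if denom else 0.0

line = {(i, i) for i in range(100)}           # a diagonal line
noisy = (line - {(3, 3), (7, 7)}) | {(5, 9)}  # two deletions, one insertion
print(sparse_distance(line, noisy))           # 3/100 = 0.03 in this measure
```
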
{"title":"Testing Properties of Sparse Images","authors":"D. Ron, Gilad Tsur","doi":"10.1145/2635806","DOIUrl":"https://doi.org/10.1145/2635806","url":null,"abstract":"We initiate the study of testing properties of images that correspond to sparse 0/1-valued matrices of size n × n. Our study is related to but different from the study initiated by Raskhodnikova (Proceedings of RANDOM, 2003), where the images correspond to dense 0/1-valued matrices. Specifically, while distance between images in the model studied by Raskhodnikova is the fraction of entries on which the images differ taken with respect to all n^2 entries, the distance measure in our model is defined by the fraction of such entries taken with respect to the actual number of 1’s in the matrix. We study several natural properties: connectivity, convexity, monotonicity, and being a line. In all cases we give testing algorithms with sub linear complexity, and in some of the cases we also provide corresponding lower bounds.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121420110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Coin Problem and Pseudorandomness for Branching Programs
Joshua Brody, Elad Verbin. DOI: 10.1109/FOCS.2010.10

The Coin Problem is the following problem: a coin is given, which lands on heads with probability either $1/2 + \beta$ or $1/2 - \beta$. We are given the outcome of $n$ independent tosses of this coin, and the goal is to guess which way the coin is biased, and to answer correctly with probability $\ge 2/3$. When our computational model is unrestricted, the majority function is optimal, and succeeds when $\beta \ge c/\sqrt{n}$ for a large enough constant $c$. The coin problem is open and interesting in models that cannot compute the majority function. In this paper we study the coin problem in the model of read-once width-$w$ branching programs. We prove that in order to succeed in this model, $\beta$ must be at least $1/(\log n)^{\Theta(w)}$. For constant $w$ this is tight by considering the recursive tribes function, and for other values of $w$ this is nearly tight by considering other read-once AND-OR trees. We generalize this to a Dice Problem, where instead of independent tosses of a coin we are given independent tosses of one of two $m$-sided dice. We prove that if the distributions are too close and the mass of each side of the dice is not too small, then the dice cannot be distinguished by small-width read-once branching programs. We suggest one application for this kind of theorem: we prove that Nisan's generator fools width-$w$ read-once regular branching programs, using seed length $O(w^4 \log n \log\log n + \log n \log(1/\epsilon))$. For $w=\epsilon=\Theta(1)$, this seed length is $O(\log n \log\log n)$. The coin theorem and its relatives might have other connections to PRGs. This application is related to the independent, but chronologically earlier, work of Braverman, Rao, Raz and Yehudayoff [BRRY].
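
A quick simulation of the unrestricted-model baseline mentioned above (our sketch): majority over the $n$ tosses starts to succeed once $\beta$ is a few multiples of $1/\sqrt{n}$.

```python
# Majority vote on the coin problem: draw a hidden bias of 1/2 + beta or
# 1/2 - beta, observe n tosses, guess from the majority outcome.
import random

def majority_guess(n, beta, rng):
    bias = rng.choice((+1, -1))                 # which way the coin is biased
    heads = sum(rng.random() < 0.5 + bias * beta for _ in range(n))
    return (+1 if 2 * heads > n else -1) == bias

rng, n, trials = random.Random(0), 2500, 400
for beta in (0.5 / n ** 0.5, 2 / n ** 0.5, 8 / n ** 0.5):
    acc = sum(majority_guess(n, beta, rng) for _ in range(trials)) / trials
    print(f"beta = {beta:.3f}: majority correct {acc:.0%}")
```
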
{"title":"The Coin Problem and Pseudorandomness for Branching Programs","authors":"Joshua Brody, Elad Verbin","doi":"10.1109/FOCS.2010.10","DOIUrl":"https://doi.org/10.1109/FOCS.2010.10","url":null,"abstract":"The emph{Coin Problem} is the following problem: a coin is given, which lands on head with probability either $1/2 + beta$ or $1/2 - beta$. We are given the outcome of $n$ independent tosses of this coin, and the goal is to guess which way the coin is biased, and to answer correctly with probability $ge 2/3$. When our computational model is unrestricted, the majority function is optimal, and succeeds when $beta ge c /sqrt{n}$ for a large enough constant $c$. The coin problem is open and interesting in models that cannot compute the majority function. In this paper we study the coin problem in the model of emph{read-once width-$w$ branching programs}. We prove that in order to succeed in this model, $beta$ must be at least $1/ (log n)^{Theta(w)}$. For constant $w$ this is tight by considering the recursive tribes function, and for other values of $w$ this is nearly tight by considering other read-once AND-OR trees. We generalize this to a emph{Dice Problem}, where instead of independent tosses of a coin we are given independent tosses of one of two $m$-sided dice. We prove that if the distributions are too close and the mass of each side of the dice is not too small, then the dice cannot be distinguished by small-width read-once branching programs. We suggest one application for this kind of theorems: we prove that Nisan's Generator fools width-$w$ read-once emph{regular} branching programs, using seed length $O(w^4 log n log log n + log n log (1/eps))$. For $w=eps=Theta(1)$, this seed length is $O(log n log log n)$. The coin theorem and its relatives might have other connections to PRGs. This application is related to the independent, but chronologically-earlier, work of Braver man, Rao, Raz and Yehudayoff~cite{BRRY}.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"1027 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116258373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Subcubic Equivalences between Path, Matrix and Triangle Problems
V. V. Williams, Ryan Williams. DOI: 10.1145/3186893

We say an algorithm on n by n matrices with entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly subcubic if it runs in $O(n^{3-\delta}\,\mathrm{poly}(\log M))$ time for some $\delta > 0$. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in $O(n^3)$ time are equivalent under subcubic reductions. Namely, the following weighted problems either all have truly subcubic algorithms, or none of them do:

- The all-pairs shortest paths problem (APSP).
- Detecting if a weighted graph has a triangle of negative total edge weight.
- Listing up to $n^{2.99}$ negative triangles in an edge-weighted graph.
- Finding a minimum weight cycle in a graph of non-negative edge weights.
- The replacement paths problem in an edge-weighted digraph.
- Finding the second shortest simple path between two nodes in an edge-weighted digraph.
- Checking whether a given matrix defines a metric.
- Verifying the correctness of a matrix product over the (min, +)-semiring.

Therefore, if APSP cannot be solved in $n^{3-\epsilon}$ time for any $\epsilon > 0$, then many other problems also need essentially cubic time. In fact we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on subcubic algorithms for all-pairs path problems, since it now suffices to give appropriate subcubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.
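
Two of the problems in the list are easy to state as cubic-time baselines; a sketch of ours is below. By the reductions above, a truly subcubic algorithm for either routine would make every problem in the list subcubic.

```python
# Naive cubic (min,+)-matrix product and negative-triangle detection, the two
# routines tied together by the subcubic equivalences.
def min_plus(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def has_negative_triangle(W):
    n = len(W)
    return any(W[i][j] + W[j][k] + W[k][i] < 0
               for i in range(n) for j in range(n) for k in range(n))

W = [[ 0, 1, 5],
     [ 5, 0, 1],
     [-4, 5, 0]]
print(min_plus(W, W))
print(has_negative_triangle(W))  # True: the cycle 0 -> 1 -> 2 -> 0 weighs 1 + 1 - 4 = -2
```
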
{"title":"Subcubic Equivalences between Path, Matrix and Triangle Problems","authors":"V. V. Williams, Ryan Williams","doi":"10.1145/3186893","DOIUrl":"https://doi.org/10.1145/3186893","url":null,"abstract":"We say an algorithm on n by n matrices with entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly sub cubic if it runs in O(n^{3-delta} poly(log M)) time for some delta > 0. We define a notion of sub cubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under sub cubic reductions. Namely, the following weighted problems either all have truly sub cubic algorithms, or none of them do: - The all-pairs shortest paths problem (APSP). - Detecting if a weighted graph has a triangle of negative total edge weight. - Listing up to n^{2.99} negative triangles in an edge-weighted graph. - Finding a minimum weight cycle in a graph of non-negative edge weights. - The replacement paths problem in an edge-weighted digraph. - Finding the second shortest simple path between two nodes in an edge-weighted digraph. - Checking whether a given matrix defines a metric. - Verifying the correctness of a matrix product over the (min, +)-semiring. Therefore, if APSP cannot be solved in n^{3-eps} time for any eps > 0, then many other problems also need essentially cubic time. In fact we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on sub cubic algorithms for all-pairs path problems, since it now suffices to give appropriate sub cubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133440712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pure and Bayes-Nash Price of Anarchy for Generalized Second Price Auction
R. Leme, É. Tardos. DOI: 10.1109/FOCS.2010.75

The Generalized Second Price Auction has been the main mechanism used by search companies to auction positions for advertisements on search pages. In this paper we study the social welfare of the Nash equilibria of this game in various models. In the full information setting, socially optimal Nash equilibria are known to exist (i.e., the Price of Stability is 1). This paper is the first to prove bounds on the price of anarchy, and to give any bounds in the Bayesian setting. Our main result is to show that the price of anarchy is small assuming that all bidders play undominated strategies. In the full information setting we prove a bound of 1.618 on the price of anarchy for pure Nash equilibria, and a bound of 4 for mixed Nash equilibria. We also prove a bound of 8 on the price of anarchy in the Bayesian setting, where valuations are drawn independently, each valuation is known only to its bidder, and only the distributions used are common knowledge. Our proof exhibits a combinatorial structure of Nash equilibria and uses this structure to bound the price of anarchy. While establishing the structure is simple in the case of pure and mixed Nash equilibria, the extension to the Bayesian setting requires the use of novel combinatorial techniques that may be of independent interest.
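
A minimal model of the mechanism itself (our sketch, with hypothetical bids and click-through rates): bidders are ranked by bid, slot $i$ goes to the $i$-th highest bidder, and that bidder pays the $(i+1)$-st highest bid per click.

```python
# Generalized Second Price auction, minimal sketch: rank bidders by bid and
# charge each slot winner the next-highest bid per click.
def gsp(bids, ctrs):
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    outcome = []
    for slot, ctr in enumerate(ctrs):
        winner = order[slot]
        price = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
        outcome.append((winner, ctr, price))
    return outcome  # (bidder, click-through rate, price per click)

bids = [4.0, 7.0, 5.0]   # one bid per bidder (hypothetical values)
ctrs = [0.5, 0.25]       # two slots, higher slot gets more clicks
for bidder, ctr, price in gsp(bids, ctrs):
    print(f"bidder {bidder}: ctr {ctr}, pays {price} per click")
# Social welfare at valuations v is sum(v[bidder] * ctr) over the assignment.
```
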
{"title":"Pure and Bayes-Nash Price of Anarchy for Generalized Second Price Auction","authors":"R. Leme, É. Tardos","doi":"10.1109/FOCS.2010.75","DOIUrl":"https://doi.org/10.1109/FOCS.2010.75","url":null,"abstract":"The Generalized Second Price Auction has been the main mechanism used by search companies to auction positions for advertisements on search pages. In this paper we study the social welfare of the Nash equilibria of this game in various models. In the full information setting, socially optimal Nash equilibria are known to exist (i.e., the Price of Stability is 1). This paper is the first to prove bounds on the price of anarchy, and to give any bounds in the Bayesian setting. Our main result is to show that the price of anarchy is small assuming that all bidders play un-dominated strategies. In the full information setting we prove a bound of 1.618 for the price of anarchy for pure Nash equilibria, and a bound of 4 for mixed Nash equilibria. We also prove a bound of 8 for the price of anarchy in the Bayesian setting, when valuations are drawn independently, and the valuation is known only to the bidder and only the distributions used are common knowledge. Our proof exhibits a combinatorial structure of Nash equilibria and uses this structure to bound the price of anarchy. While establishing the structure is simple in the case of pure and mixed Nash equilibria, the extension to the Bayesian setting requires the use of novel combinatorial techniques that can be of independent interest.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"351 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115971994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}