
Latest publications: 2010 IEEE 51st Annual Symposium on Foundations of Computer Science

The Monotone Complexity of k-clique on Random Graphs
Pub Date : 2010-10-23 DOI: 10.1137/110839059
Benjamin Rossman
It is widely suspected that Erdős-Rényi random graphs are a source of hard instances for clique problems. Giving further evidence for this belief, we prove the first average-case hardness result for the $k$-clique problem on monotone circuits. Specifically, we show that no monotone circuit of size $O(n^{k/4})$ solves the $k$-clique problem with high probability on $ER(n,p)$ for two sufficiently far-apart threshold functions $p(n)$ (for instance $n^{-2/(k-1)}$ and $2n^{-2/(k-1)}$). Moreover, the exponent $k/4$ in this result is tight up to an additive constant. One technical contribution of this paper is the introduction of "quasi-sunflowers", a new relaxation of sunflowers in which petals may overlap slightly on average. A "quasi-sunflower lemma" (a la the Erdős-Rado sunflower lemma) leads to our novel lower bounds within Razborov's method of approximations.
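The problem setup can be made concrete with a brute-force reference implementation. This is emphatically not the paper's monotone-circuit construction, and the helper names `sample_er` and `has_k_clique` are ours: sample a graph from $ER(n,p)$ at the critical threshold $p(n) = n^{-2/(k-1)}$ and search for a $k$-clique exhaustively.

```python
import itertools
import random

def sample_er(n, p, rng):
    """Sample an Erdős-Rényi graph ER(n, p) as a set of frozenset edges."""
    return {frozenset((u, v))
            for u in range(n) for v in range(u + 1, n)
            if rng.random() < p}

def has_k_clique(n, edges, k):
    """Exhaustive k-clique search: O(n^k) work, far above the O(n^{k/4})
    monotone-circuit size the paper rules out."""
    return any(all(frozenset(pair) in edges
                   for pair in itertools.combinations(sub, 2))
               for sub in itertools.combinations(range(n), k))

rng = random.Random(0)
n, k = 12, 4
p = n ** (-2.0 / (k - 1))  # the critical threshold p(n) from the abstract
found = has_k_clique(n, sample_er(n, p, rng), k)
```

At this threshold the expected number of $k$-cliques is constant, which is precisely what makes instances near $p(n)$ delicate to decide.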
Citations: 57
Local List Decoding with a Constant Number of Queries
Pub Date : 2010-10-23 DOI: 10.1109/FOCS.2010.88
Avraham Ben-Aroya, K. Efremenko, A. Ta-Shma
Recently Efremenko showed locally-decodable codes of sub-exponential length. That result showed that these codes can handle up to a $\frac{1}{3}$ fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate $1/2 - \alpha$ for any $\alpha > 0$ and locally list-decoded from error rate $1 - \alpha$ for any $\alpha > 0$, with only a constant number of queries and a constant alphabet size. This gives the first sub-exponential codes that can be locally list-decoded with a constant number of queries.
Citations: 34
The Sub-exponential Upper Bound for On-Line Chain Partitioning
Pub Date : 2010-10-23 DOI: 10.1109/FOCS.2010.40
B. Bosek, Tomasz Krawczyk
The main question in the on-line chain partitioning problem is to determine whether there exists an algorithm that partitions on-line posets of width at most $w$ into a polynomial number of chains; see Trotter's chapter on partially ordered sets in the Handbook of Combinatorics. So far the best known on-line algorithm, due to Kierstead, uses at most $(5^w - 1)/4$ chains; on the other hand, Szemerédi proved that any on-line algorithm requires at least $\binom{w+1}{2}$ chains. These results were obtained in the early eighties, and since then no progress has been made in the general case. We provide an on-line algorithm that partitions orders of width $w$ into at most $w^{16\log w}$ chains. This yields the first sub-exponential upper bound for the on-line chain partitioning problem.
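For intuition about the on-line model, here is a minimal first-fit heuristic. It is not the paper's algorithm and carries no $w^{16\log w}$ guarantee; it only illustrates that elements arrive one at a time and must be assigned to chains irrevocably.

```python
def first_fit_chains(elements, leq):
    """Greedy on-line chain partitioning: place each arriving element at the
    end of the first chain all of whose elements lie below it, opening a new
    chain otherwise."""
    chains = []
    for x in elements:
        for chain in chains:
            if all(leq(y, x) for y in chain):
                chain.append(x)
                break
        else:
            chains.append([x])
    return chains

def divides(a, b):
    return b % a == 0

# Divisibility poset: the antichain {2, 3, 5} forces at least three chains.
parts = first_fit_chains([1, 2, 3, 6, 5], divides)
```

Here first-fit happens to match the width, but adversarial arrival orders can force it to use many more chains than the width requires, which is exactly the difficulty the paper addresses.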
Citations: 14
Learning Convex Concepts from Gaussian Distributions with PCA
Pub Date : 2010-10-23 DOI: 10.1109/FOCS.2010.19
S. Vempala
We present a new algorithm for learning a convex set in $n$-dimensional space given labeled examples drawn from any Gaussian distribution. The complexity of the algorithm is bounded by a fixed polynomial in $n$ times a function of $k$ and $\epsilon$, where $k$ is the dimension of the normal subspace (the span of normal vectors to supporting hyperplanes of the convex set) and the output is a hypothesis that correctly classifies at least $1 - \epsilon$ of the unknown Gaussian distribution. For the important case when the convex set is the intersection of $k$ halfspaces, the complexity is $poly(n, k, 1/\epsilon) + n \cdot \min\{k^{O(\log k/\epsilon^4)},\ (k/\epsilon)^{O(k)}\}$, improving substantially on the state of the art [Vem04, KOS08] for Gaussian distributions. The key step of the algorithm is a Singular Value Decomposition after applying a normalization. The proof is based on a monotonicity property of Gaussian space under convex restrictions.
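The flavor of the key step, recovering the normal subspace from labeled Gaussian data, can be illustrated in the simplest case $k = 1$ (a single halfspace through the origin), where the difference of the two class means already points along the normal vector. This pure-Python sketch is our simplification standing in for the normalize-then-SVD step, not the paper's algorithm.

```python
import math
import random

def normal_direction_estimate(samples, labels):
    """Difference-of-class-means estimate of a halfspace's normal vector:
    for a symmetric Gaussian, E[x | w.x > 0] is proportional to w, so the
    gap between class means recovers the normal direction (k = 1 case)."""
    dim = len(samples[0])
    pos = [x for x, lab in zip(samples, labels) if lab]
    neg = [x for x, lab in zip(samples, labels) if not lab]
    diff = [sum(x[i] for x in pos) / len(pos) - sum(x[i] for x in neg) / len(neg)
            for i in range(dim)]
    norm = math.sqrt(sum(d * d for d in diff))
    return [d / norm for d in diff]

rng = random.Random(1)
w_true = [0.6, 0.8, 0.0]  # unknown unit normal of the target halfspace
xs = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(4000)]
ys = [sum(wi * xi for wi, xi in zip(w_true, x)) > 0 for x in xs]
w_hat = normal_direction_estimate(xs, ys)  # should align closely with w_true
```

For $k > 1$ intersecting halfspaces the class means no longer suffice, which is why the algorithm in the paper needs the full SVD of normalized samples.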
Citations: 31
Replacement Paths via Fast Matrix Multiplication
Pub Date : 2010-10-23 DOI: 10.1109/FOCS.2010.68
O. Weimann, R. Yuster
Let G be a directed edge-weighted graph and let P be a shortest path from s to t in G. The replacement paths problem asks to compute, for every edge e on P, the shortest s-to-t path that avoids e. Apart from approximation algorithms and algorithms for special graph classes, the naive solution to this problem – removing each edge e on P one at a time and computing the shortest s-to-t path each time – is surprisingly the only known solution for directed weighted graphs, even when the weights are integral. In particular, although the related shortest paths problem has benefited from fast matrix multiplication, the replacement paths problem has not, and still requires cubic time. For an n-vertex graph with integral edge-lengths between -M and M, we give a randomized algorithm that uses fast matrix multiplication and is sub-cubic for appropriate values of M. We also show how to construct a distance sensitivity oracle in the same time bounds. A query (u,v,e) to this oracle requires sub-quadratic time and returns the length of the shortest u-to-v path that avoids the edge e. In fact, for any constant number of edge failures, we construct a data structure in sub-cubic time that answers queries in sub-quadratic time. Our results also apply to avoiding vertices rather than edges.
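The naive scheme the abstract refers to is easy to state in code: run one shortest-path computation to find P, then one more per edge of P with that edge banned. This sketch (our helper names, restricted to non-negative weights so Dijkstra applies, and assuming t is reachable) is the cubic baseline, not the paper's sub-cubic algorithm.

```python
import heapq

def dijkstra(adj, s, t, banned=frozenset()):
    """Shortest s-to-t distance and path, skipping directed edges in `banned`."""
    dist, prev, pq = {s: 0}, {}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, []):
            if (u, v) in banned:
                continue
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if t not in dist:
        return float('inf'), None
    path, node = [], t
    while node != s:
        path.append(node)
        node = prev[node]
    path.append(s)
    return dist[t], path[::-1]

def replacement_paths(adj, s, t):
    """The naive scheme: one extra shortest-path computation per edge of P."""
    _, path = dijkstra(adj, s, t)
    return {(u, v): dijkstra(adj, s, t, banned={(u, v)})[0]
            for u, v in zip(path, path[1:])}

# Tiny digraph: shortest 0-to-3 path is 0 -> 1 -> 3 with length 2.
adj = {0: [(1, 1), (2, 2)], 1: [(3, 1), (2, 1)], 2: [(3, 2)], 3: []}
rp = replacement_paths(adj, 0, 3)  # best detours around (0,1) and (1,3)
```

With up to n - 1 edges on P and roughly quadratic work per Dijkstra run on dense graphs, this is the cubic bound the paper's matrix-multiplication approach breaks.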
Citations: 26
Estimating the Longest Increasing Sequence in Polylogarithmic Time
Pub Date : 2010-10-23 DOI: 10.1137/130942152
M. Saks, C. Seshadhri
Finding the length of the longest increasing subsequence (LIS) is a classic algorithmic problem. Let $n$ denote the size of the array. Simple $O(n \log n)$ time algorithms are known that determine the LIS exactly. In this paper, we develop a randomized approximation algorithm that, for any constant $\delta > 0$, runs in time polylogarithmic in $n$ and estimates the length of the LIS of an array up to an additive error of $\delta n$. The algorithm presented in this extended abstract runs in time $(\log n)^{O(1/\delta)}$. In the full paper, we will give an improved version of the algorithm with running time $(\log n)^c (1/\delta)^{O(1/\delta)}$, where the exponent $c$ is independent of $\delta$. Previously, the best known polylogarithmic time algorithms could only achieve an additive $n/2$-approximation. Our techniques also yield a fast algorithm for estimating the distance to monotonicity to within a small multiplicative factor. The distance of $f$ to monotonicity, $\epsilon_f$, is equal to $1 - |LIS|/n$ (the fractional length of the complement of the LIS). For any $\delta > 0$, we give an algorithm with running time $O((\epsilon_f^{-1} \log n)^{O(1/\delta)})$ that outputs a $(1+\delta)$-multiplicative approximation to $\epsilon_f$. This can be improved so that the exponent is a fixed constant. The previously known polylogarithmic algorithms gave only a 2-approximation.
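The exact $O(n \log n)$ baseline the abstract mentions is patience sorting. A compact version (our illustration of the classical algorithm, not part of the paper):

```python
import bisect

def lis_length(a):
    """Exact O(n log n) LIS length via patience sorting: tails[i] holds the
    smallest possible tail of a strictly increasing subsequence of length i+1."""
    tails = []
    for x in a:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

# Distance to monotonicity eps_f = 1 - |LIS|/n: here |LIS| = 4 on 8 elements,
# so eps_f = 0.5.
length = lis_length([3, 1, 4, 1, 5, 9, 2, 6])
```

The paper's point is that this linear pass over the array is already too slow in the sublinear regime; its estimator reads only polylogarithmically many entries.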
Citations: 40
Testing Properties of Sparse Images
Pub Date : 2010-10-23 DOI: 10.1145/2635806
D. Ron, Gilad Tsur
We initiate the study of testing properties of images that correspond to sparse 0/1-valued matrices of size $n \times n$. Our study is related to but different from the study initiated by Raskhodnikova (Proceedings of RANDOM, 2003), where the images correspond to dense 0/1-valued matrices. Specifically, while the distance between images in the model studied by Raskhodnikova is the fraction of entries on which the images differ, taken with respect to all $n^2$ entries, the distance measure in our model is defined by the fraction of such entries taken with respect to the actual number of 1's in the matrix. We study several natural properties: connectivity, convexity, monotonicity, and being a line. In all cases we give testing algorithms with sublinear complexity, and in some of the cases we also provide corresponding lower bounds.
Citations: 18
The Coin Problem and Pseudorandomness for Branching Programs
Pub Date : 2010-10-23 DOI: 10.1109/FOCS.2010.10
Joshua Brody, Elad Verbin
The Coin Problem is the following problem: a coin is given, which lands on heads with probability either $1/2 + \beta$ or $1/2 - \beta$. We are given the outcomes of $n$ independent tosses of this coin, and the goal is to guess which way the coin is biased and to answer correctly with probability at least $2/3$. When our computational model is unrestricted, the majority function is optimal, and succeeds when $\beta \ge c/\sqrt{n}$ for a large enough constant $c$. The coin problem is open and interesting in models that cannot compute the majority function. In this paper we study the coin problem in the model of read-once width-$w$ branching programs. We prove that in order to succeed in this model, $\beta$ must be at least $1/(\log n)^{\Theta(w)}$. For constant $w$ this is tight, by considering the recursive tribes function, and for other values of $w$ this is nearly tight, by considering other read-once AND-OR trees. We generalize this to a Dice Problem, where instead of independent tosses of a coin we are given independent tosses of one of two $m$-sided dice. We prove that if the distributions are too close and the mass of each side of the dice is not too small, then the dice cannot be distinguished by small-width read-once branching programs. We suggest one application for this kind of theorem: we prove that Nisan's generator fools width-$w$ read-once regular branching programs, using seed length $O(w^4 \log n \log\log n + \log n \log(1/\epsilon))$. For $w = \epsilon = \Theta(1)$, this seed length is $O(\log n \log\log n)$. The coin theorem and its relatives might have other connections to PRGs. This application is related to the independent, but chronologically earlier, work of Braverman, Rao, Raz and Yehudayoff [BRRY].
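The unrestricted-model claim is easy to check empirically: at bias $\beta = c/\sqrt{n}$ with a modest constant $c$, majority voting already distinguishes the two coins reliably. A small simulation (our illustration; the function names are ours):

```python
import random

def majority_guess(tosses):
    """Unrestricted-model optimum from the abstract: output 'heads-biased'
    iff heads occurred in more than half of the tosses."""
    return sum(tosses) * 2 > len(tosses)

def success_rate(n, beta, trials, rng):
    """Empirical probability that majority correctly identifies a coin whose
    heads probability is 1/2 + beta."""
    hits = sum(majority_guess([rng.random() < 0.5 + beta for _ in range(n)])
               for _ in range(trials))
    return hits / trials

rng = random.Random(0)
n = 2500
beta = 2 / n ** 0.5  # beta = c/sqrt(n) with c = 2, inside the solvable regime
rate = success_rate(n, beta, 200, rng)
```

The paper's lower bound says a width-$w$ read-once branching program, which cannot compute majority, needs the far larger bias $1/(\log n)^{\Theta(w)}$ to succeed.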
Citations: 75
Subcubic Equivalences between Path, Matrix and Triangle Problems
Pub Date : 2010-10-23 DOI: 10.1145/3186893
V. V. Williams, Ryan Williams
We say an algorithm on $n \times n$ matrices with entries in $[-M, M]$ (or $n$-node graphs with edge weights from $[-M, M]$) is truly subcubic if it runs in $O(n^{3-\delta}\, poly(\log M))$ time for some $\delta > 0$. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in $O(n^3)$ time are equivalent under subcubic reductions. Namely, the following weighted problems either all have truly subcubic algorithms, or none of them do:

- The all-pairs shortest paths problem (APSP).
- Detecting if a weighted graph has a triangle of negative total edge weight.
- Listing up to $n^{2.99}$ negative triangles in an edge-weighted graph.
- Finding a minimum weight cycle in a graph of non-negative edge weights.
- The replacement paths problem in an edge-weighted digraph.
- Finding the second shortest simple path between two nodes in an edge-weighted digraph.
- Checking whether a given matrix defines a metric.
- Verifying the correctness of a matrix product over the (min, +)-semiring.

Therefore, if APSP cannot be solved in $n^{3-\epsilon}$ time for any $\epsilon > 0$, then many other problems also need essentially cubic time. In fact we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on subcubic algorithms for all-pairs path problems, since it now suffices to give appropriate subcubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.
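The negative triangle problem at the center of these equivalences has an obvious cubic algorithm, which is exactly the baseline the reductions tie to APSP. A sketch (ours, for an undirected graph given as a symmetric weight matrix):

```python
def has_negative_triangle(w):
    """Naive O(n^3) negative-triangle detection; `w` is a symmetric n x n
    weight matrix with None marking a missing edge.  By the paper's
    equivalences, a truly subcubic algorithm for this problem would yield
    one for APSP, and vice versa."""
    n = len(w)
    for i in range(n):
        for j in range(i + 1, n):
            if w[i][j] is None:
                continue
            for k in range(j + 1, n):
                if (w[i][k] is not None and w[j][k] is not None
                        and w[i][j] + w[i][k] + w[j][k] < 0):
                    return True
    return False
```

Checking every vertex triple is what makes this cubic; the equivalences show that shaving the exponent here is just as hard as shaving it for all-pairs shortest paths.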
Subcubic Equivalences between Path, Matrix and Triangle Problems 路径、矩阵和三角形问题间的次三次等价
Pub Date : 2010-10-23 DOI: 10.1145/3186893
V. V. Williams, Ryan Williams
We say an algorithm on n by n matrices with entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly subcubic if it runs in O(n^{3-delta} poly(log M)) time for some delta > 0. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under subcubic reductions. Namely, the following weighted problems either all have truly subcubic algorithms, or none of them do:
- The all-pairs shortest paths problem (APSP).
- Detecting if a weighted graph has a triangle of negative total edge weight.
- Listing up to n^{2.99} negative triangles in an edge-weighted graph.
- Finding a minimum weight cycle in a graph of non-negative edge weights.
- The replacement paths problem in an edge-weighted digraph.
- Finding the second shortest simple path between two nodes in an edge-weighted digraph.
- Checking whether a given matrix defines a metric.
- Verifying the correctness of a matrix product over the (min, +)-semiring.
Therefore, if APSP cannot be solved in n^{3-eps} time for any eps > 0, then many other problems also need essentially cubic time. In fact, we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on subcubic algorithms for all-pairs path problems, since it now suffices to give appropriate subcubic triangle detection algorithms. Another consequence of our work is a new combinatorial approach to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.
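All of the equivalences above are measured against the O(n^3) brute-force baseline. As a concrete anchor, here is a minimal sketch (not from the paper) of cubic-time negative-triangle detection, the simplest problem in the equivalence class; the paper's result says that beating this running time by a polynomial factor would yield truly subcubic algorithms for all of the listed problems.

```python
def has_negative_triangle(w):
    """Brute-force O(n^3) check: does any triple of vertices form a triangle
    of negative total edge weight? w is a symmetric n x n weight matrix."""
    n = len(w)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if w[i][j] + w[j][k] + w[i][k] < 0:
                    return True
    return False

w = [[0, 1, -3],
     [1, 0, 1],
     [-3, 1, 0]]
print(has_negative_triangle(w))  # -> True: triangle 0-1-2 has weight 1 + 1 - 3 = -1
```

The weight matrix and vertex count here are purely illustrative; the point is the triple nested loop giving the cubic baseline that subcubic reductions aim to beat.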
Citations: 400
Pure and Bayes-Nash Price of Anarchy for Generalized Second Price Auction 广义二次价格拍卖中无政府状态的纯粹价格和贝叶斯-纳什价格
Pub Date : 2010-10-23 DOI: 10.1109/FOCS.2010.75
R. Leme, É. Tardos
The Generalized Second Price Auction has been the main mechanism used by search companies to auction positions for advertisements on search pages. In this paper we study the social welfare of the Nash equilibria of this game in various models. In the full information setting, socially optimal Nash equilibria are known to exist (i.e., the Price of Stability is 1). This paper is the first to prove bounds on the price of anarchy, and to give any bounds in the Bayesian setting. Our main result is to show that the price of anarchy is small assuming that all bidders play undominated strategies. In the full information setting we prove a bound of 1.618 for the price of anarchy for pure Nash equilibria, and a bound of 4 for mixed Nash equilibria. We also prove a bound of 8 for the price of anarchy in the Bayesian setting, when valuations are drawn independently, and the valuation is known only to the bidder and only the distributions used are common knowledge. Our proof exhibits a combinatorial structure of Nash equilibria and uses this structure to bound the price of anarchy. While establishing the structure is simple in the case of pure and mixed Nash equilibria, the extension to the Bayesian setting requires the use of novel combinatorial techniques that can be of independent interest.
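The mechanism being analyzed can be illustrated with a short sketch (not from the paper): slots have click-through rates, the i-th highest bidder is assigned slot i, and social welfare is the click-through-weighted sum of the winners' valuations. The valuations, click-through rates, and the shaded bid profile below are all hypothetical, and the shaded profile is used only to show welfare loss from non-truthful bidding — it is not verified to be a Nash equilibrium.

```python
def gsp_welfare(values, bids, alphas):
    """Social welfare of the GSP allocation: order bidders by decreasing bid,
    assign the s-th highest bidder to slot s, and sum CTR * value over slots."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    return sum(a * values[order[s]] for s, a in enumerate(alphas))

values = [10, 6, 2]   # hypothetical per-click valuations
alphas = [1.0, 0.5]   # hypothetical click-through rates for two slots

optimal = gsp_welfare(values, values, alphas)    # truthful bids sort by value
shaded = gsp_welfare(values, [3, 6, 2], alphas)  # bidder 0 shades below bidder 1
print(optimal, shaded)  # -> 13.0 11.0
```

In this toy instance the welfare ratio is 13/11, about 1.18, consistent with (and below) the paper's 1.618 bound for pure Nash equilibria in the full information setting.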
Citations: 115