For a pair of parameters $\alpha,\beta \ge 1$, a spanning tree $T$ of a weighted undirected $n$-vertex graph $G = (V,E,w)$ is called an \emph{$(\alpha,\beta)$-shallow-light tree} (shortly, $(\alpha,\beta)$-SLT) of $G$ with respect to a designated vertex $rt \in V$ if (1) it approximates all distances from $rt$ to the other vertices up to a factor of $\alpha$, and (2) its weight is at most $\beta$ times the weight of the minimum spanning tree $MST(G)$ of $G$. The parameter $\alpha$ (respectively, $\beta$) is called the \emph{root-distortion} (resp., \emph{lightness}) of the tree $T$. Shallow-light trees (SLTs) constitute a fundamental graph structure, with numerous theoretical and practical applications. In particular, they have been used for constructing spanners, in network design, for VLSI-circuit design, for various data gathering and dissemination tasks in wireless and sensor networks, in overlay networks, and in the message-passing model of distributed computing. Tight tradeoffs between the parameters of SLTs were established by Awerbuch et al.~\cite{ABP90, ABP91} and Khuller et al.~\cite{KRY93}. They showed that for any $\epsilon > 0$ there always exist $(1+\epsilon, O(\frac{1}{\epsilon}))$-SLTs, and that the upper bound $\beta = O(\frac{1}{\epsilon})$ on the lightness of SLTs cannot be improved. In this paper we show that using Steiner points one can build SLTs with \emph{logarithmic lightness}, i.e., $\beta = O(\log \frac{1}{\epsilon})$. This establishes an \emph{exponential separation} between spanning SLTs and Steiner ones. One particularly remarkable point on our tradeoff curve is $\epsilon = 0$. In this regime our construction provides a \emph{shortest-path tree} with weight at most $O(\log n) \cdot w(MST(G))$. Moreover, we prove matching lower bounds showing that all our results are tight up to constant factors. Finally, on our way to these results we settle (up to constant factors) a number of open questions that were raised by Khuller et al.~\cite{KRY93} in SODA'93.
"Steiner Shallow-Light Trees are Exponentially Lighter than Spanning Ones," Michael Elkin and Shay Solomon. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-22. DOI: 10.1137/13094791X.
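The two SLT conditions above are easy to check directly. The sketch below is an illustrative verifier (the adjacency-list format, example graph, and all function names are assumptions of this sketch, not from the paper): it computes distances in $G$ and in the candidate tree $T$ from the root, plus $w(MST(G))$, and checks root-distortion and lightness.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src; adj maps vertex -> [(neighbor, weight)]."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def mst_weight(adj):
    """Prim's algorithm: total weight of a minimum spanning tree."""
    start = next(iter(adj))
    seen, total = {start}, 0
    pq = [(w, v) for v, w in adj[start]]
    heapq.heapify(pq)
    while pq and len(seen) < len(adj):
        w, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        total += w
        for v, wv in adj[u]:
            if v not in seen:
                heapq.heappush(pq, (wv, v))
    return total

def is_slt(graph_adj, tree_adj, root, alpha, beta):
    """Check both SLT conditions: root-distortion <= alpha, lightness <= beta."""
    dg = dijkstra(graph_adj, root)   # distances in G
    dt = dijkstra(tree_adj, root)    # distances in the candidate tree T
    tw = sum(w for u in tree_adj for _, w in tree_adj[u]) / 2  # each edge counted twice
    shallow = all(dt[v] <= alpha * dg[v] for v in graph_adj)
    light = tw <= beta * mst_weight(graph_adj)
    return shallow and light
```

For instance, on a unit-weight 4-cycle rooted at one vertex, the Hamiltonian path starting at the root is a $(3,1)$-SLT but not a $(2,1)$-SLT, since the far endpoint sits at tree distance 3 versus graph distance 1.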
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of ``short vector problems'' on arbitrary lattices. Our construction improves on previous works in two aspects:
\begin{enumerate}
\item We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new {\em re-linearization} technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings.
\item We deviate from the ``squashing paradigm'' used in all previous works. We introduce a new {\em dimension-modulus reduction} technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, {\em without introducing additional assumptions}.
\end{enumerate}
Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \mathrm{polylog}(k) + \log |DB|$ bits per single-bit query (here, $k$ is a security parameter).
"Efficient Fully Homomorphic Encryption from (Standard) LWE," Zvika Brakerski and V. Vaikuntanathan. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-22. DOI: 10.1109/FOCS.2011.12.
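To make the LWE starting point concrete, here is a toy symmetric Regev-style scheme with additive homomorphism only (ciphertext addition computes XOR of plaintext bits). This illustrates just the base layer the paper builds on; the re-linearization and dimension-modulus reduction techniques that make it *fully* homomorphic are the paper's contributions and are not reproduced here. All parameters and names are toy assumptions of this sketch.

```python
import random

Q, N = 1 << 15, 16   # toy modulus and dimension -- far too small for real security

def keygen():
    """Secret vector s in Z_Q^N."""
    return [random.randrange(Q) for _ in range(N)]

def encrypt(s, m):
    """Encrypt bit m as (a, <a,s> + e + m*Q/2) with small noise e."""
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randrange(-4, 5)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (Q // 2)) % Q
    return a, b

def decrypt(s, ct):
    """Recover the bit by checking whether the phase is nearer Q/2 than 0."""
    a, b = ct
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % Q
    return 1 if Q // 4 < phase < 3 * Q // 4 else 0

def add(ct1, ct2):
    """Homomorphic XOR: ciphertexts add coordinate-wise; noises add too."""
    a1, b1 = ct1
    a2, b2 = ct2
    return [(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q
```

Adding two ciphertexts adds their noise terms, so only a bounded number of homomorphic operations is possible before decryption fails; controlling that growth (especially under multiplication) is exactly what the paper's new techniques address.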
We present a new algebraic formulation to compute edge connectivities in a directed graph, using the ideas developed in network coding. This reduces the problem of computing edge connectivities to solving systems of linear equations, thus allowing us to use tools in linear algebra to design new algorithms. Using the algebraic formulation we obtain faster algorithms for computing single-source edge connectivities and all-pairs edge connectivities; in some settings the amortized time to compute the edge connectivity for one pair is sublinear. Through this connection, we have also found an interesting use of expanders and superconcentrators to design fast algorithms for some graph connectivity problems.
"Graph Connectivities, Network Coding, and Expander Graphs," Ho Yee Cheung, L. Lau, and K. M. Leung. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-22. DOI: 10.1137/110844970.
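The paper's algebraic method is not reproduced here; as a point of reference, the classical way to compute a single $s$-$t$ edge connectivity is a unit-capacity max-flow (Menger's theorem), which the new formulation improves upon in the single-source and all-pairs settings. A minimal Edmonds-Karp sketch (all names are illustrative):

```python
from collections import deque, defaultdict

def edge_connectivity(edges, s, t):
    """Classical baseline: s-t edge connectivity in a digraph equals the
    max-flow with unit edge capacities (Menger), via Edmonds-Karp BFS."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)   # also track the residual direction
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t                      # augment along the BFS path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

Running this per pair costs a max-flow each time; the contrast with the paper is that its linear-algebraic formulation amortizes work across many pairs, sometimes to sublinear time per pair.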
We consider the problem of testing whether a given function $f : F_q^n \rightarrow F_q$ is close to an $n$-variate degree-$d$ polynomial over the finite field $F_q$ of $q$ elements. The natural, low-query test for this property would be to pick the smallest dimension $t = t_{q,d} \approx d/q$ such that every function of degree greater than $d$ reveals this aspect on {\em some} $t$-dimensional affine subspace of $F_q^n$, and to test that $f$, when restricted to a {\em random} $t$-dimensional affine subspace, is a polynomial of degree at most $d$ on this subspace. Such a test makes only $q^t$ queries, independent of $n$. Previous works, by Alon et al.~\cite{AKKLR}, Kaufman and Ron~\cite{KaufmanRon06}, and Jutla et al.~\cite{JPRZ04}, showed that this natural test rejects functions that are $\Omega(1)$-far from degree-$d$ polynomials with probability at least $\Omega(q^{-t})$. (The initial work~\cite{AKKLR} considered only the case of $q=2$, while the work~\cite{JPRZ04} only considered the case of prime $q$. The results in \cite{KaufmanRon06} hold for all fields.) Thus, to get a constant probability of detecting functions that are at constant distance from the space of degree-$d$ polynomials, the tests made $q^{2t}$ queries. Kaufman and Ron also noted that when $q$ is prime, $q^t$ queries are necessary. Thus these tests were off by at least a quadratic factor from known lower bounds. Bhattacharyya et al.~\cite{BKSSZ10} gave an optimal analysis of this test for the case of the binary field and showed that the natural test actually rejects functions that are $\Omega(1)$-far from degree-$d$ polynomials with probability $\Omega(1)$. In this work we extend this result to all fields, showing that the natural test does indeed reject functions that are $\Omega(1)$-far from degree-$d$ polynomials with $\Omega(1)$ probability, where the constants depend only on the field size $q$. Our analysis thus shows that this test is optimal (matches known lower bounds) when $q$ is prime. The main technical ingredient in our work is a tight analysis of the number of ``hyperplanes'' (affine subspaces of co-dimension $1$) on which the restriction of a degree-$d$ polynomial has degree less than $d$. We show that the number of such hyperplanes is at most $O(q^{t_{q,d}})$, which is tight to within constant factors.
"Optimal Testing of Multivariate Polynomials over Small Prime Fields," Elad Haramaty, Amir Shpilka, and M. Sudan. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-22. DOI: 10.1137/120879257.
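For $q=2$ the natural test has a particularly simple form: a function has degree at most $d$ iff every $(d{+}1)$-fold directional derivative vanishes, and the test evaluates $f$ on all $2^{d+1}$ subset sums of $d{+}1$ random directions from a random base point. A minimal sketch of this $q=2$ test (the query-oracle interface and parameter names are assumptions of this sketch):

```python
import itertools
import random

def natural_degree_test(f, n, d, trials=50):
    """q=2 case of the natural low-degree test: accept iff, for every sampled
    base point x and directions y_1..y_{d+1}, the (d+1)-st derivative vanishes:
        XOR over all subsets S of f(x + sum_{i in S} y_i) == 0   (over F_2).
    Degree <= d functions always pass; far functions are rejected w.h.p."""
    for _ in range(trials):
        x = [random.randrange(2) for _ in range(n)]
        ys = [[random.randrange(2) for _ in range(n)] for _ in range(d + 1)]
        total = 0
        for subset in itertools.product([0, 1], repeat=d + 1):
            point = x[:]
            for take, y in zip(subset, ys):
                if take:
                    point = [p ^ yi for p, yi in zip(point, y)]
            total ^= f(point)
        if total != 0:
            return False
    return True
```

Each trial makes $2^{d+1} = q^{d+1}$ queries, independent of $n$; the paper's contribution is showing that (over any field) a constant number of such trials already rejects constant-far functions with constant probability.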
Consider an $m$-round interactive protocol with soundness error $1/2$. How much extra randomness is required to decrease the soundness error to $\delta$ through parallel repetition? Previous work, initiated by Bellare, Goldreich and Goldwasser, shows that for \emph{public-coin} interactive protocols with \emph{statistical soundness}, $m \cdot O(\log(1/\delta))$ bits of extra randomness suffice. In this work, we initiate a more general study of the above question.
\begin{itemize}
\item We establish the first derandomized parallel repetition theorem for public-coin interactive protocols with \emph{computational soundness} (a.k.a. arguments). The parameters of our result essentially match the earlier works in the information-theoretic setting.
\item We show that obtaining even a sub-linear dependency on the number of rounds $m$ (i.e., $o(m) \cdot \log(1/\delta)$) is impossible in the information-theoretic setting, and requires the existence of one-way functions in the computational setting.
\item We show that non-trivial derandomized parallel repetition for private-coin protocols is impossible in the information-theoretic setting and requires the existence of one-way functions in the computational setting.
\end{itemize}
These results are tight in the sense that parallel repetition theorems in the computational setting can trivially be derandomized using pseudorandom generators, which are implied by the existence of one-way functions.
"The Randomness Complexity of Parallel Repetition," Kai-Min Chung and R. Pass. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-22. DOI: 10.1109/FOCS.2011.93.
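The randomness accounting behind the question is simple arithmetic: halving the soundness error per repetition means $t = \lceil \log_2(1/\delta) \rceil$ repetitions suffice, so naive repetition spends $t$ times the per-run coin budget, whereas the derandomized bound spends only $m \cdot O(\log(1/\delta))$ extra bits in total. A toy calculator (the hidden constant `c` and all parameter values are assumptions of this sketch):

```python
import math

def repetitions_needed(delta):
    """Soundness error 1/2 per run, so (1/2)^t <= delta gives t = ceil(log2(1/delta))."""
    return math.ceil(math.log2(1 / delta))

def naive_randomness(delta, coins_per_run):
    """Fresh coins for every repetition."""
    return repetitions_needed(delta) * coins_per_run

def derandomized_randomness(m, delta, c=1):
    """Bellare-Goldreich-Goldwasser-style budget: m * O(log(1/delta)) extra bits
    (c stands in for the hidden constant -- an assumption of this sketch)."""
    return c * m * math.ceil(math.log2(1 / delta))
```

For a 4-round protocol with 100 coins per run and target error $2^{-10}$, the naive budget is 1000 fresh bits while the derandomized bound needs on the order of 40.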
N. Bansal, U. Feige, Robert Krauthgamer, K. Makarychev, V. Nagarajan, J. Naor, Roy Schwartz
We study graph partitioning problems from a min-max perspective, in which an input graph on n vertices should be partitioned into k parts, and the objective is to minimize the maximum number of edges leaving a single part. The two main versions we consider are: (i) the k parts need to be of equal size, and (ii) the parts must separate a set of k given terminals. We consider a common generalization of these two problems, and design for it an O(√(log n log k))-approximation algorithm. This improves over an O(log^2 n) approximation for the second version due to Svitkina and Tardos, and a roughly O(k log n) approximation for the first version that follows from other previous work. We also give an improved O(1)-approximation algorithm for graphs that exclude any fixed minor. Our algorithm uses a new procedure for solving the Small Set Expansion problem. In this problem, we are given a graph G and the goal is to find a non-empty subset S of V of size at most pn with minimum edge-expansion. We give an O(√(log n log(1/p))) bicriteria approximation algorithm for the general case of Small Set Expansion and an O(1) approximation algorithm for graphs that exclude any fixed minor.
"Min-max Graph Partitioning and Small Set Expansion." 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-19. DOI: 10.1109/focs.2011.79.
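The Small Set Expansion objective is easy to state in code. The sketch below is a brute-force reference (exponential time, for tiny instances only; names and the example graph are assumptions of this sketch), searching every non-empty set of size at most pn for minimum edge-expansion |E(S, V∖S)|/|S|:

```python
from itertools import combinations

def small_set_expansion(n, edges, p):
    """Brute force over all non-empty S with |S| <= p*n; returns the minimum
    edge-expansion |E(S, V-S)| / |S| and one set attaining it."""
    best_val, best_set = float("inf"), None
    nodes = list(range(n))
    max_size = max(1, int(p * n))
    for size in range(1, max_size + 1):
        for S in combinations(nodes, size):
            s = set(S)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            if cut / size < best_val:
                best_val, best_set = cut / size, s
    return best_val, best_set
```

On two triangles joined by a single edge (with p = 1/2), the optimum is one triangle, with expansion 1/3; the paper's point is achieving an O(√(log n log(1/p))) bicriteria approximation to this quantity in polynomial time.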
The problem central to sparse recovery and compressive sensing is that of \emph{stable sparse recovery}: we want a distribution $\mathcal{A}$ of matrices $A \in R^{m \times n}$ such that, for any $x \in R^n$ and with probability $1 - \delta > 2/3$ over $A \in \mathcal{A}$, there is an algorithm to recover $\hat{x}$ from $Ax$ with
\begin{align}
\|\hat{x} - x\|_p \leq C \min_{k\text{-sparse } x'} \|x - x'\|_p
\end{align}
for some constant $C > 1$ and norm $p$. The measurement complexity of this problem is well understood for constant $C > 1$. However, in a variety of applications it is important to obtain $C = 1+\epsilon$ for a small $\epsilon > 0$, and this complexity is not well understood. We resolve the dependence on $\epsilon$ in the number of measurements required of a $k$-sparse recovery algorithm, up to polylogarithmic factors for the central cases of $p=1$ and $p=2$. Namely, we give new algorithms and lower bounds showing that the number of measurements required is $k/\epsilon^{p/2} \cdot \mathrm{polylog}(n)$. For $p=2$, our bound of $\frac{1}{\epsilon} k \log(n/k)$ is tight up to \emph{constant} factors. We also give matching bounds when the output is required to be $k$-sparse, in which case we achieve $k/\epsilon^p \cdot \mathrm{polylog}(n)$. This shows that the distinction between the complexity of sparse and non-sparse outputs is fundamental.
"(1 + eps)-Approximate Sparse Recovery," Eric Price and David P. Woodruff. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-19. DOI: 10.1109/FOCS.2011.92.
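The benchmark on the right-hand side of the guarantee has a closed form: the best k-sparse approximation of x keeps its k largest-magnitude entries, so the baseline error is the l_p norm of the tail. A small sketch of the guarantee as a checkable predicate (function names and the example vector are assumptions of this sketch, not the paper's algorithm):

```python
def best_k_sparse_error(x, k, p):
    """min over k-sparse x' of ||x - x'||_p: keep the k largest-magnitude
    entries of x, so the error is the l_p norm of the remaining tail."""
    tail = sorted((abs(v) for v in x), reverse=True)[k:]
    return sum(t ** p for t in tail) ** (1 / p)

def recovery_ok(x, xhat, k, p, eps):
    """The (1+eps)-approximate stable sparse recovery guarantee."""
    err = sum(abs(a - b) ** p for a, b in zip(x, xhat)) ** (1 / p)
    return err <= (1 + eps) * best_k_sparse_error(x, k, p)
```

The paper's question is how many rows of A (measurements) are needed so that some xhat satisfying `recovery_ok` can be computed from Ax alone; the answer scales as k/eps^(p/2) up to polylog factors.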
The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x^*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x^*$ such that
$$\|x - x^*\|_p \le C \min_{k\text{-sparse } x'} \|x - x'\|_q$$
for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m = O(k \log(n/k))$ {\em non-adaptive} measurements~\cite{CRT06:Stable-Signal} and that this bound is tight~\cite{DIPW, FPRU, PW11}. In this paper we show that if one is allowed to perform measurements that are {\em adaptive}, then the number of measurements can be considerably reduced. Specifically, for $C = 1+\epsilon$ and $p=q=2$ we show:
\begin{itemize}
\item A scheme with $m = O(\frac{1}{\epsilon} k \log\log(n\epsilon/k))$ measurements that uses $O(\log^* k \cdot \log\log(n\epsilon/k))$ rounds. This is a significant improvement over the best possible non-adaptive bound.
\item A scheme with $m = O(\frac{1}{\epsilon} k \log(k/\epsilon) + k \log(n/k))$ measurements that uses {\em two} rounds. This improves over the best possible non-adaptive bound.
\end{itemize}
To the best of our knowledge, these are the first results of this type.
"On the Power of Adaptivity in Sparse Recovery," P. Indyk, Eric Price, and David P. Woodruff. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-17. DOI: 10.1109/FOCS.2011.83.
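The power of adaptivity is easiest to see in the noiseless 1-sparse case: later measurement vectors may depend on earlier answers, so a bisection over the support locates the nonzero coordinate with O(log n) measurements. This toy sketch illustrates only the adaptive access model, not the paper's schemes (the oracle interface and names are assumptions of this sketch):

```python
def measure(x, v):
    """One linear measurement <v, x> -- the only access the algorithm gets."""
    return sum(vi * xi for vi, xi in zip(v, x))

def locate_one_sparse(x_oracle, n):
    """Adaptive recovery of a 1-sparse signal: each round measures the
    indicator of the left half of the candidate range, halving it, so the
    support is found with O(log n) adaptively chosen measurements."""
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        v = [1 if lo <= i < mid else 0 for i in range(n)]
        if x_oracle(v) != 0:      # the nonzero entry is in the left half
            hi = mid
        else:
            lo = mid
    value = x_oracle([1 if i == lo else 0 for i in range(n)])
    return lo, value
```

Each query here depends on previous answers; a non-adaptive scheme must fix all measurement vectors in advance, which is exactly the gap the paper quantifies for general k and noise.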
We give the first polylogarithmic-competitive randomized algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of Õ(log^3 n log^2 k) for any metric space on n points. This improves upon the (2k-1)-competitive algorithm of Koutsoupias and Papadimitriou (J. ACM 1995) whenever n is sub-exponential in k.
"A Polylogarithmic-Competitive Algorithm for the k-Server Problem," N. Bansal, Niv Buchbinder, A. Madry, and J. Naor. 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science, 2011-10-07. DOI: 10.1145/2783434.
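For intuition about the k-server problem itself, here is the classic deterministic Double Coverage algorithm for the special case of the line metric (it is k-competitive there); this is a standard baseline, not the paper's randomized algorithm, and the function name is an assumption of this sketch:

```python
def double_coverage(servers, requests):
    """Double Coverage on the line: if the request lies between two servers,
    move both adjacent servers toward it at equal speed until one arrives;
    otherwise move the nearest server. Returns total movement cost."""
    pos = sorted(servers)
    cost = 0
    for r in requests:
        left = [p for p in pos if p <= r]
        right = [p for p in pos if p >= r]
        if left and right:
            l, rg = max(left), min(right)
            if l == rg:                 # a server already sits on the request
                continue
            d = min(r - l, rg - r)      # both adjacent servers move distance d
            pos.remove(l)
            pos.remove(rg)
            if r - l <= rg - r:
                pos += [r, rg - d]      # left server reaches the request
            else:
                pos += [l + d, r]       # right server reaches the request
            cost += 2 * d
        else:                           # request outside the hull of servers
            p = max(left) if left else min(right)
            cost += abs(p - r)
            pos.remove(p)
            pos.append(r)
    return cost
```

An online algorithm like this must commit to each move before seeing future requests; the paper's contribution is doing this with polylogarithmic competitive ratio on arbitrary finite metrics rather than k-competitiveness on a line.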
We introduce a technique for establishing and amplifying gaps between parameters of network coding and index coding problems. The technique uses linear programs to establish separations between combinatorial and coding-theoretic parameters and applies hypergraph lexicographic products to amplify these separations. This entails combining the dual solutions of the lexicographic multiplicands and proving that this is a valid dual solution of the product. Our result is general enough to apply to a large family of linear programs. This blend of linear programs and lexicographic products gives a recipe for constructing hard instances in which the gap between combinatorial or coding-theoretic parameters is polynomially large. We find polynomial gaps in cases in which the largest previously known gaps were only small constant factors or entirely unknown. Most notably, we show a polynomial separation between linear and non-linear network coding rates. This involves exploiting a connection between matroids and index coding to establish a previously unknown separation between linear and non-linear index coding rates. We also construct index coding problems with a polynomial gap between the broadcast rate and the trivial lower bound for which no gap was previously known.
{"title":"Lexicographic Products and the Power of Non-linear Network Coding","authors":"A. Błasiak, Robert D. Kleinberg, E. Lubetzky","doi":"10.1109/FOCS.2011.39","DOIUrl":"https://doi.org/10.1109/FOCS.2011.39","url":null,"abstract":"We introduce a technique for establishing and amplifying gaps between parameters of network coding and index coding problems. The technique uses linear programs to establish separations between combinatorial and coding-theoretic parameters and applies hyper graph lexicographic products to amplify these separations. This entails combining the dual solutions of the lexicographic multiplicands and proving that this is a valid dual solution of the product. Our result is general enough to apply to a large family of linear programs. This blend of linear programs and lexicographic products gives a recipe for constructing hard instances in which the gap between combinatorial or coding-theoretic parameters is polynomially large. We find polynomial gaps in cases in which the largest previously known gaps were only small constant factors or entirely unknown. Most notably, we show a polynomial separation between linear and non-linear network coding rates. This involves exploiting a connection between matroids and index coding to establish a previously unknown separation between linear and non-linear index coding rates. 
We also construct index coding problems with a polynomial gap between the broadcast rate and the trivial lower bound for which no gap was previously known.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129748506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}