Given a weighted graph, the \emph{maximum weight matching} problem (MWM) is to find a set of vertex-disjoint edges with maximum weight. In the 1960s Edmonds showed that MWMs can be found in polynomial time. At present the fastest MWM algorithm, due to Gabow and Tarjan, runs in $\tilde{O}(m\sqrt{n})$ time, where $m$ and $n$ are the number of edges and vertices in the graph. Surprisingly, restricted versions of the problem, such as computing $(1-\epsilon)$-approximate MWMs or finding maximum cardinality matchings, are not known to be much easier (on sparse graphs). The best algorithms for these problems also run in $\tilde{O}(m\sqrt{n})$ time. In this paper we present the first near-linear time algorithm for computing $(1-\epsilon)$-approximate MWMs. Specifically, given an arbitrary real-weighted graph and $\epsilon>0$, our algorithm computes such a matching in $O(m\epsilon^{-2}\log^3 n)$ time. The previous best approximate MWM algorithm with comparable running time could only guarantee a $(2/3-\epsilon)$-approximate solution. In addition, we present a faster algorithm, running in $O(m\log n\log\epsilon^{-1})$ time, that computes a $(3/4-\epsilon)$-approximate MWM.
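The paper's $(1-\epsilon)$ algorithm is involved; as a point of reference only, the classical greedy heuristic (not the paper's method) already gives a $1/2$-approximate MWM in $O(m \log m)$ time. A minimal sketch:

```python
def greedy_matching(edges):
    """Classical greedy 1/2-approximation for maximum weight matching.

    edges: list of (u, v, weight) tuples; returns the chosen edge set.
    This is the textbook baseline, not the near-linear (1-eps) scheme
    from the paper.
    """
    matched = set()
    matching = []
    # Scan edges in order of decreasing weight; take an edge whenever
    # both of its endpoints are still free.
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching
```

On the path a-b-c-d with weights 2, 3, 2.5 the greedy rule takes only (b, c), weight 3, versus the optimum 4.5, illustrating why the factor is 1/2 and not better.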
[Approximating Maximum Weight Matching in Near-Linear Time. Ran Duan, S. Pettie. FOCS 2010. doi:10.1109/FOCS.2010.70]
We present round-efficient protocols for secure multi-party computation with a dishonest majority that rely on black-box access to the underlying primitives. Our main contributions are as follows: * an $O(\log^* n)$-round protocol that relies on black-box access to dense cryptosystems, homomorphic encryption schemes, or lossy encryption schemes. This improves upon the recent $O(1)^{\log^* n}$-round protocol of Lin, Pass and Venkitasubramaniam (STOC 2009) that relies on non-black-box access to a smaller class of primitives. * an $O(1)$-round protocol requiring, in addition, black-box access to a one-way function with sub-exponential hardness, improving upon the recent work of Pass and Wee (Eurocrypt 2010). These are the first black-box constructions for secure computation with sublinear round complexity. Our constructions build on and improve upon the work of Lin and Pass (STOC 2009) on non-malleability amplification, as well as that of Ishai et al. (STOC 2006) on black-box secure computation. In addition to the results on secure computation, we also obtain a simple construction of an $O(\log^* n)$-round non-malleable commitment scheme based on one-way functions, improving upon the recent $O(1)^{\log^* n}$-round protocol of Lin and Pass (STOC 2009). Our construction uses a novel transformation for handling arbitrary man-in-the-middle scheduling strategies which improves upon a previous construction of Barak (FOCS 2002).
[Black-Box, Round-Efficient Secure Computation via Non-malleability Amplification. H. Wee. FOCS 2010. doi:10.1109/FOCS.2010.87]
We consider $k$-median clustering in finite metric spaces and $k$-means clustering in Euclidean spaces, in the setting where $k$ is part of the input (not a constant). For the $k$-means problem, Ostrovsky et al. show that if the optimal $(k-1)$-means clustering of the input is more expensive than the optimal $k$-means clustering by a factor of $1/\epsilon^2$, then one can achieve a $(1+f(\epsilon))$-approximation to the $k$-means optimal in time polynomial in $n$ and $k$ by using a variant of Lloyd's algorithm. In this work we substantially improve this approximation guarantee. We show that given only the condition that the $(k-1)$-means optimal is more expensive than the $k$-means optimal by a factor $1+\alpha$ for \emph{some} constant $\alpha>0$, we can obtain a PTAS. In particular, under this assumption, for any $\epsilon>0$ we achieve a $(1+\epsilon)$-approximation to the $k$-means optimal in time polynomial in $n$ and $k$, and exponential in $1/\epsilon$ and $1/\alpha$. We thus decouple the strength of the assumption from the quality of the approximation ratio. We also give a PTAS for the $k$-median problem in finite metrics under the analogous assumption. For $k$-means, we in addition give a randomized algorithm with improved running time $n^{O(1)}(k \log n)^{\mathrm{poly}(1/\epsilon,1/\alpha)}$. Our technique also obtains a PTAS under the assumption of Balcan et al. that all $(1+\alpha)$-approximations are $\delta$-close to a desired target clustering, in the case that all target clusters have size greater than $\delta n$ and $\alpha>0$ is constant. Note that the motivation of Balcan et al. is that for many clustering problems, the objective function is only a proxy for the true goal of getting close to the target. From this perspective, our improvement is that for $k$-means in Euclidean spaces we reduce the distance of the clustering found to the target from $O(\delta)$ to $\delta$ when all target clusters are large, and for $k$-median we improve the ``largeness'' condition needed in the work of Balcan et al. to get exactly $\delta$-close from $O(\delta n)$ to $\delta n$. Our results are based on a new notion of clustering stability.
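The abstract builds on "a variant of Lloyd's algorithm"; for readers unfamiliar with it, here is the vanilla Lloyd iteration (alternating nearest-center assignment and centroid updates), not the paper's seeded, stability-exploiting variant:

```python
import random

def lloyd_kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm for k-means in Euclidean space.

    points: list of coordinate tuples. This is only the baseline
    iteration; the paper's PTAS uses a different, analyzed variant.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist2(p, centers[j]))
            clusters[i].append(p)
        # Update step: move each center to its cluster's mean
        # (keep the old center if a cluster ends up empty).
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))
```

On two well-separated point clouds the iteration recovers the cluster means, which is exactly the regime the Ostrovsky et al. separation condition formalizes.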
[Stability Yields a PTAS for k-Median and k-Means Clustering. Pranjal Awasthi, Avrim Blum, Or Sheffet. FOCS 2010. doi:10.1109/FOCS.2010.36]
Coin flipping is one of the most fundamental tasks in cryptographic protocol design. Informally, a coin flipping protocol should guarantee both (1) Completeness: an honest execution of the protocol by both parties results in a fair coin toss, and (2) Security: a cheating party cannot increase the probability of its desired outcome by any significant amount. Since its introduction by Blum~\cite{Blum82}, coin flipping has occupied a central place in the theory of cryptographic protocols. In this paper, we explore the implications of the existence of secure coin flipping protocols for complexity theory. As exposited recently by Impagliazzo~\cite{Impagliazzo09talk}, surprisingly little is known about this question. Previous work has shown that if we interpret the Security property of coin flipping protocols very strongly, namely that nothing beyond a negligible bias by cheating parties is allowed, then one-way functions must exist~\cite{ImpagliazzoLu89}. However, for even a slight weakening of this security property (for example, that cheating parties cannot bias the outcome by any additive constant $\epsilon>0$), the only complexity-theoretic implication that was known was that $PSPACE \nsubseteq BPP$. We put forward a new attack to establish our main result, which shows that, informally speaking, the existence of any (weak) coin flipping protocol that prevents a cheating adversary from biasing the output by more than $\frac{1}{4} - \epsilon$ implies that $NP \nsubseteq BPP$. Furthermore, for constant-round protocols, we show that the existence of any (weak) coin flipping protocol that allows an honest party to maintain any noticeable chance of prevailing against a cheating party implies the existence of (infinitely often) one-way functions.
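Blum's protocol, cited above, is commit-then-reveal: Alice commits to a bit, Bob announces his bit, Alice opens, and the coin is the XOR. The toy sketch below stands in SHA-256 for the commitment, an assumption the paper's complexity-theoretic treatment of course does not make:

```python
import hashlib
import secrets

def commit(bit):
    """Toy hash commitment to a bit, hidden by a random nonce."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce

def open_commitment(digest, nonce, bit):
    """Check that (nonce, bit) opens the earlier commitment."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

def blum_coin_flip(alice_bit, bob_bit):
    """Blum's coin flipping protocol, run honestly.

    Alice sends commit(alice_bit); Bob, seeing only the digest,
    replies with bob_bit; Alice opens; the shared coin is the XOR.
    If the commitment is binding and hiding, neither party alone
    controls the outcome.
    """
    digest, nonce = commit(alice_bit)                 # Alice -> Bob
    # Bob -> Alice: bob_bit (chosen while alice_bit is still hidden)
    assert open_commitment(digest, nonce, alice_bit)  # Alice opens
    return alice_bit ^ bob_bit
```

The "weak" protocols in the theorem statements are exactly relaxations of this template in which a cheater may bias the XOR by some bounded amount.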
[On the Computational Complexity of Coin Flipping. H. K. Maji, M. Prabhakaran, A. Sahai. FOCS 2010. doi:10.1109/FOCS.2010.64]
We consider the problem of randomly rounding a fractional solution $x$ in an integer polytope $P \subseteq [0,1]^n$ to a vertex $X$ of $P$, so that $E[X] = x$. Our goal is to achieve \emph{concentration properties} for linear and submodular functions of the rounded solution. Such dependent rounding techniques, with concentration bounds for linear functions, have been developed in the past for two polytopes: the assignment polytope (that is, bipartite matchings and $b$-matchings)~\cite{S01, GKPS06, KMPS09}, and more recently for the spanning tree polytope~\cite{AGMGS10}. These schemes have led to a number of new algorithmic results. In this paper we describe a new \emph{swap rounding} technique which can be applied in a variety of settings including \emph{matroids} and \emph{matroid intersection}, while providing Chernoff-type concentration bounds for linear and submodular functions of the rounded solution. In addition to existing techniques based on negative correlation, we use a martingale argument to obtain an exponential tail estimate for monotone submodular functions. The rounding scheme explicitly exploits \emph{exchange properties} of the underlying combinatorial structures, and highlights these properties as the basis for concentration bounds. Matroids and matroid intersection provide a unifying framework for several known applications~\cite{GKPS06, KMPS09, CCPV09, KST09, AGMGS10} as well as new ones, and their generality allows a richer set of constraints to be incorporated easily. We give some illustrative examples, with a more comprehensive discussion deferred to a later version of the paper.
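As a toy instance of the swap idea, the following rounds a fractional point whose coordinate sum is an integer $k$ (a fractional base of the uniform matroid): repeatedly pick two fractional coordinates and shift mass between them with probabilities chosen so every marginal is preserved, fixing at least one coordinate to 0 or 1 per step. The general matroid and matroid-intersection exchange argument in the paper is considerably more subtle.

```python
import random

def swap_round(x, rng=random.random):
    """Round fractional x (with integral sum) to a 0/1 vector of the
    same sum, coordinate-wise unbiased: E[X_i] = x_i for every i.

    Each swap moves x_i up by d_up (and x_j down) with probability
    d_dn / (d_up + d_dn), else x_i down by d_dn; the expected change
    in each coordinate is zero, and one coordinate becomes integral.
    """
    x = list(map(float, x))
    eps = 1e-9
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if len(frac) < 2:  # integral sum => fractional entries come in pairs;
            break          # the guard also absorbs float round-off
        i, j = frac[0], frac[1]
        a, b = x[i], x[j]
        d_up = min(1 - a, b)  # raise x_i, lower x_j
        d_dn = min(a, 1 - b)  # lower x_i, raise x_j
        if rng() < d_dn / (d_up + d_dn):
            x[i], x[j] = a + d_up, b - d_up
        else:
            x[i], x[j] = a - d_dn, b + d_dn
    return [int(round(v)) for v in x]
```

Because each swap only transfers mass between a pair, the cardinality (the matroid constraint here) is maintained exactly, not just in expectation.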
[Dependent Randomized Rounding via Exchange Properties of Combinatorial Structures. C. Chekuri, J. Vondrák, R. Zenklusen. FOCS 2010. doi:10.1109/FOCS.2010.60]
We give the first improvement to the space/approximation trade-off of distance oracles since the seminal result of Thorup and Zwick [STOC'01]. For unweighted graphs, our distance oracle has size $O(n^{5/3}) = O(n^{1.66\cdots})$ and, when queried about vertices at distance $d$, returns a path of length $2d+1$. For weighted graphs with $m = n^2/\alpha$ edges, our distance oracle has size $O(n^2 / \sqrt[3]{\alpha})$ and returns a factor 2 approximation. Based on a plausible conjecture about the hardness of set intersection queries, we show that a 2-approximate distance oracle requires space $\tilde{\Omega}(n^2 / \sqrt{\alpha})$. For unweighted graphs, this implies a $\tilde{\Omega}(n^{1.5})$ space lower bound to achieve approximation $2d+1$.
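For intuition about the space/stretch trade-off, here is the landmark idea behind Thorup-Zwick-style oracles (a simplification, not the improved construction above): store exact BFS distances from a small set of landmarks only, and answer queries by routing through a landmark. The answer is always an upper bound on the true distance; Thorup and Zwick's ball/bunch machinery, omitted here, is what turns this into a worst-case stretch guarantee.

```python
from collections import deque

def bfs_dist(adj, src):
    """Single-source BFS distances in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

class LandmarkOracle:
    """Toy landmark distance oracle.

    Space is O(|landmarks| * n) rather than O(n^2); a query returns
    min over landmarks l of d(u, l) + d(l, v), an upper bound on the
    true distance d(u, v).
    """
    def __init__(self, adj, landmarks):
        self.tables = {l: bfs_dist(adj, l) for l in landmarks}

    def query(self, u, v):
        return min(t[u] + t[v] for t in self.tables.values())
```

On the path 0-1-2-3-4 with landmark 2, the query (0, 4) happens to be exact (the landmark lies on the shortest path), while (0, 1) is stretched from 1 to 3, showing where the approximation loss comes from.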
[Distance Oracles beyond the Thorup-Zwick Bound. M. Patrascu, L. Roditty. FOCS 2010. doi:10.1137/11084128X]
It is shown that for each $t$, there is a separator of size $O(t\sqrt{n})$ in any $n$-vertex graph $G$ with no $K_t$-minor. This settles a conjecture of Alon, Seymour and Thomas (J. Amer. Math. Soc., 1990 and STOC'90), and generalizes a result proved independently by Djidjev (1981) and by Gilbert, Hutchinson and Tarjan (J. Algorithms, 1984), who showed that every graph with $n$ vertices and genus $g$ has a separator of order $O(\sqrt{gn})$, because $K_t$ has genus $\Omega(t^2)$. The bound $O(t\sqrt{n})$ is best possible because every 3-regular expander graph with $n$ vertices is a graph with no $K_t$-minor for $t=cn^{1/2}$, and with no separator of size $dn$ for appropriately chosen positive constants $c,d$. In addition, we give an $O(n^2)$ time algorithm to obtain such a separator, and then sketch how to obtain such a separator in $O(n^{1+\epsilon})$ time for any $\epsilon > 0$. Finally, we discuss several algorithmic aspects of our separator theorem, including the possibility of obtaining a separator of order $g(t)\sqrt{n}$, for some function $g$ of $t$, in an $n$-vertex graph $G$ with no $K_t$-minor in $O(n)$ time.
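The planar-separator tradition this theorem extends starts from a simple device: in a BFS layering, removing the layer containing the vertex of median BFS rank disconnects the earlier layers from the later ones, so each side keeps at most $n/2$ vertices. A minimal sketch, with a grid graph as the canonical example where that layer is also small:

```python
from collections import deque

def median_level_separator(adj, root):
    """Remove the BFS layer holding the median-rank vertex.

    Layers before and after it cannot be adjacent, so each remaining
    side has at most n/2 vertices. The layer itself is O(sqrt(n)) on
    grids and planar-like graphs but not in general; bounding it is
    exactly where Lipton-Tarjan and the K_t-minor-free result need
    real work.
    """
    level = {root: 0}
    order = [root]
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                order.append(v)
                q.append(v)
    mid = level[order[len(order) // 2]]  # layer of the median vertex
    return [v for v in order if level[v] == mid]

def grid_adj(k):
    """k x k grid graph with vertices (i, j)."""
    adj = {}
    for i in range(k):
        for j in range(k):
            adj[(i, j)] = [(i + di, j + dj)
                           for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= i + di < k and 0 <= j + dj < k]
    return adj
```

On the 8x8 grid rooted at a corner, the BFS layers are anti-diagonals and the separator found is the main anti-diagonal of size $8 = \sqrt{64}$.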
[A Separator Theorem in Minor-Closed Classes. K. Kawarabayashi, B. Reed. FOCS 2010. doi:10.1109/FOCS.2010.22]
A fundamental question in leakage-resilient cryptography is: can leakage resilience always be amplified by parallel repetition? It is natural to expect that if we have a leakage-resilient primitive tolerating $\ell$ bits of leakage, we can take $n$ copies of it to form a system tolerating $n\ell$ bits of leakage. In this paper, we show that this is not always true. We construct a public key encryption system which is secure when at most $\ell$ bits are leaked, but if we take $n$ copies of the system and encrypt a share of the message under each using an $n$-out-of-$n$ secret-sharing scheme, leaking $n\ell$ bits renders the system insecure. Our results hold either in composite order bilinear groups under a variant of the subgroup decision assumption \emph{or} in prime order bilinear groups under the decisional linear assumption. We note that the $n$ copies of our public key systems share a common reference parameter.
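The $n$-out-of-$n$ secret sharing used in the counterexample can be instantiated with plain XOR sharing, sketched below (the paper's construction is about the encryption scheme around it, not the sharing itself):

```python
import secrets

def share(message: bytes, n: int):
    """n-out-of-n XOR secret sharing: n-1 uniformly random shares plus
    one correction share. All n shares are needed to reconstruct; any
    n-1 of them are jointly uniform and reveal nothing."""
    shares = [secrets.token_bytes(len(message)) for _ in range(n - 1)]
    last = bytes(b ^ x for b, x in zip(message, xor_all(shares, len(message))))
    return shares + [last]

def xor_all(parts, length):
    """Byte-wise XOR of a list of equal-length byte strings."""
    acc = bytes(length)
    for p in parts:
        acc = bytes(a ^ b for a, b in zip(acc, p))
    return acc

def reconstruct(shares):
    return xor_all(shares, len(shares[0]))
```

Encrypting share $i$ under copy $i$ of the scheme is exactly the parallel repetition whose leakage resilience the paper shows can fail to amplify.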
[On the Insecurity of Parallel Repetition for Leakage Resilience. Allison Bishop, Brent Waters. FOCS 2010. doi:10.1109/FOCS.2010.57]
Given a network, a set of demands, and a cost function f(.), the min-cost network design problem is to route all demands with the objective of minimizing sum_e f(l_e), where l_e is the total traffic load under the routing. We focus on cost functions of the form f(x) = s + x^a for x > 0, with f(0) = 0, where a > 1 and the startup cost satisfies s > 0. With these parameters, the cost function f(.) is neither subadditive nor superadditive. This is motivated by minimizing network-wide energy consumption when supporting a set of traffic demands. It is commonly accepted that, for some computing and communication devices, doubling the processing speed more than doubles the energy consumption. Hence, in economics parlance, such a cost function reflects diseconomies of scale. We begin by discussing why existing routing techniques such as randomized rounding and tree-metric embedding fail to generalize directly. We then present our main contribution, which is a polylogarithmic approximation algorithm. We obtain this result by first deriving a bicriteria approximation for a related capacitated min-cost flow problem that we believe is interesting in its own right. Our approach for this problem builds upon the well-linked decomposition due to Chekuri-Khanna-Shepherd, the construction of expanders via matchings due to Khandekar-Rao-Vazirani, and edge-disjoint routing in well-connected graphs due to Rao-Zhou. However, we also develop new techniques that allow us to keep a handle on the total cost, which was not a concern in the aforementioned literature.
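The failure of both subadditivity and superadditivity is easy to check numerically, e.g. with the illustrative parameters s = 1 and a = 2:

```python
def f(x, s=1.0, a=2.0):
    """Startup-plus-polynomial cost: f(0) = 0, f(x) = s + x^a for x > 0."""
    return 0.0 if x == 0 else s + x ** a

# Subadditivity f(x+y) <= f(x) + f(y) fails at large loads, where the
# superlinear x^a term dominates:
assert f(2) > f(1) + f(1)            # 5.0 > 4.0
# Superadditivity f(x+y) >= f(x) + f(y) fails at small loads, where
# paying the startup cost s twice dominates, so merging traffic onto
# one edge is cheaper:
assert f(0.2) < f(0.1) + f(0.1)      # 1.04 < 2.02
```

This tension, consolidate traffic to amortize startup costs versus spread it to avoid the superlinear term, is what defeats the standard routing techniques mentioned above.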
[Minimum-Cost Network Design with (Dis)economies of Scale. M. Andrews, S. Antonakopoulos, Lisa Zhang. FOCS 2010. doi:10.1137/110825959]
The generalized nested dissection method, developed by Lipton, Rose, and Tarjan, is a seminal method for solving a linear system Ax=b where A is a symmetric positive definite matrix. The method runs extremely fast whenever A is a well-separable matrix (such as matrices whose underlying support is planar or avoids a fixed minor). In this work we extend the nested dissection method to apply to any non-singular well-separable matrix over any field. The running times we obtain essentially match those of the nested dissection method.
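The combinatorial core of nested dissection is the separator-based elimination ordering: number the vertices of each half of the graph recursively and put the separator last, so Gaussian elimination creates little fill-in. A minimal sketch for the grid (the canonical well-separable support), producing only the ordering and leaving the factorization itself aside:

```python
def nested_dissection_order(r0, r1, c0, c1):
    """Nested dissection ordering for grid vertices (i, j) with
    r0 <= i < r1 and c0 <= j < c1: split the longer side on its middle
    row/column (a separator), recursively order both halves, and
    number the separator last."""
    if r1 <= r0 or c1 <= c0:
        return []
    if r1 - r0 == 1 and c1 - c0 == 1:
        return [(r0, c0)]
    if r1 - r0 >= c1 - c0:  # split on the middle row
        m = (r0 + r1) // 2
        return (nested_dissection_order(r0, m, c0, c1)
                + nested_dissection_order(m + 1, r1, c0, c1)
                + [(m, j) for j in range(c0, c1)])
    m = (c0 + c1) // 2      # split on the middle column
    return (nested_dissection_order(r0, r1, c0, m)
            + nested_dissection_order(r0, r1, m + 1, c1)
            + [(i, m) for i in range(r0, r1)])
```

Eliminating variables in this order is what yields the method's running-time bounds; the paper's contribution is making the same scheme work for arbitrary non-singular well-separable matrices over any field, which this grid-specific sketch does not capture.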
[Solving Linear Systems through Nested Dissection. N. Alon, R. Yuster. FOCS 2010. doi:10.1109/FOCS.2010.28]