David Doty, Matthew J. Patitz, D. Reishus, R. Schweller, Scott M. Summers
We consider the problem of fault-tolerance in nanoscale algorithmic self-assembly. We employ a standard variant of Winfree’s abstract Tile Assembly Model (aTAM), the two-handed aTAM, in which square “tiles” – a model of molecules constructed from DNA for the purpose of engineering self-assembled nanostructures – aggregate according to specific binding sites of varying strengths, and in which large aggregations of tiles may attach to each other, in contrast to the seeded aTAM, in which tiles aggregate one at a time to a single specially designated “seed” assembly. We focus on a major cause of errors in tile-based self-assembly: that of unintended growth due to “weak” strength-1 bonds, which, if allowed to persist, may be stabilized by subsequent attachment of neighboring tiles in the sense that at least energy 2 is now required to break apart the resulting assembly, i.e., the errant assembly is stable at temperature 2. We study a common self-assembly benchmark problem, that of assembling an n×n square using O(log n) unique tile types, under the two-handed model of self-assembly. Our main result achieves a much stronger notion of fault-tolerance than those achieved previously. Arbitrary strength-1 growth is allowed; however, any assembly that grows sufficiently to become stable at temperature 2 is guaranteed to assemble into the correct final assembly of an n×n square. In other words, errors due to insufficient attachment, which is the cause of errors studied in earlier papers on fault-tolerance, are prevented absolutely in our main construction, rather than only with high probability and for sufficiently small structures, as in previous fault-tolerance studies.
{"title":"Strong Fault-Tolerance for Self-Assembly with Fuzzy Temperature","authors":"David Doty, Matthew J. Patitz, D. Reishus, R. Schweller, Scott M. Summers","doi":"10.1109/FOCS.2010.47","DOIUrl":"https://doi.org/10.1109/FOCS.2010.47","url":null,"abstract":"We consider the problem of fault-tolerance in nanoscale algorithmic self-assembly. We employ a standard variant of Winfree’s abstract Tile Assembly Model (aTAM), the two-handed aTAM, in which square “tiles” – a model of molecules constructed from DNA for the purpose of engineering self-assembled nanostructures – aggregate according to specific binding sites of varying strengths, and in which large aggregations of tiles may attach to each other, in contrast to the seeded aTAM, in which tiles aggregate one at a time to a single specially designated “seed” assembly. We focus on a major cause of errors in tile-based self-assembly: that of unintended growth due to “weak” strength-1 bonds, which if allowed to persist, may be stabilized by subsequent attachment of neighboring tiles in the sense that at least energy 2 is now required to break apart the resulting assembly, i.e., the errant assembly is stable at temperature 2. We study a common self-assembly benchmark problem, that of assembling an n×n square using O(log n) unique tile types, under the two-handed model of self-assembly. Our main result achieves a much stronger notion of fault-tolerance than those achieved previously. Arbitrary strength-1 growth is allowed, however, any assembly that grows sufficiently to become stable at temperature 2 is guaranteed to assemble into the correct final assembly of an n×n square. In other words, errors due to insufficient attachment, which is the cause of errors studied in earlier papers on fault-tolerance, are prevented absolutely in our main construction, rather than only with high probability and for sufficiently small structures, as in previous fault tolerance studies.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"20 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126939197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an algorithm that, on input of an $n$-vertex, $m$-edge weighted graph $G$ and a value $k$, produces an incremental sparsifier $\hat{G}$ with $n-1 + m/k$ edges, such that the condition number of $G$ with $\hat{G}$ is bounded above by $\tilde{O}(k\log^2 n)$, with probability $1-p$. The algorithm runs in time $\tilde{O}((m \log n + n\log^2 n)\log(1/p))$. As a result, we obtain an algorithm that, on input of an $n \times n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$, computes a vector $x$ satisfying $\|x - A^{+}b\|_A \le \epsilon \|A^{+}b\|_A$ in expected time $\tilde{O}(m \log^2 n \log(1/\epsilon))$.
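To illustrate how such a combinatorial preconditioner is used, here is a minimal sketch of preconditioned conjugate gradient on an SDD system. It uses plain Jacobi (diagonal) scaling as a stand-in for the paper's incremental sparsifier, and omits the recursive chain of sparsifiers that yields the stated running time; all names and the toy system are illustrative.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for A x = b, where M_inv(r) applies
    an approximate inverse of the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Made-up SDD system: a triangle-graph Laplacian plus a small diagonal shift
# to make it positive definite; Jacobi scaling stands in for a sparsifier.
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]]) + 0.1 * np.eye(3)
b = np.array([1.0, 0.0, -1.0])
jacobi = lambda r: r / np.diag(L)
x = preconditioned_cg(L, b, jacobi)
print(x, np.allclose(L @ x, b, atol=1e-8))
```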
{"title":"Approaching Optimality for Solving SDD Linear Systems","authors":"I. Koutis, G. Miller, Richard Peng","doi":"10.1137/110845914","DOIUrl":"https://doi.org/10.1137/110845914","url":null,"abstract":"We present an algorithm that on input of an $n$-vertex $m$-edge weighted graph $G$ and a value $k$, produces an {em incremental sparsifier} $hat{G}$ with $n-1 + m/k$ edges, such that the condition number of $G$ with $hat{G}$ is bounded above by $tilde{O}(klog^2 n) $, with probability $1-p$. The algorithm runs in time $$tilde{O}((m log{n} + nlog^2{n})log(1/p)).$$ As a result, we obtain an algorithm that on input of an $ntimes n$ symmetric diagonally dominant matrix $A$ with $m$ non-zero entries and a vector $b$, computes a vector ${x}$ satisfying $| |{x}-A^{+}b| |_A","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124601554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Borradaile, P. Sankowski, Christian Wulff-Nilsen
For an undirected $n$-vertex planar graph $G$ with non-negative edge weights, we consider the following type of query: given two vertices $s$ and $t$ in $G$, what is the weight of a min $st$-cut in $G$? We show how to answer such queries in constant time with $O(n\log^5 n)$ preprocessing time and $O(n\log n)$ space. We use a Gomory-Hu tree to represent all the pairwise min $st$-cuts implicitly. Previously, no subquadratic-time algorithm was known for this problem. Our oracle can be extended to report the min $st$-cuts in time proportional to their size. Since all-pairs min $st$-cut and the minimum cycle basis are dual problems in planar graphs, we also obtain an implicit representation of a minimum cycle basis in $O(n\log^5 n)$ time and $O(n\log n)$ space, and an explicit representation with additional $O(C)$ time and space, where $C$ is the size of the basis. To obtain our results, we require that shortest paths be unique; this assumption can be removed deterministically with an additional $O(\log^2 n)$ running-time factor.
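As background for the Gomory-Hu representation mentioned above, the following is a minimal sketch of the standard fact that the min $st$-cut value equals the minimum edge weight on the unique $s$-$t$ path in a Gomory-Hu tree. It is not the paper's oracle (which achieves constant query time with additional machinery); the function name and the toy tree are illustrative, and the tree is assumed to be precomputed as a weighted adjacency list.

```python
from collections import deque

def min_st_cut_from_gomory_hu(tree, s, t):
    """Return the min s-t cut value given a precomputed Gomory-Hu tree.

    `tree` maps each vertex to a list of (neighbor, weight) pairs; the min
    s-t cut of the original graph equals the smallest edge weight on the
    unique s-t path in the tree.
    """
    parent = {s: (None, float("inf"))}
    queue = deque([s])
    while queue:                      # BFS from s to locate t in the tree
        u = queue.popleft()
        if u == t:
            break
        for v, w in tree[u]:
            if v not in parent:
                parent[v] = (u, w)
                queue.append(v)
    # Walk back from t to s, tracking the bottleneck (minimum) edge weight.
    cut, v = float("inf"), t
    while parent[v][0] is not None:
        u, w = parent[v]
        cut = min(cut, w)
        v = u
    return cut

# Toy example: Gomory-Hu tree edges 0-1 (w=3), 1-2 (w=2), 1-3 (w=5);
# the min 0-2 cut is min(3, 2) = 2.
tree = {0: [(1, 3)], 1: [(0, 3), (2, 2), (3, 5)], 2: [(1, 2)], 3: [(1, 5)]}
print(min_st_cut_from_gomory_hu(tree, 0, 2))  # -> 2
```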
{"title":"Min st-cut Oracle for Planar Graphs with Near-Linear Preprocessing Time","authors":"G. Borradaile, P. Sankowski, Christian Wulff-Nilsen","doi":"10.1145/2684068","DOIUrl":"https://doi.org/10.1145/2684068","url":null,"abstract":"For an undirected $n$-vertex planar graph $G$ with non-negative edge-weights, we consider the following type of query: given two vertices $s$ and $t$ in $G$, what is the weight of a min $st$-cut in $G$? We show how to answer such queries in constant time with $O(nlog^5n)$ preprocessing time and $O(nlog n)$ space. We use a Gomory-Hu tree to represent all the pair wise min $st$-cuts implicitly. Previously, no sub quadratic time algorithm was known for this problem. Our oracle can be extended to report the min $st$-cuts in time proportional to their size. Since all-pairs min $st$-cut and the minimum cycle basis are dual problems in planar graphs, we also obtain an implicit representation of a minimum cycle basis in $O(nlog^5n)$ time and $O(nlog n)$ space and an explicit representation with additional $O(C)$ time and space where $C$ is the size of the basis. To obtain our results, we require that shortest paths be unique, this assumption can be removed deterministically with an additional $O(log^2 n)$ running-time factor.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130505672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
{"title":"Budget Feasible Mechanisms","authors":"Yaron Singer","doi":"10.1109/FOCS.2010.78","DOIUrl":"https://doi.org/10.1109/FOCS.2010.78","url":null,"abstract":"We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of sub modular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the sub modular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131137882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Given a set system $(V, \mathcal{S})$, $V = \{1, \ldots, n\}$ and $\mathcal{S} = \{S_1, \ldots, S_m\}$, the minimum discrepancy problem is to find a 2-coloring $\mathcal{X} : V \rightarrow \{-1, +1\}$ such that each set is colored as evenly as possible, i.e., find $\mathcal{X}$ to minimize $\max_{j \in [m]} \left|\sum_{i \in S_j} \mathcal{X}(i)\right|$. In this paper we give the first polynomial-time algorithms for discrepancy minimization that achieve bounds similar to those known existentially using the so-called Entropy Method. We also give a first approximation-like result for discrepancy. Specifically, we give efficient randomized algorithms to: 1. Construct an $O(n^{1/2})$ discrepancy coloring for general set systems when $m = O(n)$, matching the celebrated result of Spencer up to $O(1)$ factors. More generally, for $m \geq n$, we obtain a discrepancy of $O(n^{1/2} \log(2m/n))$. 2. Construct a coloring with discrepancy $O(t^{1/2} \log n)$, if each element lies in at most $t$ sets. This matches the (non-constructive) result of Srinivasan. 3. Construct a coloring with discrepancy $O(\lambda \log(nm))$, where $\lambda$ is the hereditary discrepancy of the set system. The main idea in our algorithms is to produce a coloring over time by letting the colors of the elements perform a random walk (with tiny increments) starting from 0 until they reach $\pm 1$. At each step the random hops for the various elements are correlated by a solution to a semidefinite program, where this program is determined by the current state and the entropy method.
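The following is a minimal sketch of the floating-colors skeleton described above, with one deliberate simplification: the per-step increments are independent Gaussians rather than increments correlated through the semidefinite program the paper solves at each step, so this toy does not achieve the stated discrepancy bounds; it only illustrates the walk-until-frozen mechanics. All names and the small instance are illustrative.

```python
import random

def random_walk_coloring(n, sets, step=0.01, seed=0):
    """Toy version of the floating-colors framework: each element's color
    starts at 0 and performs a small random walk; once it hits +1 or -1 it
    is frozen. NOTE: the actual algorithm correlates the increments via an
    SDP determined by the current state; independent steps are a stand-in."""
    rng = random.Random(seed)
    x = [0.0] * n                      # fractional colors in [-1, 1]
    alive = set(range(n))              # elements not yet frozen at +-1
    while alive:
        for i in list(alive):
            x[i] += rng.gauss(0.0, step)
            if abs(x[i]) >= 1.0:       # freeze once the walk reaches +-1
                x[i] = 1.0 if x[i] > 0 else -1.0
                alive.discard(i)
    coloring = [int(v) for v in x]
    disc = max(abs(sum(coloring[i] for i in S)) for S in sets)
    return coloring, disc

# Tiny illustrative instance: 6 elements, 3 sets.
sets = [[0, 1, 2], [2, 3, 4], [0, 3, 5]]
coloring, disc = random_walk_coloring(6, sets)
print(coloring, "discrepancy:", disc)
```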
{"title":"Constructive Algorithms for Discrepancy Minimization","authors":"N. Bansal","doi":"10.1109/FOCS.2010.7","DOIUrl":"https://doi.org/10.1109/FOCS.2010.7","url":null,"abstract":"Given a set system $(V,mathcal{S})$, $V={1,ldots,n}$ and $mathcal{S}={S_1,ldots,S_m}$, the minimum discrepancy problem is to find a 2-coloring $mathcal{X}:V right arrow {-1,+1}$, such that each set is colored as evenly as possible, i.e. find $mathcal{X}$ to minimize $max_{j in [m]} left|sum_{i in S_j} mathcal{X}(i)right|$. In this paper we give the first polynomial time algorithms for discrepancy minimization that achieve bounds similar to those known existentially using the so-called Entropy Method. We also give a first approximation-like result for discrepancy. Specifically we give efficient randomized algorithms to: 1. Construct an $O(n^{1/2})$ discrepancy coloring for general sets systems when $m=O(n)$, matching the celebrated result of Spencer up to $O(1)$ factors. More generally, for $mgeq n$, we obtain a discrepancy of $O(n^{1/2} log (2m/n))$. 2. Construct a coloring with discrepancy $O(t^{1/2} log n)$, if each element lies in at most $t$ sets. This matches the (non-constructive) result of Srinivasan. 3. Construct a coloring with discrepancy $O( lambdalog (nm))$, where $lambda$ is the hereditary discrepancy of the set system. The main idea in our algorithms is to produce a coloring over time by letting the color of the elements perform a random walk (with tiny increments) starting from 0 until they reach $pm 1$. At each step the random hops for various elements are correlated by a solution to a semi definite program, where this program is determined by the current state and the entropy method.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127851426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the problem of identity testing for depth-3 circuits of top fanin k and degree d. We give a new structure theorem for such identities. A direct application of our theorem improves the known deterministic d^{k^k}-time black-box identity test over rationals (Kayal & Saraf, FOCS 2009) to one that takes d^{k^2}-time. Our structure theorem essentially says that the number of independent variables in a real depth-3 identity is very small. This theorem affirmatively settles the strong rank conjecture posed by Dvir & Shpilka (STOC 2005). We devise a powerful algebraic framework and develop tools to study depth-3 identities. We use these tools to show that any depth-3 identity contains a much smaller nucleus identity that contains most of the "complexity" of the main identity. The special properties of this nucleus allow us to get almost optimal rank bounds for depth-3 identities.
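As hedged background only (the paper's contribution is a deterministic test, which is not shown here): a black-box identity test evaluates the circuit solely at chosen points, and the simplest such test is randomized, via the Schwartz-Zippel lemma, which says a nonzero polynomial of degree d vanishes at a uniformly random point from a set S in each coordinate with probability at most d/|S|. The depth-3 (sum-of-products-of-linear-forms) circuit below is a made-up example and the representation is illustrative.

```python
import random

def depth3_eval(gates, point, p):
    """Evaluate a depth-3 Sigma-Pi-Sigma circuit mod a prime p.
    Each multiplication gate is a list of linear forms; a linear form is
    (constant, coefficient_list). The circuit is the sum of the products."""
    total = 0
    for gate in gates:
        prod = 1
        for c0, coeffs in gate:
            prod = prod * ((c0 + sum(a * x for a, x in zip(coeffs, point))) % p) % p
        total = (total + prod) % p
    return total

def randomized_black_box_pit(circuit, n, p=2_147_483_647, trials=20):
    """Schwartz-Zippel test: if the polynomial is nonzero of degree d, a
    random evaluation mod p is zero with probability at most d/p per trial."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(n)]
        if depth3_eval(circuit, point, p) != 0:
            return False               # provably not identically zero
    return True                        # identically zero with high probability

# Made-up identity: (x+y)(x-y) + (-x)(x) + (y)(y) == 0 as a depth-3 circuit.
circuit = [
    [(0, [1, 1]), (0, [1, -1])],       # (x + y) * (x - y)
    [(0, [-1, 0]), (0, [1, 0])],       # (-x) * (x)
    [(0, [0, 1]), (0, [0, 1])],        # (y) * (y)
]
print(randomized_black_box_pit(circuit, n=2))  # -> True
```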
{"title":"From Sylvester-Gallai Configurations to Rank Bounds: Improved Black-Box Identity Test for Depth-3 Circuits","authors":"Nitin Saxena, C. Seshadhri","doi":"10.1145/2528403","DOIUrl":"https://doi.org/10.1145/2528403","url":null,"abstract":"We study the problem of identity testing for depth-3 circuits of top fanin k and degree d. We give a new structure theorem for such identities. A direct application of our theorem improves the known deterministic d^{k^k}-time black-box identity test over rationals (Kayal & Saraf, FOCS 2009) to one that takes d^{k^2}-time. Our structure theorem essentially says that the number of independent variables in a real depth-3 identity is very small. This theorem affirmatively settles the strong rank conjecture posed by Dvir & Shpilka (STOC 2005). We devise a powerful algebraic framework and develop tools to study depth-3 identities. We use these tools to show that any depth-3 identity contains a much smaller nucleus identity that contains most of the \"complexity\" of the main identity. The special properties of this nucleus allow us to get almost optimal rank bounds for depth-3 identities.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131246695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Lovász Local Lemma (LLL) is a powerful tool that gives sufficient conditions for avoiding all of a given set of "bad" events, with positive probability. A series of results have provided algorithms to efficiently construct structures whose existence is non-constructively guaranteed by the LLL, culminating in the recent breakthrough of Moser & Tardos. We show that the output distribution of the Moser-Tardos algorithm well-approximates the conditional LLL-distribution, the distribution obtained by conditioning on all bad events being avoided. We show how a known bound on the probabilities of events in this distribution can be used for further probabilistic analysis and give new constructive and non-constructive results. We also show that when an LLL application provides a small amount of slack, the number of resamplings of the Moser-Tardos algorithm is nearly linear in the number of underlying independent variables (not events!), and can thus be used to give efficient constructions in cases where the underlying proof applies the LLL to super-polynomially many events. Even in cases where finding a bad event that holds is computationally hard, we show that applying the algorithm to avoid a polynomial-sized "core" subset of bad events leads to a desired outcome with high probability. We demonstrate this idea on several applications. We give the first constant-factor approximation algorithm for the Santa Claus problem by making an LLL-based proof of Feige constructive. We provide Monte Carlo algorithms for acyclic edge coloring, non-repetitive graph colorings, and Ramsey-type graphs. In all these applications the algorithm falls directly out of the non-constructive LLL-based proof. Our algorithms are very simple, often provide better bounds than previous algorithms, and are in several cases the first efficient algorithms known. As a second type of application we consider settings beyond the critical dependency threshold of the LLL: avoiding all bad events is impossible in these cases. As the first (even non-constructive) result of this kind, we show that by sampling from the LLL-distribution of a selected smaller core, we can avoid a fraction of bad events that is higher than the expectation. MAX $k$-SAT is an example of this.
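For readers unfamiliar with the Moser-Tardos algorithm analyzed above, here is a minimal sketch of it on a k-SAT instance, where the bad event for a clause is "all its literals are false". The function and variable names are illustrative, and the LLL condition on clause overlaps that guarantees fast termination is not checked by this code.

```python
import random

def moser_tardos_ksat(n_vars, clauses, seed=0, max_resamples=10**6):
    """Moser-Tardos resampling: start from a uniformly random assignment and,
    while some clause is violated, resample the variables of one violated
    clause uniformly at random. Under the LLL condition (each clause shares
    variables with few others), the expected number of resamplings is small.

    A clause is a list of literals; literal +i means variable i is true,
    -i means variable i is false (variables are numbered 1..n_vars)."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def violated(clause):
        return all(assign[abs(lit)] != (lit > 0) for lit in clause)

    resamples = 0
    while resamples < max_resamples:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign[1:], resamples      # satisfying assignment found
        clause = rng.choice(bad)              # any violated clause will do
        for lit in clause:                    # resample just its variables
            assign[abs(lit)] = rng.random() < 0.5
        resamples += 1
    raise RuntimeError("resampling budget exceeded")

# Tiny 3-SAT instance: (x1 or x2 or x3) and (not x1 or x2 or not x3).
clauses = [[1, 2, 3], [-1, 2, -3]]
print(moser_tardos_ksat(3, clauses))
```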
{"title":"New Constructive Aspects of the Lovasz Local Lemma","authors":"Bernhard Haeupler, B. Saha, A. Srinivasan","doi":"10.1145/2049697.2049702","DOIUrl":"https://doi.org/10.1145/2049697.2049702","url":null,"abstract":"The Lov'{a}sz Local Lemma (LLL) is a powerful tool that gives sufficient conditions for avoiding all of a given set of ``bad'' events, with positive probability. A series of results have provided algorithms to efficiently construct structures whose existence is non-constructively guaranteed by the LLL, culminating in the recent breakthrough of Moser & Tardos. We show that the output distribution of the Moser-Tardos algorithm well-approximates the emph{conditional LLL-distribution} – the distribution obtained by conditioning on all bad events being avoided. We show how a known bound on the probabilities of events in this distribution can be used for further probabilistic analysis and give new constructive and non-constructive results. We also show that when an LLL application provides a small amount of slack, the number of resamplings of the Moser-Tardos algorithm is nearly linear in the number of underlying independent variables (not events!), and can thus be used to give efficient constructions in cases where the underlying proof applies the LLL to super-polynomially many events. Even in cases where finding a bad event that holds is computationally hard, we show that applying the algorithm to avoid a polynomial-sized ``core'' subset of bad events leads to a desired outcome with high probability. We demonstrate this idea on several applications. We give the first constant-factor approximation algorithm for the Santa Claus problem by making an LLL-based proof of Feige constructive. We provide Monte Carlo algorithms for acyclic edge coloring, non-repetitive graph colorings, and Ramsey-type graphs. In all these applications the algorithm falls directly out of the non-constructive LLL-based proof. Our algorithms are very simple, often provide better bounds than previous algorithms, and are in several cases the first efficient algorithms known. As a second type of application we consider settings beyond the critical dependency threshold of the LLL: avoiding all bad events is impossible in these cases. As the first (even non-constructive) result of this kind, we show that by sampling from the LLL-distribution of a selected smaller core, we can avoid a fraction of bad events that is higher than the expectation. MAX $k$-SAT is an example of this.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"289 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122303044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. However, although the first analysis of a dynamic dictionary dates back more than 45 years (to Knuth's 1963 analysis of linear probing), the trade-off between these aspects of performance is still not completely understood. In this paper we settle two fundamental open problems: 1. We construct the first dynamic dictionary that enjoys the best of both worlds: it stores $n$ elements using $(1 + \epsilon)n$ memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any $\epsilon = \Omega((\log \log n / \log n)^{1/2})$ and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of $\epsilon$. The construction is a two-level variant of cuckoo hashing, augmented with a "backyard" that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on $\epsilon$. 2. We present a variant of the above construction that uses only $(1 + o(1))B$ bits, where $B$ is the information-theoretic lower bound for representing a set of size $n$ taken from a universe of size $u$, and guarantees constant-time operations in the worst case with high probability, as before. This problem was open even in the amortized setting. One of the main ingredients of our construction is a permutation-based variant of cuckoo hashing, which significantly improves the space consumption of cuckoo hashing when dealing with a rather small universe.
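For context, here is a minimal sketch of plain two-table cuckoo hashing, the building block the paper extends; it is not the two-level backyard construction itself. The two hash functions are derived from Python's built-in `hash` with random salts, purely for illustration, and insertion evicts along a bounded path before rebuilding with fresh salts.

```python
import random

class CuckooHashTable:
    """Plain two-table cuckoo hashing: each key has one slot per table,
    lookups probe at most two cells, insertions evict occupants as needed."""

    def __init__(self, capacity=16, max_kicks=32):
        self.cap, self.max_kicks = capacity, max_kicks
        self.salts = (random.random(), random.random())
        self.tables = [[None] * capacity, [None] * capacity]

    def _slot(self, key, i):
        return hash((self.salts[i], key)) % self.cap

    def lookup(self, key):
        return any(self.tables[i][self._slot(key, i)] == key for i in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return
        for _ in range(self.max_kicks):
            for i in (0, 1):
                pos = self._slot(key, i)
                if self.tables[i][pos] is None:
                    self.tables[i][pos] = key
                    return
                key, self.tables[i][pos] = self.tables[i][pos], key  # evict
        self._rehash(key)              # eviction cycle: rebuild with new salts

    def _rehash(self, pending):
        old = [k for t in self.tables for k in t if k is not None] + [pending]
        self.__init__(self.cap * 2, self.max_kicks)
        for k in old:
            self.insert(k)

t = CuckooHashTable()
for k in range(10):
    t.insert(k)
print(all(t.lookup(k) for k in range(10)), t.lookup(99))  # -> True False
```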
{"title":"Backyard Cuckoo Hashing: Constant Worst-Case Operations with a Succinct Representation","authors":"Yuriy Arbitman, M. Naor, G. Segev","doi":"10.1109/FOCS.2010.80","DOIUrl":"https://doi.org/10.1109/FOCS.2010.80","url":null,"abstract":"The performance of a dynamic dictionary is measured mainly by its update time, lookup time, and space consumption. In terms of update time and lookup time there are known constructions that guarantee constant-time operations in the worst case with high probability, and in terms of space consumption there are known constructions that use essentially optimal space. However, although the first analysis of a dynamic dictionary dates back more than 45 years ago (when Knuth analyzed linear probing in 1963), the trade-off between these aspects of performance is still not completely understood. In this paper we settle two fundamental open problems: begin{itemize} item We construct the first dynamic dictionary that enjoys the best of both worlds: it stores $boldsymbol{n}$ elements using $boldsymbol{(1 + epsilon) n}$ memory words, and guarantees constant-time operations in the worst case with high probability. Specifically, for any boldsymbol{epsilon = Omega ( (log log n / log n)^{1/2} )}$ and for any sequence of polynomially many operations, with high probability over the randomness of the initialization phase, all operations are performed in constant time which is independent of $boldsymbol{epsilon}$. The construction is a two-level variant of cuckoo hashing, augmented with a ``backyard'' that handles a large fraction of the elements, together with a de-amortized perfect hashing scheme for eliminating the dependency on $boldsymbol{epsilon}$. item We present a variant of the above construction that uses only $boldsymbol{(1 + o(1))B}$ bits, where $boldsymbol{B}$ is the information-theoretic lower bound for representing a set of size $boldsymbol{n}$ taken from a universe of size $boldsymbol{u}$, and guarantees constant-time operations in the worst case with high probability, as before. This problem was open even in the {em amortized} setting. One of the main ingredients of our construction is a permutation-based variant of cuckoo hashing, which significantly improves the space consumption of cuckoo hashing when dealing with a rather small universe. end{itemize}","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121854045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the design of truthful mechanisms for set systems, i.e., scenarios where a customer needs to hire a team of agents to perform a complex task. In this setting, frugality [2] provides a measure to evaluate the "cost of truthfulness", that is, the overpayment of a truthful mechanism relative to the "fair" payment. We propose a uniform scheme for designing frugal truthful mechanisms for general set systems. Our scheme is based on scaling the agents' bids using the eigenvector of a matrix that encodes the interdependencies between the agents. We demonstrate that the $r$-out-of-$k$-system mechanism and the $\sqrt{\,}$-mechanism for buying a path in a graph [18] can be viewed as instantiations of our scheme. We then apply our scheme to two other classes of set systems, namely, vertex cover systems and $k$-path systems, in which a customer needs to purchase $k$ edge-disjoint source-sink paths. For both settings, we bound the frugality of our mechanism in terms of the largest eigenvalue of the respective interdependency matrix. We show that our mechanism is optimal for a large subclass of vertex cover systems satisfying a simple local sparsity condition. For $k$-path systems, our mechanism is within a factor of $k+1$ of optimal; moreover, we show that it is, in fact, optimal when one uses a modified definition of frugality proposed in [10]. Our lower bound argument combines spectral techniques and Young's inequality, and is applicable to all set systems. As both $r$-out-of-$k$ systems and single path systems can be viewed as special cases of $k$-path systems, our result improves the lower bounds of [18] and answers several open questions proposed in [18].
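The sketch below illustrates only the bid-scaling step described above, not the full mechanism (winner selection and truthful threshold payments are omitted): it computes the dominant (Perron) eigenvector of a nonnegative interdependency matrix by power iteration and multiplies each agent's bid by the corresponding entry. The matrix and bids are made up for illustration.

```python
import numpy as np

def dominant_eigenvector(M, iters=1000, tol=1e-12):
    """Power iteration for the dominant (Perron) eigenvector of a
    nonnegative matrix M; returns a nonnegative unit vector."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        w = M @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

def scaled_bids(bids, interdependency):
    """Scale each agent's bid by its entry in the dominant eigenvector of
    the interdependency matrix (the scaling idea only; selection and
    payments of the actual mechanism are not shown)."""
    v = dominant_eigenvector(interdependency)
    return bids * v

# Made-up instance: 3 agents, symmetric nonnegative interdependency matrix.
M = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 2.0, 0.0]])
bids = np.array([4.0, 3.0, 5.0])
print(scaled_bids(bids, M))
```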
{"title":"Frugal Mechanism Design via Spectral Techniques","authors":"Ning Chen, E. Elkind, N. Gravin, F. Petrov","doi":"10.1109/FOCS.2010.77","DOIUrl":"https://doi.org/10.1109/FOCS.2010.77","url":null,"abstract":"We study the design of truthful mechanisms for set systems, i.e., scenarios where a customer needs to hire a team of agents to perform a complex task. In this setting, frugality [2] provides a measure to evaluate the \"cost of truthfulness\", that is, the overpayment of a truthful mechanism relative to the \"fair\" payment. We propose a uniform scheme for designing frugal truthful mechanisms for general set systems. Our scheme is based on scaling the agents' bids using the eigenvector of a matrix that encodes the interdependencies between the agents. We demonstrate that the $r$-out-of-$k$-system mechanism and the $^{sqrt{ }}$-mechanism for buying a path in a graph [18] can be viewed as instantiations of our scheme. We then apply our scheme to two other classes of set systems, namely, vertex cover systems and $k$-path systems, in which a customer needs to purchase $k$ edge-disjoint source-sink paths. For both settings, we bound the frugality of our mechanism in terms of the largest eigenvalue of the respective interdependency matrix. We show that our mechanism is optimal for a large subclass of vertex cover systems satisfying a simple local sparsity condition. For $k$-path systems, our mechanism is within a factor of $k+1$ from optimal, moreover, we show that it is, in fact, optimal, when one uses a modified definition of frugality proposed in [10]. Our lower bound argument combines spectral techniques and Young's inequality, and is applicable to all set systems. As both $r$-out-of-$k$ systems and single path systems can be viewed as special cases of $k$-path systems, our result improves the lower bounds of [18] and answers several open questions proposed in [18].","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134221737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study truthful mechanisms for hiring a team of agents in three classes of set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For Vertex Cover auctions, the vertices are owned by selfish and rational agents, and the auctioneer wants to purchase a vertex cover from them. For k-flow auctions, the edges are owned by the agents, and the auctioneer wants to purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know their costs, and the auctioneer needs to select a feasible set and payments based on bids made by the agents. We present constant-competitive truthful mechanisms for all three set systems. That is, the maximum overpayment of the mechanism is within a constant factor of the maximum overpayment of any truthful mechanism, for every set system in the class. The mechanism for Vertex Cover is based on scaling each bid by a multiplier derived from the dominant eigenvector of a certain matrix. The mechanism for k-flows prunes the graph to be minimally (k + 1)-connected, and then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts contracts the graph until all s-t paths have length exactly 2, and then applies the Vertex Cover mechanism.
{"title":"Frugal and Truthful Auctions for Vertex Covers, Flows and Cuts","authors":"D. Kempe, Mahyar Salek, Cristopher Moore","doi":"10.1109/FOCS.2010.76","DOIUrl":"https://doi.org/10.1109/FOCS.2010.76","url":null,"abstract":"We study truthful mechanisms for hiring a team of agents in three classes of set systems: Vertex Cover auctions, k-???ow auctions, and cut auctions. For Vertex Cover auctions, the vertices are owned by selfish and rational agents, and the auctioneer wants to purchase a vertex cover from them. For k-???ow auctions, the edges are owned by the agents, and the auctioneer wants to purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know their costs, and the auctioneer needs to select a feasible set and payments based on bids made by the agents. We present constant-competitive truthful mechanisms for all three set systems. That is, the maximum overpayment of the mechanism is within a constant factor of the maximum overpayment of any truthful mechanism, for every set system in the class. The mechanism for Vertex Cover is based on scaling each bid by a multiplier derived from the dominant eigenvector of a certain matrix. The mechanism for k-???ows prunes the graph to be minimally (k + 1)-connected, and then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts contracts the graph until all s-t paths have length exactly 2, and then applies the Vertex Cover mechanism.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128815471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}