Computing with Tangles
Martin Grohe, Pascal Schweitzer
DOI: 10.1145/2746539.2746587

Tangles of graphs were introduced by Robertson and Seymour in the context of their graph minor theory. Tangles may be viewed as describing "k-connected components" of a graph (though in a twisted way), and they play an important role in graph minor theory. An interesting aspect of tangles is that they can be defined not only for graphs but, more generally, for arbitrary connectivity functions (that is, integer-valued submodular and symmetric set functions). However, tangles are difficult to deal with algorithmically. To start with, it is unclear how to represent them, because they are families of separations and as such may be exponentially large. Our first contribution is a data structure for representing and accessing all tangles of a graph up to some fixed order. Using this data structure, we can prove an algorithmic version of a very general structure theorem due to Carmesin, Diestel, Hamann and Hundertmark (for graphs) and Hundertmark (for arbitrary connectivity functions) that yields a canonical tree decomposition whose parts correspond to the maximal tangles. (This may be viewed as a generalisation of the decomposition of a graph into its 3-connected components.)
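For reference, the connectivity functions mentioned above are the integer-valued set functions κ on the subsets of a ground set U satisfying the standard symmetry and submodularity axioms; this formalisation is standard background rather than a quotation from the paper:

\kappa(A) = \kappa(U \setminus A) \qquad \text{(symmetry)}
\kappa(A \cup B) + \kappa(A \cap B) \le \kappa(A) + \kappa(B) \qquad \text{(submodularity)}

A canonical example is the cut function of a graph, mapping a vertex set A to the number of edges between A and its complement.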
Prioritized Metric Structures and Embedding
Michael Elkin, Arnold Filtser, Ofer Neiman
DOI: 10.1145/2746539.2746623

Metric data structures (distance oracles, distance labeling schemes, routing schemes) and low-distortion embeddings provide a powerful algorithmic methodology, which has been successfully applied for approximation algorithms [21], online algorithms [7], distributed algorithms [19] and for computing sparsifiers [28]. However, this methodology appears to have a limitation: the worst-case performance inherently depends on the cardinality of the metric, and one cannot specify in advance which vertices/points should enjoy better service (i.e., stretch/distortion, label size/dimension) than the worst-case guarantee provides. In this paper we alleviate this limitation by devising a suite of prioritized metric data structures and embeddings. We show that, given a priority ranking (x_1, x_2, ..., x_n) of the graph vertices (respectively, metric points), one can devise a metric data structure (respectively, embedding) in which the stretch (resp., distortion) incurred by any pair containing a vertex x_j depends only on the rank j of that vertex. We also show that other important parameters, such as the label size and (in some sense) the dimension, may depend only on j. In some of our metric data structures (resp., embeddings) we achieve both prioritized stretch (resp., distortion) and label size (resp., dimension) simultaneously. The worst-case performance of our metric data structures and embeddings is typically asymptotically no worse than that of their non-prioritized counterparts.
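One natural way to make the prioritized guarantee precise (our paraphrase of the property described above, with α(·) a non-decreasing function of the rank; the paper's formal definition may differ in details): a non-contracting embedding f: (X, d_X) → (Y, d_Y) has prioritized distortion α if for every pair with j < k,

d_X(x_j, x_k) \le d_Y(f(x_j), f(x_k)) \le \alpha(j) \cdot d_X(x_j, x_k),

so the guarantee for a pair is governed by the better-ranked of its two points.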
Clustered Integer 3SUM via Additive Combinatorics
Timothy M. Chan, Moshe Lewenstein
DOI: 10.1145/2746539.2746568

We present a collection of new results on problems related to 3SUM, including:
- The first truly subquadratic algorithm for computing the (min,+) convolution for monotone increasing sequences with integer values bounded by O(n), for solving 3SUM for monotone sets in 2D with integer coordinates bounded by O(n), and for preprocessing a binary string for histogram indexing (also called jumbled indexing). The running time is O(n^{(9+√177)/12} polylog n) = O(n^1.859) with randomization, or O(n^1.864) deterministically. This greatly improves the previous n^2 / 2^{Ω(√(log n))} time bound obtained from Williams' recent result on all-pairs shortest paths [STOC'14], and answers an open question raised by several researchers studying the histogram indexing problem.
- The first algorithm for histogram indexing for any constant alphabet size that achieves truly subquadratic preprocessing time and truly sublinear query time.
- A truly subquadratic algorithm for integer 3SUM in the case when the given set can be partitioned into n^{1-δ} clusters, each covered by an interval of length n, for any constant δ > 0.
- An algorithm to preprocess any set of n integers so that subsequently 3SUM on any given subset can be solved in O(n^{13/7} polylog n) time.
All these results are obtained by a surprising new technique, based on the Balog-Szemerédi-Gowers theorem from additive combinatorics.
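As background for the first result above, the quadratic-time baseline for (min,+) convolution is straightforward; the paper's contribution is beating it for bounded monotone sequences. A minimal sketch of the baseline (our own illustration, not code from the paper):

def minplus_convolution(a, b):
    # Naive O(n*m) (min,+) convolution: c[k] = min over i+j=k of a[i] + b[j].
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c

# Example: minplus_convolution([0, 1, 3], [2, 2, 5]) == [2, 2, 3, 5, 8]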
Secretary Problems with Non-Uniform Arrival Order
Thomas Kesselheim, Robert D. Kleinberg, Rad Niazadeh
DOI: 10.1145/2746539.2746602
For a number of problems in the theory of online algorithms, it is known that the assumption that elements arrive in uniformly random order enables the design of algorithms with much better performance guarantees than under worst-case assumptions. The quintessential example of this phenomenon is the secretary problem, in which an algorithm attempts to stop a sequence at the moment it observes the maximum value in the sequence. As is well known, if the sequence is presented in uniformly random order there is an algorithm that succeeds with probability 1/e, whereas no non-trivial performance guarantee is possible if the elements arrive in worst-case order. In many of the applications of online algorithms, it is reasonable to assume there is some randomness in the input sequence, but unreasonable to assume that the arrival ordering is uniformly random. This work initiates an investigation into relaxations of the random-ordering hypothesis in online algorithms, by focusing on the secretary problem and asking what performance guarantees one can prove under relaxed assumptions. Toward this end, we present two properties of distributions over permutations as sufficient conditions, called the (p,q,δ)-block-independence property and the (k,δ)-uniform-induced-ordering property. We show that the two are asymptotically equivalent, borrowing techniques from classical approximation theory. Moreover, we show they both imply the existence of secretary algorithms with constant probability of correct selection, approaching the optimal constant 1/e as the relevant parameters of the property tend towards their extreme values. Both of these properties are significantly weaker than the usual assumption of uniform randomness; we substantiate this by providing several constructions of distributions that satisfy (p,q,δ)-block-independence. As one application of our investigation, we prove that Θ(log log n) is the minimum entropy of any permutation distribution that permits constant probability of correct selection in the secretary problem with n elements. While our block-independence condition is sufficient for constant probability of correct selection, it is not necessary; however, we present complexity-theoretic evidence that no simple necessary and sufficient criterion exists. Finally, we explore the extent to which the performance guarantees of other algorithms are preserved when one relaxes the uniform random ordering assumption to (p,q,δ)-block-independence, obtaining a negative result for the weighted bipartite matching algorithm of Korula and Pál.
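The classical 1/e guarantee under uniformly random order is easy to verify empirically. A small simulation of the standard "observe the first ~n/e arrivals, then accept the first record" rule (our illustration, not code from the paper):

import math
import random

def secretary_trial(n):
    # One run of the classical rule on values 0..n-1 in random order.
    vals = list(range(n))
    random.shuffle(vals)                  # uniformly random arrival order
    cutoff = int(n / math.e)              # observation phase
    best_seen = max(vals[:cutoff], default=-1)
    for v in vals[cutoff:]:
        if v > best_seen:
            return v == n - 1             # success iff we stopped at the maximum
    return False                          # never stopped (max was in the prefix)

def success_rate(n=100, trials=20000):
    return sum(secretary_trial(n) for _ in range(trials)) / trials

# success_rate() hovers around 1/e ≈ 0.37 for moderate n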
Optimal Data-Dependent Hashing for Approximate Near Neighbors
Alexandr Andoni, Ilya P. Razenshteyn
DOI: 10.1145/2746539.2746553

We show an optimal data-dependent hashing scheme for the approximate near neighbor problem. For an n-point dataset in a d-dimensional space our data structure achieves query time O(d · n^{ρ+o(1)}) and space O(n^{1+ρ+o(1)} + d · n), where ρ = 1/(2c^2 - 1) for the Euclidean space and approximation c > 1. For the Hamming space, we obtain an exponent of ρ = 1/(2c - 1). Our result completes the direction set forth in (Andoni, Indyk, Nguyen, Razenshteyn 2014), who gave a proof-of-concept that data-dependent hashing can outperform classic Locality Sensitive Hashing (LSH). In contrast to (Andoni, Indyk, Nguyen, Razenshteyn 2014), the new bound is not only optimal, but in fact improves over the best (optimal) LSH data structures (Indyk, Motwani 1998) (Andoni, Indyk 2006) for all approximation factors c > 1. From the technical perspective, we proceed by decomposing an arbitrary dataset into several subsets that are, in a certain sense, pseudo-random.
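To see the improvement concretely: the optimal classic LSH exponents are ρ = 1/c^2 for Euclidean space (Andoni, Indyk 2006) and ρ = 1/c for Hamming space (Indyk, Motwani 1998), versus the data-dependent exponents stated above. A quick numeric comparison (our illustration; the classic exponents are from the cited LSH literature):

def exponents(c):
    # Query-time exponent rho: query time is n**rho up to lower-order terms.
    return {
        "euclidean_classic_LSH": 1 / c**2,        # Andoni-Indyk 2006
        "euclidean_data_dependent": 1 / (2 * c**2 - 1),
        "hamming_classic_LSH": 1 / c,             # Indyk-Motwani 1998
        "hamming_data_dependent": 1 / (2 * c - 1),
    }

# exponents(2) -> euclidean: 0.25 vs ~0.143; hamming: 0.5 vs ~0.333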
Sum of Squares Lower Bounds from Pairwise Independence
B. Barak, S. Chan, Pravesh Kothari
DOI: 10.1145/2746539.2746625

We prove that for every ε > 0 and every predicate P: {0,1}^k → {0,1} that supports a pairwise independent distribution, there exists an instance I of the Max P constraint satisfaction problem on n variables such that no assignment can satisfy more than a |P^{-1}(1)|/2^k + ε fraction of I's constraints, but the degree-Ω(n) Sum of Squares semidefinite programming hierarchy cannot certify that I is unsatisfiable. Similar results were previously known only for weaker hierarchies.
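For intuition on the hypothesis: a predicate P supports a pairwise independent distribution if some distribution μ on {0,1}^k with μ(P^{-1}(1)) = 1 has uniform marginals and pairwise independent coordinates. A standard example (our illustration, not taken from the paper) is the k-XOR predicate for k ≥ 3: the uniform distribution over

\{ x \in \{0,1\}^k : x_1 \oplus \cdots \oplus x_k = 1 \}

is supported on the satisfying assignments of P(x) = x_1 ⊕ ... ⊕ x_k, and any two coordinates under it are independent and uniform.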
Solving the Shortest Vector Problem in 2^n Time Using Discrete Gaussian Sampling: Extended Abstract
Divesh Aggarwal, D. Dadush, O. Regev, Noah Stephens-Davidowitz
DOI: 10.1145/2746539.2746606
We give a randomized 2^{n+o(n)}-time and -space algorithm for solving the Shortest Vector Problem (SVP) on n-dimensional Euclidean lattices. This improves on the previous fastest algorithm: the deterministic Õ(4^n)-time and Õ(2^n)-space algorithm of Micciancio and Voulgaris (STOC 2010, SIAM J. Comp. 2013). In fact, we give a conceptually simple algorithm that solves the (in our opinion, even more interesting) problem of discrete Gaussian sampling (DGS). More specifically, we show how to sample 2^{n/2} vectors from the discrete Gaussian distribution at any parameter in 2^{n+o(n)} time and space. (Prior work only solved DGS for very large parameters.) Our SVP result then follows from a natural reduction from SVP to DGS. In addition, we give a more refined algorithm for DGS above the so-called smoothing parameter of the lattice, which can generate 2^{n/2} discrete Gaussian samples in just 2^{n/2+o(n)} time and space. Among other things, this implies a 2^{n/2+o(n)}-time and -space algorithm for 1.93-approximate decision SVP.
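For reference, the discrete Gaussian distribution on a lattice L ⊂ R^n with parameter s > 0 is the standard one from the lattice literature (stated here for context, not quoted from the paper):

D_{L,s}(x) = \rho_s(x) / \rho_s(L), \quad \text{where } \rho_s(x) = e^{-\pi \|x\|^2 / s^2} \text{ and } \rho_s(L) = \sum_{y \in L} \rho_s(y).

As the parameter s shrinks, D_{L,s} concentrates on short lattice vectors, which is the intuition behind reducing SVP to DGS.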
Greedy Algorithms for Steiner Forest
Anupam Gupta, Amit Kumar
DOI: 10.1145/2746539.2746590

In the Steiner Forest problem, we are given terminal pairs (s_i, t_i) and need to find the cheapest subgraph which connects each terminal pair. In 1991, Agrawal, Klein, and Ravi gave a primal-dual constant-factor approximation algorithm for this problem. Until this work, the only constant-factor approximations known were via linear programming relaxations. In this paper, we consider the following greedy algorithm: given terminal pairs in a metric space, call a terminal active if its distance to its partner is non-zero; pick the two closest active terminals (say s_i and t_j), set the distance between them to zero, and buy a path connecting them; recompute the metric and repeat. It has long been open to analyze this greedy algorithm. Our main result shows that this algorithm is a constant-factor approximation. We use this algorithm to give new, simpler constructions of cost-sharing schemes for Steiner forest. In particular, the first "group-strict" cost-shares for this problem imply a very simple combinatorial sampling-based algorithm for stochastic Steiner forest.
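A compact sketch of the greedy scheme just described, on a metric given as a distance matrix (our illustration of the rule in the abstract, not the paper's code; it tracks only the cost bought, not the subgraph):

import itertools

def greedy_steiner_forest_cost(dist, pairs):
    # dist: symmetric n x n metric; pairs: list of (s, t) terminal index pairs.
    n = len(dist)
    d = [row[:] for row in dist]          # working metric, updated in place
    terminals = sorted({v for p in pairs for v in p})

    def active(v):
        # a terminal is active while some pair containing it is unconnected
        return any(d[s][t] > 0 for (s, t) in pairs if v in (s, t))

    total = 0
    while True:
        candidates = [(d[u][v], u, v)
                      for u, v in itertools.combinations(terminals, 2)
                      if d[u][v] > 0 and active(u) and active(v)]
        if not candidates:
            break                          # every terminal pair is connected
        cost, u, v = min(candidates)
        total += cost                      # "buy" a shortest u-v path
        d[u][v] = d[v][u] = 0              # set their distance to zero
        for a in range(n):                 # recompute the metric: relax every
            for b in range(n):             # pair through the new zero edge
                d[a][b] = min(d[a][b], d[a][u] + d[v][b], d[a][v] + d[u][b])
    return total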
A Characterization of the Capacity of Online (causal) Binary Channels
Zitan Chen, S. Jaggi, M. Langberg
DOI: 10.1145/2746539.2746591

In the binary online (or "causal") channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword x = (x_1, ..., x_n) ∈ {0,1}^n bit by bit via a channel limited to at most pn corruptions. The channel is "online" in the sense that at the i-th step of communication the channel decides whether to corrupt the i-th bit or not based on its view so far, i.e., its decision depends only on the transmitted bits (x_1, ..., x_i). This is in contrast to the classical adversarial channel, in which the error is chosen by a channel that has full knowledge of the transmitted codeword x. In this work we study the capacity of binary online channels for two corruption models: the bit-flip model, in which the channel may flip at most pn of the bits of the transmitted codeword, and the erasure model, in which the channel may erase at most pn bits of the transmitted codeword. Specifically, for both error models we give a full characterization of the capacity as a function of p. The online channel (in both the bit-flip and erasure case) has seen a number of recent studies which present both upper and lower bounds on its capacity. In this work, we present and analyze a coding scheme that improves on the previously suggested lower bounds and matches the previously suggested upper bounds, thus implying a tight characterization.
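In symbols, the model described above can be written as follows (our formalization of the abstract's description): the channel chooses each error bit causally,

e_i = f_i(x_1, \ldots, x_i) \in \{0,1\}, \qquad \sum_{i=1}^{n} e_i \le pn,

and the receiver observes y_i = x_i ⊕ e_i in the bit-flip model (respectively, an erasure symbol whenever e_i = 1 in the erasure model), whereas the classical adversary may choose the error vector e as a function of the entire codeword x.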
Inapproximability of Truthful Mechanisms via Generalizations of the VC Dimension
Amit Daniely, Michael Schapira, Gal Shahaf
DOI: 10.1145/2746539.2746597

Algorithmic mechanism design (AMD) studies the delicate interplay between computational efficiency, truthfulness, and optimality. We focus on AMD's paradigmatic problem: combinatorial auctions. We present a new generalization of the VC dimension to multivalued collections of functions, which encompasses the classical VC dimension, Natarajan dimension, and Steele dimension. We present a corresponding generalization of the Sauer-Shelah Lemma and harness this VC machinery to establish inapproximability results for deterministic truthful mechanisms. Our results essentially unify all inapproximability results for deterministic truthful mechanisms for combinatorial auctions to date and establish new separation gaps between truthful and non-truthful algorithms.
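As background, the classical notion that the paper generalizes: a family F of functions f: X → {0,1} shatters a set S ⊆ X if every 0/1 pattern on S is realized by some f ∈ F, and the VC dimension is the size of the largest shattered set (standard definition, included for context):

VC(F) = \max \{ |S| : \{ f|_S : f \in F \} = \{0,1\}^S \}.

The generalization to multivalued collections of functions subsumes this notion along with the Natarajan and Steele dimensions.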