"Relating two property testing models for bounded degree directed graphs"
A. Czumaj, Pan Peng, C. Sohler. STOC 2016. doi:10.1145/2897518.2897575

We study property testing algorithms in directed graphs (digraphs) whose maximum indegree and maximum outdegree are bounded by d. For bounded-degree digraphs, two property testing models were introduced by Bender and Ron (2002): in the bidirectional model one can access both incoming and outgoing edges, while in the unidirectional model one can access only outgoing edges. We prove a new relation between the two models: if a property can be tested with constant query complexity in the bidirectional model, then it can be tested with sublinear query complexity in the unidirectional model. A corollary is that in the unidirectional model, every property of hyperfinite digraphs is testable with sublinear query complexity.

{"title":"Relating two property testing models for bounded degree directed graphs","authors":"A. Czumaj, Pan Peng, C. Sohler","doi":"10.1145/2897518.2897575","DOIUrl":"https://doi.org/10.1145/2897518.2897575","url":null,"abstract":"We study property testing algorithms in directed graphs (digraphs) with maximum indegree and maximum outdegree upper bounded by d. For directed graphs with bounded degree, there are two different models in property testing introduced by Bender and Ron (2002). In the bidirectional model, one can access both incoming and outgoing edges while in the unidirectional model one can only access outgoing edges. In our paper we provide a new relation between the two models: we prove that if a property can be tested with constant query complexity in the bidirectional model, then it can be tested with sublinear query complexity in the unidirectional model. A corollary of this result is that in the unidirectional model (the model allowing only queries to the outgoing neighbors), every property in hyperfinite digraphs is testable with sublinear query complexity.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131973349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Fault tolerant subgraph for single source reachability: generic and optimal"
Surender Baswana, Keerti Choudhary, L. Roditty. STOC 2016. doi:10.1145/2897518.2897648

Let G=(V,E) be an n-vertex, m-edge directed graph, and let s ∈ V be a designated source vertex. We address the problem of single-source reachability (SSR) from s in the presence of vertex/edge failures. We show that for every k ≥ 1 there is a subgraph H of G with at most 2^k n edges that preserves reachability from s even after the failure of any k edges: given a set F of k edges, a vertex u ∈ V is reachable from s in G∖F if and only if u is reachable from s in H∖F. We call H a k-Fault Tolerant Reachability Subgraph (k-FTRS). We also prove a matching lower bound of Ω(2^k n) edges for such subgraphs. Our results extend to vertex failures without any extra overhead. The general construction of a k-FTRS is interesting from several perspectives. From the graph-theoretic perspective, it reveals a separation between SSR and single-source shortest paths (SSSP) in directed graphs: for SSSP in weighted directed graphs there is a lower bound of Ω(m) edges even for a single edge failure, and in unweighted graphs a lower bound of Ω(n^(3/2)) edges, again for a single edge failure. The latter bound has a matching upper bound, but nothing is known for two or more failures in directed graphs. From the algorithmic perspective, it yields fault-tolerant solutions to other problems, namely (i) verifying whether the strong connectivity of a graph is preserved after k edge or vertex failures, and (ii) computing a dominator tree of a graph after k failures. From the perspective of techniques, it makes interesting use of the concept of the farthest min-cut, introduced by Ford and Fulkerson in their pioneering work on flows and cuts. We show a close relationship between the farthest min-cut and the k-FTRS, and we believe our new technique is of independent interest.

{"title":"Fault tolerant subgraph for single source reachability: generic and optimal","authors":"Surender Baswana, Keerti Choudhary, L. Roditty","doi":"10.1145/2897518.2897648","DOIUrl":"https://doi.org/10.1145/2897518.2897648","url":null,"abstract":"Let G=(V,E) be an n-vertices m-edges directed graph. Let s∈ V be any designated source vertex. We address the problem of single source reachability (SSR) from s in presence of failures of vertices/edges. We show that for every k≥ 1, there is a subgraph H of G with at most 2k n edges that preserves the reachability from s even after the failure of any k edges. Formally, given a set F of k edges, a vertex u∈ V is reachable from s in G∖ F if and only if u is reachable from s in H∖ F. We call H a k-Fault Tolerant Reachability Subgraph (k-FTRS). We prove also a matching lower bound of Ω(2kn) for such subgraphs. Our results extend to vertex failures without any extra overhead. The general construction of k-FTRS is interesting from several different perspectives. From the Graph theory perspective it reveals a separation between SSR and single source shortest paths (SSSP) in directed graphs. More specifically, in the case of SSSP in weighted directed graphs, there is a lower bound of Ω(m) even for a single edge failure. In the case of unweighted graphs there is a lower bound of Ω(n3/2) edges, again, even for a single edge failure. There is also a matching upper bound but nothing is known for two or more failures in the directed graphs. From the Algorithms perspective it implies fault tolerant solutions to other interesting problems, namely, (i) verifying if the strong connectivity of a graph is preserved after k edge or vertex failures, (ii) computing a dominator tree of a graph after k-failures. From the perspective of Techniques it makes an interesting usage of the concept of farthest min-cut which was already introduced by Ford and Fulkerson in their pioneering work on flows and cuts. We show that there is a close relationship between the farthest min-cut and the k-FTRS. We believe that our new technique is of independent interest.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130727147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Constant-rate coding for multiparty interactive communication is impossible"
M. Braverman, K. Efremenko, R. Gelles, Bernhard Haeupler. STOC 2016. doi:10.1145/2897518.2897563

We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that a coding scheme must add in order to perform the computation successfully despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n parties (where all parties communicate in every round). Specifically, we exhibit a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability 1-o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complement our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with rate Ω(1) exists for arbitrary n-party noisy networks.

{"title":"Constant-rate coding for multiparty interactive communication is impossible","authors":"M. Braverman, K. Efremenko, R. Gelles, Bernhard Haeupler","doi":"10.1145/2897518.2897563","DOIUrl":"https://doi.org/10.1145/2897518.2897563","url":null,"abstract":"We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n-parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability of 1-o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complete our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116227153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Textbook non-malleable commitments"
Vipul Goyal, Omkant Pandey, Silas Richelson. STOC 2016. doi:10.1145/2897518.2897657

We present a new non-malleable commitment protocol. Our protocol has the following features:

- The protocol has only three rounds of interaction. Pass (TCC 2013) showed an impossibility result for a two-round non-malleable commitment scheme w.r.t. a black-box reduction to any "standard" intractability assumption. Thus, this resolves the round complexity of non-malleable commitment, at least w.r.t. black-box security reductions. Our construction is secure as per the standard notion of non-malleability w.r.t. commitment.
- Our protocol is truly efficient. In our basic protocol, the entire computation of the committer is dominated by just three invocations of a non-interactive statistically binding commitment scheme, while the receiver's computation (in the commitment stage) is limited to just sampling a random string. Unlike many previous works, we directly construct a protocol for large tags and hence avoid any non-malleability amplification steps.
- Our protocol makes black-box use of any non-interactive statistically binding commitment scheme. Such schemes, in turn, can be based on any one-to-one one-way function (or any one-way function at the cost of an extra initialization round). Previously, the best known black-box construction of non-malleable commitments required a larger (constant) number of rounds.
- Our construction is public-coin and makes use of only black-box simulation. Prior to our work, no public-coin constant-round non-malleable commitment schemes based on black-box simulation were known.

Our techniques depart significantly from those used previously to construct non-malleable commitment schemes. As a main technical tool, we rely on non-malleable codes in the split-state model. Our proofs of security are purely combinatorial in nature. In addition, we present a simple construction of constant-round non-malleable commitments from any one-way function. While this result is not new, its main feature is its simplicity compared to any previous construction of non-malleable commitments (in any number of rounds); we believe the construction is simple enough to be covered in a graduate-level course on cryptography. This construction also uses non-malleable codes in the split-state model in a black-box way.

{"title":"Textbook non-malleable commitments","authors":"Vipul Goyal, Omkant Pandey, Silas Richelson","doi":"10.1145/2897518.2897657","DOIUrl":"https://doi.org/10.1145/2897518.2897657","url":null,"abstract":"We present a new non-malleable commitment protocol. Our protocol has the following features: itemize The protocol has only three rounds of interaction. Pass (TCC 2013) showed an impossibility result for a two-round non-malleable commitment scheme w.r.t. a black-box reduction to any ``standard\" intractability reduction. Thus, this resolves the round complexity of non-malleable commitment at least w.r.t. black-box security reductions. Our construction is secure as per the standard notion of non-malleability w.r.t. commitment. Our protocol is truly efficient. In our basic protocol, the entire computation of the committer is dominated by just three invocations of a non-interactive statically binding commitment scheme, while, the receiver computation (in the commitment stage) is limited to just sampling a random string. Unlike many previous works, we directly construct a protocol for large tags and hence avoid any non-malleability amplification steps. Our protocol is based on a black-box use of any non-interactive statistically binding commitment scheme. Such schemes, in turn, can be based on any one-to-one one-way function (or any one-way function at the cost of an extra initialization round). Previously, the best known black-box construction of non-malleable commitments required a larger (constant) number of rounds. Our construction is public-coin and makes use of only black-box simulation. Prior to our work, no public-coin constant round non-malleable commitment schemes were known based on black-box simulation. itemize Our techniques depart significantly from the techniques used previously to construct non-malleable commitment schemes. As a main technical tool, we rely on non-malleable codes in the split state model. Our proofs of security are purely combinatorial in nature. In addition, we also present a simple construction of constant round non-malleable commitments from any one-way function. While this result is not new, the main feature is its simplicity compared to any previous construction of non-malleable commitments (in any number of rounds). We believe the construction is simple enough to be covered in a graduate level course on cryptography. The construction uses non-malleable codes in the split state model in a black-box way.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133543229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Contention resolution with log-logstar channel accesses"
M. A. Bender, T. Kopelowitz, S. Pettie, Maxwell Young. STOC 2016. doi:10.1145/2897518.2897655

For decades, randomized exponential backoff has provided a critical algorithmic building block in situations where multiple devices seek access to a shared resource. Surprisingly, despite this history, the performance of standard backoff is poor under worst-case scheduling of demands on the resource: (i) subconstant throughput can occur under plausible scenarios, and (ii) each of N devices requires Ω(log N) access attempts before obtaining the resource. In this paper, we address these shortcomings by offering a new backoff protocol for a shared communications channel that guarantees expected constant throughput with only O(log(log* N)) access attempts in expectation. Central to this result are new algorithms for approximate counting and leader election with the same performance guarantees.

{"title":"Contention resolution with log-logstar channel accesses","authors":"M. A. Bender, T. Kopelowitz, S. Pettie, Maxwell Young","doi":"10.1145/2897518.2897655","DOIUrl":"https://doi.org/10.1145/2897518.2897655","url":null,"abstract":"For decades, randomized exponential backoff has provided a critical algorithmic building block in situations where multiple devices seek access to a shared resource. Surprisingly, despite this history, the performance of standard backoff is poor under worst-case scheduling of demands on the resource: (i) subconstant throughput can occur under plausible scenarios, and (ii) each of N devices requires Omega(log N) access attempts before obtaining the resource. In this paper, we address these shortcomings by offering a new backoff protocol for a shared communications channel that guarantees expected constant throughput with only O(log(log* N)) access attempts in expectation. Central to this result are new algorithms for approximate counting and leader election with the same performance guarantees.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126036064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Searchable symmetric encryption: optimal locality in linear space via two-dimensional balanced allocations"
Gilad Asharov, M. Naor, G. Segev, Ido Shahaf. STOC 2016. doi:10.1145/2897518.2897562

Searchable symmetric encryption (SSE) enables a client to store a database on an untrusted server while supporting keyword search in a secure manner. Despite the rapidly increasing interest in SSE technology, experiments indicate that the performance of the known schemes scales badly to large databases. Somewhat surprisingly, this is not due to their usage of cryptographic tools, but rather due to their poor locality (where locality is defined as the number of non-contiguous memory locations the server accesses with each query). The only known schemes that do not suffer from poor locality suffer either from an impractical space overhead or from an impractical read efficiency (where read efficiency is defined as the ratio between the number of bits the server reads with each query and the actual size of the answer). We construct the first SSE schemes that simultaneously enjoy optimal locality, optimal space overhead, and nearly-optimal read efficiency. Specifically, for a database of size N, under the modest assumption that no keyword appears in more than N^(1 − 1/log log N) documents, we construct a scheme with read efficiency Õ(log log N). This essentially matches the lower bound of Cash and Tessaro (EUROCRYPT '14) showing that any SSE scheme must be sub-optimal in either its locality, its space overhead, or its read efficiency. In addition, even without making any assumptions on the structure of the database, we construct a scheme with read efficiency Õ(log N). Our schemes are obtained via a two-dimensional generalization of the classic balanced allocations ("balls and bins") problem that we put forward. We construct nearly-optimal two-dimensional balanced allocation schemes, and then combine their algorithmic structure with subtle cryptographic techniques.

{"title":"Searchable symmetric encryption: optimal locality in linear space via two-dimensional balanced allocations","authors":"Gilad Asharov, M. Naor, G. Segev, Ido Shahaf","doi":"10.1145/2897518.2897562","DOIUrl":"https://doi.org/10.1145/2897518.2897562","url":null,"abstract":"Searchable symmetric encryption (SSE) enables a client to store a database on an untrusted server while supporting keyword search in a secure manner. Despite the rapidly increasing interest in SSE technology, experiments indicate that the performance of the known schemes scales badly to large databases. Somewhat surprisingly, this is not due to their usage of cryptographic tools, but rather due to their poor locality (where locality is defined as the number of non-contiguous memory locations the server accesses with each query). The only known schemes that do not suffer from poor locality suffer either from an impractical space overhead or from an impractical read efficiency (where read efficiency is defined as the ratio between the number of bits the server reads with each query and the actual size of the answer). We construct the first SSE schemes that simultaneously enjoy optimal locality, optimal space overhead, and nearly-optimal read efficiency. Specifically, for a database of size N, under the modest assumption that no keyword appears in more than N1 − 1/loglogN documents, we construct a scheme with read efficiency Õ(loglogN). This essentially matches the lower bound of Cash and Tessaro (EUROCRYPT ’14) showing that any SSE scheme must be sub-optimal in either its locality, its space overhead, or its read efficiency. In addition, even without making any assumptions on the structure of the database, we construct a scheme with read efficiency Õ(logN). Our schemes are obtained via a two-dimensional generalization of the classic balanced allocations (“balls and bins”) problem that we put forward. We construct nearly-optimal two-dimensional balanced allocation schemes, and then combine their algorithmic structure with subtle cryptographic techniques.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121117171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Constant-round interactive proofs for delegating computation"
Omer Reingold, R. Rothblum, G. Rothblum. STOC 2016. doi:10.1145/2897518.2897652

The celebrated IP = PSPACE theorem of Lund et al. (J. ACM 1992) and Shamir (J. ACM 1992) allows an all-powerful but untrusted prover to convince a polynomial-time verifier of the validity of extremely complicated statements (as long as they can be evaluated using polynomial space). The interactive proof system designed for this purpose requires a polynomial number of communication rounds and an exponential-time (polynomial-space complete) prover. In this paper, we study the power of more efficient interactive proof systems. Our main result is that for every statement that can be evaluated in polynomial time and bounded-polynomial space there exists an interactive proof satisfying the following strict efficiency requirements: (1) the honest prover runs in polynomial time, (2) the verifier runs in almost linear time (and under some conditions even sublinear time), and (3) the interaction consists of only a constant number of communication rounds. Prior to this work, very little was known about the power of efficient, constant-round interactive proofs (rather than arguments). This result represents significant progress on the round complexity of interactive proofs (even if we ignore the running time of the honest prover), and on the expressive power of interactive proofs with a polynomial-time honest prover (even if we ignore the round complexity). It has several applications, and in particular can be used for verifiable delegation of computation. Our construction leverages several new notions of interactive proofs, which may be of independent interest. One of these notions is that of unambiguous interactive proofs, where the prover has a unique successful strategy. Another is that of probabilistically checkable interactive proofs (PCIPs), where the verifier reads only a few bits of the transcript in checking the proof (this can be viewed as an interactive extension of PCPs).

{"title":"Constant-round interactive proofs for delegating computation","authors":"Omer Reingold, R. Rothblum, G. Rothblum","doi":"10.1145/2897518.2897652","DOIUrl":"https://doi.org/10.1145/2897518.2897652","url":null,"abstract":"The celebrated IP=PSPACE Theorem of Lund et-al. (J.ACM 1992) and Shamir (J.ACM 1992), allows an all-powerful but untrusted prover to convince a polynomial-time verifier of the validity of extremely complicated statements (as long as they can be evaluated using polynomial space). The interactive proof system designed for this purpose requires a polynomial number of communication rounds and an exponential-time (polynomial-space complete) prover. In this paper, we study the power of more efficient interactive proof systems. Our main result is that for every statement that can be evaluated in polynomial time and bounded-polynomial space there exists an interactive proof that satisfies the following strict efficiency requirements: (1) the honest prover runs in polynomial time, (2) the verifier is almost linear time (and under some conditions even sub linear), and (3) the interaction consists of only a constant number of communication rounds. Prior to this work, very little was known about the power of efficient, constant-round interactive proofs (rather than arguments). This result represents significant progress on the round complexity of interactive proofs (even if we ignore the running time of the honest prover), and on the expressive power of interactive proofs with polynomial-time honest prover (even if we ignore the round complexity). This result has several applications, and in particular it can be used for verifiable delegation of computation. Our construction leverages several new notions of interactive proofs, which may be of independent interest. One of these notions is that of unambiguous interactive proofs where the prover has a unique successful strategy. Another notion is that of probabilistically checkable interactive proofs (PCIPs) where the verifier only reads a few bits of the transcript in checking the proof (this could be viewed as an interactive extension of PCPs).","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122131828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A duality based unified approach to Bayesian mechanism design"
Yang Cai, Nikhil R. Devanur, S. M. Weinberg. STOC 2016. doi:10.1145/2897518.2897645

We provide a unified view of many recent developments in Bayesian mechanism design, including the black-box reductions of Cai et al., simple auctions for additive buyers, and posted-price mechanisms for unit-demand buyers. Additionally, we show that viewing these three previously disjoint lines of work through the same lens leads to new developments as well. First, we provide a duality framework for Bayesian mechanism design, which naturally accommodates multiple agents and arbitrary objectives/feasibility constraints. Using this, we prove that either a posted-price mechanism or the VCG auction with per-bidder entry fees achieves a constant factor of the optimal Bayesian IC revenue whenever buyers are unit-demand or additive, unifying previous breakthroughs of Chawla et al. and Yao, and improving both approximation ratios (from 33.75 to 24 and from 69 to 8). Finally, we show that this view also leads to improved structural characterizations in the Cai et al. framework.

{"title":"A duality based unified approach to Bayesian mechanism design","authors":"Yang Cai, Nikhil R. Devanur, S. M. Weinberg","doi":"10.1145/2897518.2897645","DOIUrl":"https://doi.org/10.1145/2897518.2897645","url":null,"abstract":"We provide a unified view of many recent developments in Bayesian mechanism design, including the black-box reductions of Cai et. al., simple auctions for additive buyers, and posted-price mechanisms for unit-demand buyers. Additionally, we show that viewing these three previously disjoint lines of work through the same lens leads to new developments as well. First, we provide a duality framework for Bayesian mechanism design, which naturally accommodates multiple agents and arbitrary objectives/feasibility constraints. Using this, we prove that either a posted-price mechanism or the VCG auction with per-bidder entry fees achieves a constant-factor of the optimal Bayesian IC revenue whenever buyers are unit-demand or additive, unifying previous breakthroughs of Chawla et. al. and Yao, and improving both approximation ratios (from 33.75 to 24 and 69 to 8). Finally, we show that this view also leads to improved structural characterizations in the Cai et. al. framework.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130431173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Distributed (∆+1)-coloring in sublogarithmic rounds"
David G. Harris, Johannes Schneider, Hsin-Hao Su. STOC 2016. doi:10.1145/2897518.2897533

The (∆+1)-coloring problem is a fundamental symmetry-breaking problem in distributed computing. We give a new randomized algorithm for (∆+1)-coloring that runs in O(√log ∆) + 2^O(√log log n) rounds with probability 1 - 1/n^Ω(1) in a graph with n nodes and maximum degree ∆. This implies that the (∆+1)-coloring problem is easier than the maximal independent set problem and the maximal matching problem, due to their lower bounds by Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Our algorithm also extends to the list-coloring problem, where the palette of each node contains ∆+1 colors.

{"title":"Distributed (∆+1)-coloring in sublogarithmic rounds","authors":"David G. Harris, Johannes Schneider, Hsin-Hao Su","doi":"10.1145/2897518.2897533","DOIUrl":"https://doi.org/10.1145/2897518.2897533","url":null,"abstract":"The (∆+1)-coloring problem is a fundamental symmetry breaking problem in distributed computing. We give a new randomized coloring algorithm for (∆+1)-coloring running in O(√log ∆)+ 2^O(√log log n) rounds with probability 1-1/n^Ω(1) in a graph with n nodes and maximum degree ∆. This implies that the (∆+1)-coloring problem is easier than the maximal independent set problem and the maximal matching problem, due to their lower bounds by Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Our algorithm also extends to the list-coloring problem where the palette of each node contains ∆+1 colors.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121339977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Exponential separation of communication and external information"
Anat Ganor, Gillat Kol, R. Raz. STOC 2016. doi:10.1145/2897518.2897535

We show an exponential gap between communication complexity and external information complexity by analyzing a communication task suggested as a candidate by Braverman. Previously, only a separation between communication complexity and internal information complexity was known. More precisely, we obtain an explicit example of a search problem with external information complexity ≤ O(k) with respect to any input distribution, and distributional communication complexity ≥ 2^k with respect to some input distribution. In particular, this shows that a communication protocol cannot always be compressed to its external information. By a result of Braverman, our gap is the largest possible. Moreover, since the upper bound of O(k) on the external information complexity of the problem holds with respect to any input distribution, our result implies an exponential gap between communication complexity and information complexity (both internal and external) in the non-distributional setting of Braverman. In this setting, no gap was previously known, even for internal information complexity.
{"title":"Exponential separation of communication and external information","authors":"Anat Ganor, Gillat Kol, R. Raz","doi":"10.1145/2897518.2897535","DOIUrl":"https://doi.org/10.1145/2897518.2897535","url":null,"abstract":"We show an exponential gap between communication complexity and external information complexity, by analyzing a communication task suggested as a candidate by Braverman. Previously, only a separation of communication complexity and internal information complexity was known. More precisely, we obtain an explicit example of a search problem with external information complexity ≤ O(k), with respect to any input distribution, and distributional communication complexity ≥ 2k, with respect to some input distribution. In particular, this shows that a communication protocol cannot always be compressed to its external information. By a result of Braverman, our gap is the largest possible. Moreover, since the upper bound of O(k) on the external information complexity of the problem is obtained with respect to any input distribution, our result implies an exponential gap between communication complexity and information complexity (both internal and external) in the non-distributional setting of Braverman. In this setting, no gap was previously known, even for internal information complexity.","PeriodicalId":442965,"journal":{"name":"Proceedings of the forty-eighth annual ACM symposium on Theory of Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115548550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}