Title: Learnability beyond AC^0
Pub Date: 2002-08-07
DOI: 10.1109/CCC.2002.1004335
J. C. Jackson, Adam R. Klivans, R. Servedio
We give an algorithm for learning a more expressive circuit class than the class AC^0 considered by Linial et al. (1993) and Kharitonov (1993). The new algorithm learns constant-depth AND/OR/NOT circuits augmented with a limited number of majority gates. Our main positive result for these circuits is stated informally in the paper.
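The Linial et al. algorithm that this work extends learns constant-depth circuits by estimating all low-degree Fourier coefficients of the target from uniform random examples. Below is a minimal sketch of that estimation subroutine, assuming a +/-1-valued target; the function names, the toy circuit (a majority of three ANDs), and the sample counts are illustrative, not taken from the paper.

```python
import itertools
import random

def estimate_fourier_coefficient(f, n, S, num_samples=20000):
    """Estimate f^(S) = E_x[f(x) * chi_S(x)] over uniform x in {0,1}^n,
    where chi_S(x) = (-1)^(sum of x[i] for i in S) and f is +/-1-valued."""
    total = 0
    for _ in range(num_samples):
        x = [random.randint(0, 1) for _ in range(n)]
        total += f(x) * (-1) ** sum(x[i] for i in S)
    return total / num_samples

def low_degree_spectrum(f, n, d, num_samples=20000):
    """Estimate every Fourier coefficient of degree at most d."""
    return {S: estimate_fourier_coefficient(f, n, S, num_samples)
            for k in range(d + 1)
            for S in itertools.combinations(range(n), k)}

# Toy target: a majority of three fan-in-2 AND gates, output mapped to +/-1.
def target(x):
    gates = (x[0] & x[1]) + (x[2] & x[3]) + (x[4] & x[5])
    return 1 if gates >= 2 else -1

spectrum = low_degree_spectrum(target, n=6, d=2)
print(sorted(spectrum.items(), key=lambda kv: -abs(kv[1]))[:5])
```

The learner's hypothesis is then the sign of the estimated low-degree polynomial; handling the added majority gates is where the paper goes beyond this subroutine.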
{"title":"Learnability beyond AC/sup 0/","authors":"J. C. Jackson, Adam R. Klivans, R. Servedio","doi":"10.1109/CCC.2002.1004335","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004335","url":null,"abstract":"We give an algorithm for learning a more expressive circuit class than the class AC/sup 0/ considered by Linial et al. (1993) and Kharitonov (1993). The new algorithm learns constant-depth AND/OR/NOT circuits augmented with (a limited number of) majority gates. Our main positive result for these circuits is stated informally.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124853335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Pseudorandomness and average-case complexity via uniform reductions
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004348
L. Trevisan, S. Vadhan
Impagliazzo and Wigderson (1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the nonuniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. We obtain an optimal worst-case to average-case connection for EXP: if EXP ⊈ BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t'(n) of the inputs by BPTIME(t'(n)) algorithms, for t' = t^(Ω(1)). We exhibit a PSPACE-complete downward self-reducible and random self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson (1998), which used a #P-complete problem with these properties. We argue that the results in Impagliazzo and Wigderson (1998) and in this paper cannot be proved via "black-box" uniform reductions.
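Restating the worst-case to average-case connection from the abstract in one line (with t and t' as above, t' = t^(Ω(1))):

```latex
\[
\mathrm{EXP} \not\subseteq \mathrm{BPTIME}(t(n))
\;\Longrightarrow\;
\exists L \in \mathrm{EXP}\;\; \forall A \in \mathrm{BPTIME}(t'(n)) :\;
\Pr_{x \in \{0,1\}^n}\bigl[A(x) = L(x)\bigr] < \frac{1}{2} + \frac{1}{t'(n)} .
\]
```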
{"title":"Pseudorandomness and average-case complexity via uniform reductions","authors":"L. Trevisan, S. Vadhan","doi":"10.1109/CCC.2002.1004348","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004348","url":null,"abstract":"Impagliazzo and Wigderson (1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP = BPP). Unlike results in the nonuniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. We obtain an optimal worst-case to average-case connection for EXP: if EXP BPTIME(( )), EXP has problems that are cannot be solved on a fraction 1/2 1/'( ) of the inputs by BPTIME('( )) algorithms, for ' = /sup 1/. We exhibit a PSPACE-complete downward self-reducible and random self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson (1998), which used a a P-complete problem with these properties. We argue that the results in Impagliazzo and Wigderson (1998) and in this paper cannot be proved via \"black-box\" uniform reductions.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"79 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121203287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Arthur and Merlin in a quantum world
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004351
John Watrous
Arthur does not have a lot of time to spend performing difficult computations. He's recently obtained a quantum computer, but often it seems not to help - he only has a few quantum algorithms, and Merlin maintains that there aren't any other interesting ones, so Merlin is forced to convince the untrusting Arthur of the truth of various facts. However, Arthur and Merlin have a new resource at their disposal: quantum information. Some relationships among complexity classes defined by quantum Arthur-Merlin games and other commonly studied complexity classes are known, but many open questions remain. In this paper, I discuss quantum Arthur-Merlin games in detail, with an emphasis on open problems.
{"title":"Arthur and Merlin in a quantum world","authors":"John Watrous","doi":"10.1109/CCC.2002.1004351","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004351","url":null,"abstract":"Arthur does not have a lot of time to spend performing difficult computations. He's recently obtained a quantum computer, but often it seems not to help - he only has a few quantum algorithms, and Merlin maintains that there aren't any other interesting ones, so Merlin is forced to convince the untrusting Arthur of the truth of various facts. However, Arthur and Merlin have a new resource at their disposal: quantum information. Some relationships among complexity classes defined by quantum Arthur-Merlin games and other commonly studied complexity classes are known, but many open questions remain. In this paper, I discuss quantum Arthur-Merlin games in detail, with an emphasis on open problems.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116368608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Universal arguments and their applications
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004355
B. Barak, Oded Goldreich
We put forward a new type of computationally-sound proof system, called universal arguments, which is related to, but different from, both CS-proofs (as defined by Micali, 2000) and arguments (as defined by Brassard et al., 1986). In particular, we adopt the instance-based prover-efficiency paradigm of CS-proofs, but follow the computational-soundness condition of argument systems (i.e., we consider only cheating strategies that are implementable by polynomial-size circuits). We show that universal arguments can be constructed based on standard intractability assumptions that refer to polynomial-size circuits (rather than assumptions referring to subexponential-size circuits, as used in the construction of CS-proofs). As an application of universal arguments, we weaken the intractability assumptions used in the recent non-black-box zero-knowledge arguments of Barak (2001). Specifically, we only utilize intractability assumptions that refer to polynomial-size circuits (rather than assumptions referring to circuits of some "nice" super-polynomial size).
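The computational-soundness condition adopted from argument systems can be written as follows; this is the standard formulation (with μ a negligible function), not quoted from the paper:

```latex
\[
\forall P^{*} \text{ of polynomial size},\;\; \forall x \notin L :\quad
\Pr\bigl[\langle P^{*}, V \rangle(x) = \mathrm{accept}\bigr] \le \mu(|x|) .
\]
```

Universal arguments keep the prover efficiency of CS-proofs while requiring soundness only in this polynomial-size form.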
{"title":"Universal arguments and their applications","authors":"B. Barak, Oded Goldreich","doi":"10.1109/CCC.2002.1004355","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004355","url":null,"abstract":"We put forward a new type of computationally-sound proof systems, called universal-arguments, which are related but different from both CS-proofs (as defined by Micali, 2000) and arguments (as defined by Brassard et al., 1986). In particular, we adopt the instance-based prover-efficiency paradigm of CS-proofs, but follow the computational-soundness condition of argument systems (i.e., we consider only cheating strategies that are implementable by polynomial-size circuits). We show that universal-arguments can be constructed based on standard intractability assumptions that refer to polynomial-size circuits (rather than assumptions referring to subexponential-size circuits as used in the construction of CS-proofs). As an application of universal-arguments, we weaken the intractability assumptions used in the recent non-black-box zero-knowledge arguments of Barak (2001). Specifically, we only utilize intractability assumptions that refer to polynomial-size circuits (rather than assumptions referring to circuits of some \"nice\" super-polynomial size).","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128977681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Streaming computation of combinatorial objects
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004352
Ziv Bar-Yossef, L. Trevisan, Omer Reingold, Ronen Shaltiel
We prove (mostly tight) space lower bounds for "streaming" (or "on-line") computations of four fundamental combinatorial objects: error-correcting codes, universal hash functions, extractors, and dispersers. Streaming computations for these objects are motivated algorithmically by massive data set applications and complexity-theoretically by pseudorandomness and derandomization for space-bounded probabilistic algorithms. Our results reveal a surprising separation of extractors and dispersers in terms of the space required to compute them in the streaming model. While online extractors require space linear in their output length, we construct dispersers that are computable online with exponentially less space. We also present several explicit constructions of online extractors that match the lower bound. We show that online universal and almost-universal hash functions require space linear in their output length (this bound was known previously only for "pure" universal hash functions). Finally, we show that both online encoding and online decoding of error-correcting codes require space proportional to the product of the length of the encoded message and the code's relative minimum distance. Block encoding trivially matches the lower bounds for constant rate codes.
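For reference, the standard definitions of the two objects that the space bounds separate; this formulation is textbook-standard, not quoted from the paper. A (k, ε)-extractor Ext: {0,1}^n × {0,1}^d → {0,1}^m must make its output close to uniform, while a (k, ε)-disperser Dis need only cover most of its range:

```latex
\[
\mathrm{Ext}(X, U_d) \ \text{is $\varepsilon$-close to } U_m
\quad \text{for every } X \text{ with } H_\infty(X) \ge k,
\]
\[
\bigl|\operatorname{Supp}\bigl(\mathrm{Dis}(X, U_d)\bigr)\bigr| \ge (1-\varepsilon)\, 2^m
\quad \text{for every } X \text{ with } H_\infty(X) \ge k .
\]
```

The weaker covering requirement is what leaves room for the exponential space savings that the online disperser construction achieves.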
{"title":"Streaming computation of combinatorial objects","authors":"Ziv Bar-Yossef, L. Trevisan, Omer Reingold, Ronen Shaltiel","doi":"10.1109/CCC.2002.1004352","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004352","url":null,"abstract":"We prove (mostly tight) space lower bounds for \"streaming\" (or \"on-line\") computations of four fundamental combinatorial objects: error-correcting codes, universal hash functions, extractors, and dispersers. Streaming computations for these objects are motivated algorithmically by massive data set applications and complexity-theoretically by pseudorandomness and derandomization for space-bounded probabilistic algorithms. Our results reveal a surprising separation of extractors and dispersers in terms of the space required to compute them in the streaming model. While online extractors require space linear in their output length, we construct dispersers that are computable online with exponentially less space. We also present several explicit constructions of online extractors that match the lower bound. We show that online universal and almost-universal hash functions require space linear in their output length (this bound was known previously only for \"pure\" universal hash functions). Finally, we show that both online encoding and online decoding of error-correcting codes require space proportional to the product of the length of the encoded message and the code's relative minimum distance. Block encoding trivially matches the lower bounds for constant rate codes.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122490624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Functions that have read-twice constant width branching programs are not necessarily testable
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004342
E. Fischer, I. Newman, J. Sgall
We construct a property of 0/1-strings that has a representation by a collection of width-3, read-twice oblivious branching programs, but for which any 2-sided ε-testing algorithm must make at least Ω(n^(1/10)) queries, for some fixed, small enough ε. This shows that Newman's result (2000) cannot be generalized to read-k-times functions for k > 1.
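Unpacking the statement, using the standard definition of 2-sided ε-testing (not quoted from the paper): a tester T gets oracle access to w ∈ {0,1}^n and must satisfy

```latex
\[
w \in P \;\Rightarrow\; \Pr\bigl[T^{w} \text{ accepts}\bigr] \ge \tfrac{2}{3},
\qquad
\operatorname{dist}(w, P) \ge \varepsilon n \;\Rightarrow\; \Pr\bigl[T^{w} \text{ rejects}\bigr] \ge \tfrac{2}{3},
\]
```

where dist is Hamming distance; the result says that any such T for the constructed property must make Ω(n^(1/10)) queries.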
{"title":"Functions that have read-twice constant width branching programs are not necessarily testable","authors":"E. Fischer, I. Newman, J. Sgall","doi":"10.1109/CCC.2002.1004342","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004342","url":null,"abstract":"We construct a property on 0/1-strings that has a representation by a collection of width 3, read-twice oblivious branching programs, but for which any 2-sided /spl epsi/-testing algorithm must make at least /spl Omega/(n/sup 1/10/) many queries for some fixed small enough /spl epsi/. This shows that Newman's result (2000) cannot be generalized to read-k-times functions for k > 1.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126253336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: On the complexity of integer multiplication in branching programs with multiple tests and in read-once branching programs with limited nondeterminism
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004343
Philipp Woelfel
Branching programs (BPs) are a well-established computation and representation model for Boolean functions. Although exponential lower bounds for restricted BPs such as read-once branching programs (BP1s) have been known for a long time, proving lower bounds for important selected functions is sometimes difficult. The complexity of fundamental functions such as integer multiplication in different BP models is of particular interest. In (Bollig and Woelfel, 2001), the first strongly exponential lower bound of Ω(2^(n/4)) was proven for the complexity of integer multiplication in the deterministic BP1 model. Here, we consider two well-studied BP models which generalize BP1s by allowing a limited amount of nondeterminism and multiple variable tests, respectively. More precisely, we prove a lower bound of Ω(2^(n/(7k))) for the complexity of integer multiplication in the (∨,k)-BP model. As a corollary, we obtain that integer multiplication cannot be represented in polynomial size by nondeterministic BP1s if the number of nondeterministic nodes is bounded by log n - log log n - ω(1). Furthermore, we show that any (1,+k)-BP representing integer multiplication has size Ω(2^⌊n/(48(k+1))⌋). This is not polynomial for k = o(n/log n).
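To fix the model being discussed, here is a minimal evaluator for a deterministic branching program, with a read-once example; the node encoding and the example are illustrative, not from the paper.

```python
# Each inner node tests one variable and branches on its value; sinks hold
# the output. In a BP1 (read-once BP), every path tests each variable at most
# once; a (1,+k)-BP relaxes this by allowing up to k repeated tests per path.

def evaluate_bp(nodes, start, x):
    """nodes: dict id -> ('test', var, succ0, succ1) or ('sink', value)."""
    node = nodes[start]
    while node[0] == 'test':
        _, var, succ0, succ1 = node
        node = nodes[succ1 if x[var] else succ0]
    return node[1]

# Read-once BP computing x0 XOR x1.
xor_bp = {
    0: ('test', 0, 1, 2),
    1: ('test', 1, 3, 4),   # x0 = 0: output x1
    2: ('test', 1, 4, 3),   # x0 = 1: output NOT x1
    3: ('sink', 0),
    4: ('sink', 1),
}
assert [evaluate_bp(xor_bp, 0, x)
        for x in ([0, 0], [0, 1], [1, 0], [1, 1])] == [0, 1, 1, 0]
```

The lower bounds say that no program of this kind, even with the stated amounts of nondeterminism or repeated tests, can compute integer multiplication in polynomial size.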
Title: The inapproximability of lattice and coding problems with preprocessing
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004338
U. Feige, Daniele Micciancio
We prove that the closest vector problem with preprocessing (CVPP) is NP-hard to approximate within any factor less than √(5/3). More specifically, we show that there exists a reduction from an NP-hard problem to the approximate closest vector problem such that the lattice depends only on the size of the original problem, and the specific instance is encoded solely in the target vector. It follows that there are lattices for which the closest vector problem cannot be approximated within factors γ < √(5/3) in polynomial time, no matter how the lattice is represented, unless NP is equal to P (or NP is contained in P/poly, in the case of nonuniform sequences of lattices). The result easily extends to any ℓ_p norm, for p ≥ 1, showing that CVPP in the ℓ_p norm is hard to approximate within any factor γ < (5/3)^(1/p). As an intermediate step, we establish analogous results for the nearest codeword problem with preprocessing (NCPP), proving that for any finite field GF(q), NCPP over GF(q) is NP-hard to approximate within any factor less than 5/3.
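For concreteness, the underlying approximation problem (standard formulation, not quoted from the paper): given a basis B of a lattice L(B) = {Bz : z ∈ Z^k} and a target vector t, a γ-approximation algorithm must return a lattice vector v with

```latex
\[
\|v - t\| \;\le\; \gamma \cdot \operatorname{dist}\bigl(t, \mathcal{L}(B)\bigr),
\qquad
\operatorname{dist}\bigl(t, \mathcal{L}(B)\bigr) = \min_{w \in \mathcal{L}(B)} \|w - t\| .
\]
```

In the preprocessing variant, the lattice is known in advance and may be preprocessed for an unbounded amount of time; only the target t counts as the input. This is what the "no matter how the lattice is represented" conclusion captures.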
{"title":"The inapproximability of lattice and coding problems with preprocessing","authors":"U. Feige, Daniele Micciancio","doi":"10.1109/CCC.2002.1004338","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004338","url":null,"abstract":"We prove that the closest vector problem with preprocessing (CVPP) is NP-hard to approximate within any factor less than /spl radic/5/3. More specifically, we show that there exists a reduction from an NP-hard problem to the approximate closest vector problem such that the lattice depends only on the size of the original problem, and the specific instance is encoded solely, in the target vector. It follows that there are lattices for which the closest vector problem cannot be approximated within factors /spl gamma/ < /spl radic/5/3 in polynomial time, no matter how the lattice is represented, unless NP is equal to P (or NP is contained in P/poly, in case of nonuniform sequences of lattices). The result easily extends to any lp norm, for p /spl ges/ 1, showing that CVPP in the lp norm is hard to approximate within any factor /spl gamma/ < /sup p//spl radic/5/3. As an intermediate step, we establish analogous results for the nearest codeword problem with preprocessing (NCPP), proving that for any finite field GF(q), NCPP over GF(q) is NP-hard to approximate within any factor less than 5/3.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129661840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: The correlation between parity and quadratic polynomials mod 3
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004341
Frederic Green
We prove exponentially small upper bounds on the correlation between parity and quadratic polynomials mod 3. One corollary of this is that in order to compute parity, circuits consisting of a threshold gate at the top, mod 3 gates in the middle, and AND gates of fan-in two at the inputs must be of size 2^(Ω(n)). This is the first result of this type for general mod subcircuits with ANDs of fan-in greater than 1. This yields an exponential improvement over a recent result of Alon and Beigel (2001). The proof uses a novel inductive estimate of the relevant exponential sums introduced by Cai et al. (1996). The exponential sum bounds are tight.
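The exponential sums in question have the following shape (following the Cai et al. formulation; the normalization shown is illustrative): for a quadratic polynomial p over Z_3, the correlation of p with parity is

```latex
\[
C_n(p) \;=\; \Bigl|\, \mathbb{E}_{x \in \{0,1\}^n}
\bigl[ (-1)^{x_1 + \cdots + x_n}\, \omega^{p(x)} \bigr] \Bigr| ,
\qquad \omega = e^{2\pi i / 3},
\]
```

and the main theorem bounds C_n(p) by 2^(-Ω(n)) for every such p.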
{"title":"The correlation between parity and quadratic polynomials mod 3","authors":"Frederic Green","doi":"10.1109/CCC.2002.1004341","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004341","url":null,"abstract":"We prove exponentially small upper bounds on the correlation between parity and quadratic polynomials mod 3. One corollary of this is that in order to compute parity, circuits consisting of a threshold gate at the top, mod 3 gates in the middle, and AND gates of fan-in two at the inputs must be of size 2/sup /spl Omega/(n)/. This is the first result of this type for general mod subcircuits with ANDs of fan-in greater than 1. This yields an exponential improvement over a recent result of Alon and Beigel (2001). The proof uses a novel inductive estimate of the relevant exponential sums introduced by Cai et al. (1996). The exponential sum bounds are tight.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128167917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Information theory methods in communication complexity
Pub Date: 2002-05-21
DOI: 10.1109/CCC.2002.1004344
Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, D. Sivakumar
We use tools and techniques from information theory to study communication complexity problems in the one-way and simultaneous communication models. Our results include: (1) a tight characterization of multi-party one-way communication complexity for product distributions in terms of VC-dimension and shatter coefficients; (2) an equivalence of multi-party one-way and simultaneous communication models for product distributions; (3) a suite of lower bounds for specific functions in the simultaneous communication model, most notably an optimal lower bound for the multi-party set disjointness problem of Alon et al. (1999) and for the generalized addressing function problem of Babai et al. (1996) for arbitrary groups. Methodologically, our main contribution is rendering communication complexity problems in the framework of information theory. This allows us access to the powerful calculus of information theory and the use of fundamental principles such as Fano's inequality and the maximum likelihood estimate principle.
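One of the principles invoked, for reference: Fano's inequality bounds the conditional entropy of X given an observation Y in terms of the error probability of any estimator of X from Y (standard statement, with h the binary entropy function):

```latex
\[
H(X \mid Y) \;\le\; h(P_e) + P_e \log\bigl(|\mathcal{X}| - 1\bigr),
\qquad
P_e = \Pr\bigl[\hat{X}(Y) \ne X\bigr],
\quad
h(p) = -p \log p - (1-p)\log(1-p) .
\]
```

In communication lower bounds, taking Y to be a low-information protocol transcript turns this into a lower bound on the error of any predictor of the input X.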
{"title":"Information theory methods in communication complexity","authors":"Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, D. Sivakumar","doi":"10.1109/CCC.2002.1004344","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004344","url":null,"abstract":"We use tools and techniques from information theory to study communication complexity problems in the one-way and simultaneous communication models. Our results include: (1) a tight characterization of multi-party one-way communication complexity for product distributions in terms of VC-dimension and shatter coefficients; (2) an equivalence of multi-party one-way and simultaneous communication models for product distributions; (3) a suite of lower bounds for specific functions in the simultaneous communication model, most notably an optimal lower bound for the multi-party set disjointness problem of Alon et al. (1999) and for the generalized addressing function problem of Babai et al. (1996) for arbitrary groups. Methodologically, our main contribution is rendering communication complexity problems in the framework of information theory. This allows us access to the powerful calculus of information theory and the use of fundamental principles such as Fano's inequality and the maximum likelihood estimate principle.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116990138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}