Learning mixtures of Gaussians
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814639
S. Dasgupta
Mixtures of Gaussians are among the most fundamental and widely used statistical models. Current techniques for learning such mixtures from data are local search heuristics with weak performance guarantees. We present the first provably correct algorithm for learning a mixture of Gaussians. This algorithm is very simple and returns the true centers of the Gaussians to within the precision specified by the user with high probability. It runs in time only linear in the dimension of the data and polynomial in the number of Gaussians.
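To make the learning task concrete: the data are i.i.d. draws from a mixture of k Gaussians, and the learner must output estimates of the k centers within the requested precision. The sketch below is an illustration added here, not Dasgupta's algorithm; it only generates data from the model (spherical Gaussians, uniform mixing weights) and states what a learner must recover.

```python
# Illustration of the learning task only, not the paper's algorithm: data are
# drawn from a mixture of k spherical Gaussians in d dimensions, and the goal
# is to recover the (unknown) centers to within a user-specified precision.
import random

def sample_mixture(centers, sigma, n):
    """Draw n points from a uniform-weight mixture of spherical Gaussians."""
    points = []
    for _ in range(n):
        c = random.choice(centers)                             # pick a component
        points.append([random.gauss(mu, sigma) for mu in c])   # add spherical noise
    return points

# Two well-separated centers in d = 50 dimensions (values chosen for the demo).
d = 50
centers = [[0.0] * d, [5.0] * d]
data = sample_mixture(centers, sigma=1.0, n=1000)
# A learning algorithm sees only `data` and must output estimates close to `centers`.
```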
{"title":"Learning mixtures of Gaussians","authors":"S. Dasgupta","doi":"10.1109/SFFCS.1999.814639","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814639","url":null,"abstract":"Mixtures of Gaussians are among the most fundamental and widely used statistical models. Current techniques for learning such mixtures from data are local search heuristics with weak performance guarantees. We present the first provably correct algorithm for learning a mixture of Gaussians. This algorithm is very simple and returns the true centers of the Gaussians to within the precision specified by the user with high probability. It runs in time only linear in the dimension of the data and polynomial in the number of Gaussians.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131717397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-malleable non-interactive zero knowledge and adaptive chosen-ciphertext security
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814628
A. Sahai
We introduce the notion of non-malleable non-interactive zero-knowledge (NIZK) proof systems. We show how to transform any ordinary NIZK proof system into one that has strong non-malleability properties. We then show that the elegant encryption scheme of Naor and Yung (1990) can be made secure against the strongest form of chosen-ciphertext attack by using a non-malleable NIZK proof instead of a standard NIZK proof. Our encryption scheme is simple to describe and works in the standard cryptographic model under general assumptions. The encryption scheme can be realized assuming the existence of trapdoor permutations.
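The Naor-Yung scheme referenced above encrypts the plaintext independently under two public keys and attaches a proof that both ciphertexts carry the same message; the paper's point is that making this proof a non-malleable NIZK yields adaptive chosen-ciphertext security. The schematic below only shows the shape of that paradigm: keygen, encrypt, and nizk_prove are insecure hypothetical stubs, not the paper's primitives.

```python
# Schematic of the Naor-Yung double-encryption paradigm only; keygen, encrypt
# and nizk_prove are insecure toy placeholders, not the paper's primitives.
import os

def keygen():
    k = os.urandom(16)
    return k, k                          # toy: public key == secret key (NOT secure)

def encrypt(pk, m, r):
    pad = bytes(x ^ y for x, y in zip(pk, r))
    return bytes(x ^ y for x, y in zip(m, pad))   # m assumed at most 16 bytes here

def nizk_prove(statement, witness):
    # Stand-in for a non-interactive zero-knowledge proof of the statement
    # "c1 and c2 encrypt the same plaintext"; a real proof reveals nothing
    # about the witness, this toy merely returns a tag.
    return ("proof", statement)

def ny_encrypt(pk1, pk2, m):
    r1, r2 = os.urandom(16), os.urandom(16)
    c1, c2 = encrypt(pk1, m, r1), encrypt(pk2, m, r2)
    # Making this proof *non-malleable* is what upgrades the scheme to
    # adaptive chosen-ciphertext security, per the abstract above.
    return c1, c2, nizk_prove((pk1, pk2, c1, c2), (m, r1, r2))

pk1, _ = keygen()
pk2, _ = keygen()
print(ny_encrypt(pk1, pk2, b"hello"))
```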
{"title":"Non-malleable non-interactive zero knowledge and adaptive chosen-ciphertext security","authors":"A. Sahai","doi":"10.1109/SFFCS.1999.814628","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814628","url":null,"abstract":"We introduce the notion of non-malleable non-interactive zero-knowledge (NIZK) proof systems. We show how to transform any ordinary NIZK proof system into one that has strong non-malleability properties. We then show that the elegant encryption scheme of Naor and Yung (1990) can be made secure against the strongest form of chosen-ciphertext attack by using a non-malleable NIZK proof instead of a standard NIZK proof. Our encryption scheme is simple to describe and works in the standard cryptographic model under, general assumptions. The encryption scheme can be realized assuming the existence of trapdoor permutations.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132455666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hardness of approximating the minimum distance of a linear code
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814620
I. Dumer, Daniele Micciancio, M. Sudan
We show that the minimum distance of a linear code (or equivalently, the weight of the lightest nonzero codeword) is not approximable to within any constant factor in random polynomial time (RP), unless NP equals RP. Under the stronger assumption that NP is not contained in RQP (random quasi-polynomial time), we show that the minimum distance is not approximable to within the factor 2^(log^(1−ε) n), for any ε > 0, where n denotes the block length of the code. Our results hold for codes over every finite field, including the special case of binary codes. In the process we show that the nearest codeword problem is hard to solve even under the promise that the number of errors is (a constant factor) smaller than the distance of the code. This is a particularly meaningful version of the nearest codeword problem. Our results strengthen (though using stronger assumptions) a previous result of A. Vardy (1997), who showed that the minimum distance is NP-hard to compute exactly. Our results are obtained by adapting proofs of analogous results for integer lattices due to M. Ajtai (1998) and D. Micciancio (1998). A critical component in the adaptation is our use of linear codes that perform better than random (linear) codes.
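Concretely, the quantity at issue is the minimum Hamming weight over all nonzero codewords. The brute-force computation below is an illustration added here, not from the paper; it makes the definition explicit for a binary code given by a generator matrix and shows why exact computation scales as 2^k in the code dimension.

```python
from itertools import product

def min_distance(G):
    """Minimum distance of the binary linear code with generator matrix G
    (a list of k rows, each a list of n bits), by exhaustive enumeration
    of all 2^k - 1 nonzero messages."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                                   # skip the zero codeword
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G)]                # msg * G over GF(2)
        best = min(best, sum(codeword))
    return best

# A [7,4] Hamming code: minimum distance 3.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(min_distance(G))  # -> 3
```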
{"title":"Hardness of approximating the minimum distance of a linear code","authors":"I. Dumer, Daniele Micciancio, M. Sudan","doi":"10.1109/SFFCS.1999.814620","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814620","url":null,"abstract":"We show that the minimum distance of a linear code (or equivalently, the weight of the lightest codeword) is not approximable to within any constant factor in random polynomial time (RP), unless NP equals RP. Under the stronger assumption that NP is not contained in RQP (random quasi-polynomial time), we show that the minimum distance is not approximable to within the factor 2/sup log(1-/spl epsiv/)n/, for any /spl epsiv/>0, where n denotes the block length of the code. Our results hold for codes over every finite field, including the special case of binary codes. In the process we show that the nearest codeword problem is hard to solve even under the promise that the number of errors is (a constant factor) smaller than the distance of the code. This is a particularly meaningful version of the nearest codeword problem. Our results strengthen (though using stronger assumptions) a previous result of A. Vardy (1997) who showed that the minimum distance is NP-hard to compute exactly. Our results are obtained by adapting proofs of analogous results for integer lattices due to M. Ajtai (1998) and D. Micciancio (1998). A critical component in the adaptation is our use of linear codes that perform better than random (linear) codes.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128642004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hardness of approximating Σ_2^p minimization problems
Pub Date: 1999-10-01 · DOI: 10.1109/SFFCS.1999.814619
C. Umans
We show that a number of natural optimization problems in the second level of the Polynomial Hierarchy are Σ_2^p-hard to approximate to within n^ε factors, for specific ε > 0. The main technical tool is the use of explicit dispersers to achieve strong, direct inapproximability results. The problems we consider include Succinct Set Cover, Minimum Equivalent DNF, and other problems relating to DNF minimization. Under a slightly stronger complexity assumption, our method gives optimal n^(1−ε) inapproximability results for some of these problems. We also prove inapproximability of a variant of an NP optimization problem, Monotone Minimum Satisfying Assignment, to within an n^ε factor using the same technique.
{"title":"Hardness of approximating /spl Sigma//sub 2//sup p/ minimization problems","authors":"C. Umans","doi":"10.1109/SFFCS.1999.814619","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814619","url":null,"abstract":"We show that a number of natural optimization problems in the second level of the Polynomial Hierarchy are /spl Sigma//sub 2//sup p/-hard to approximate to within n/sup /spl epsiv// factors, for specific /spl epsiv/>0. The main technical tool is the use of explicit dispersers to achieve strong, direct inapproximability results. The problems we consider include Succinct Set Cover, Minimum Equivalent DNF, and other problems relating to DNF minimization. Under a slightly stronger complexity assumption, our method gives optimal n/sup 1-/spl epsiv// inapproximability results for some of these problems. We also prove inapproximability of a variant of an NP optimization problem, Monotone Minimum Satisfying Assignment, to within an n/sup /spl epsiv// factor using the same technique.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128031785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Setting parameters by example
Pub Date: 1999-07-02 · DOI: 10.1137/S0097539700370084
D. Eppstein
We introduce a class of "inverse parametric optimization" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other "optimal subgraph" problems, and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming.
{"title":"Setting parameters by example","authors":"D. Eppstein","doi":"10.1137/S0097539700370084","DOIUrl":"https://doi.org/10.1137/S0097539700370084","url":null,"abstract":"We introduce a class of \"inverse parametric optimization\" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other \"optimal subgraph\" problems, and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133567964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal lower bounds for quantum automata and random access codes
Pub Date: 1999-04-27 · DOI: 10.1109/SFFCS.1999.814608
A. Nayak
Consider the finite regular language L_n = {w0 : w ∈ {0,1}*, |w| ≤ n}. A. Ambainis et al. (1999) showed that while this language is accepted by a deterministic finite automaton of size O(n), any one-way quantum finite automaton (QFA) for it has size 2^Ω(n / log n). This was based on the fact that the evolution of a QFA is required to be reversible. When arbitrary intermediate measurements are allowed, this intuition breaks down. Nonetheless, we show a 2^Ω(n) lower bound for such QFA for L_n, thus also improving the previous bound. The improved bound is obtained from simple entropy arguments based on A.S. Holevo's (1973) theorem. This method also allows us to obtain an asymptotically optimal (1 − H(p))n bound (where H is the binary entropy function) for the dense quantum codes (random access codes) introduced by A. Ambainis et al. We then turn to Holevo's theorem, and show that in typical situations it may be replaced by a tighter and more transparent in-probability bound.
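Read concretely, the (1 − H(p))n bound says that encoding n classical bits into m qubits so that each bit can be recovered with probability p requires m ≥ (1 − H(p))n. The quick numeric check below is an illustration added here, purely to show the bound's strength at a few values of p.

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Lower bound on the number of qubits m needed to encode n bits so that each
# bit is recoverable with probability p: m >= (1 - H(p)) * n.
n = 1000
for p in (0.75, 0.9, 0.99):
    print(p, (1 - binary_entropy(p)) * n)
# p = 0.75 -> about 189 qubits; p = 0.9 -> about 531; p = 0.99 -> about 919.
```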
{"title":"Optimal lower bounds for quantum automata and random access codes","authors":"A. Nayak","doi":"10.1109/SFFCS.1999.814608","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814608","url":null,"abstract":"Consider the finite regular language L/sub n/={w0|w/spl isin/{0,1}*,|w|/spl les/n}. A. Ambainis et al. (1999) showed that while this language is accepted by a deterministic finite automaton of size O(n), any one-way quantum finite automaton (QFA) for it has size 2/sup /spl Omega/(n/logn)/. This was based on the fact that the evolution of a QFA is required to be reversible. When arbitrary intermediate measurements are allowed, this intuition breaks down. Nonetheless, we show a 2/sup /spl Omega/(n)/ lower bound for such QFA for L/sub n/, thus also improving the previous bound. The improved bound is obtained from simple entropy arguments based on A.S. Holevo's (1973) theorem. This method also allows us to obtain an asymptotically optimal (1-H(p))n bound for the dense quantum codes (random access codes) introduced by A. Ambainis et al. We then turn to Holevo's theorem, and show that in typical situations, it may be replaced by a tighter and more transparent in-probability bound.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115703775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bounds for small-error and zero-error quantum algorithms
Pub Date: 1999-04-26 · DOI: 10.1109/SFFCS.1999.814607
H. Buhrman, R. Cleve, R. D. Wolf, Christof Zalka
We present a number of results related to quantum algorithms with small error probability and quantum algorithms that are zero-error. First, we give a tight analysis of the trade-offs between the number of queries of quantum search algorithms, their error probability, the size of the search space, and the number of solutions in this space. Using this, we deduce new lower and upper bounds for quantum versions of amplification problems. Next, we establish nearly optimal quantum-classical separations for the query complexity of monotone functions in the zero-error model (where our quantum zero-error model is defined so as to be robust when the quantum gates are noisy). Also, we present a communication complexity problem related to a total function for which there is a quantum-classical communication complexity gap in the zero-error model. Finally, we prove separations for monotone graph properties in the zero-error and other error models which imply that the evasiveness conjecture for such properties does not hold for quantum computers.
{"title":"Bounds for small-error and zero-error quantum algorithms","authors":"H. Buhrman, R. Cleve, R. D. Wolf, Christof Zalka","doi":"10.1109/SFFCS.1999.814607","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814607","url":null,"abstract":"We present a number of results related to quantum algorithms with small error probability and quantum algorithms that are zero-error. First, we give a tight analysis of the trade-offs between the number of queries of quantum search algorithms, their error probability, the size of the search space, and the number of solutions in this space. Using this, we deduce new lower and upper bounds for quantum versions of amplification problems. Next, we establish nearly optimal quantum-classical separations for the query complexity of monotone functions in the zero-error model (where our quantum zero-error model is defined so as to be robust when the quantum gates are noisy). Also, we present a communication complexity problem related to a total function for which there is a quantum-classical communication complexity gap in the zero-error model. Finally, we prove separations for monotone graph properties in the zero-error and other error models which imply that the evasiveness conjecture for such properties does not hold for quantum computers.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114970708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A better lower bound for quantum algorithms searching an ordered list
Pub Date: 1999-02-13 · DOI: 10.1109/SFFCS.1999.814606
A. Ambainis
We show that any quantum algorithm searching an ordered list of n elements needs to examine at least (log_2 n)/12 − O(1) of them. Classically, log_2 n queries are both necessary and sufficient. This shows that quantum algorithms can achieve only a constant speedup for this problem.
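The classical log_2 n upper bound is just binary search; the snippet below is an illustration of that classical baseline added here (it has nothing to do with the quantum lower bound itself) and counts how many list elements binary search examines.

```python
from math import ceil, log2

def binary_search_queries(sorted_list, target):
    """Return (index or None, number of elements examined) for binary search."""
    lo, hi, queries = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        queries += 1                        # one query = examining one element
        if sorted_list[mid] == target:
            return mid, queries
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, queries

n = 1 << 20                                 # about a million elements
data = list(range(n))
_, q = binary_search_queries(data, n - 1)
print(q, ceil(log2(n)))                     # both close to 20: ~log2(n) queries
```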
{"title":"A better lower bound for quantum algorithms searching an ordered list","authors":"A. Ambainis","doi":"10.1109/SFFCS.1999.814606","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814606","url":null,"abstract":"We show that any quantum algorithm searching an ordered list of n elements needs to examine at least (log,n)/12-O(1) of them. Classically, log/sub 2/ n queries are both necessary and sufficient. This shows that quantum algorithms can achieve only a constant speedup for this problem.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128121947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approximate L^1-difference algorithm for massive data streams
DOI: 10.1109/SFFCS.1999.814623
J. Feigenbaum, Sampath Kannan, M. Strauss, Mahesh Viswanathan
We give a space-efficient, one-pass algorithm for approximating the L^1 difference Σ_i |a_i − b_i| between two functions, when the function values a_i and b_i are given as data streams, and their order is chosen by an adversary. Our main technical innovation is a method of constructing families {V_j} of limited-independence random variables that are range summable, by which we mean that Σ_{j=0}^{c−1} V_j(s) is computable in time polylog(c), for all seeds s. These random variable families may be of interest outside our current application domain, i.e., massive data streams generated by communication networks. Our L^1-difference algorithm can be viewed as a "sketching" algorithm, in the sense of A. Broder et al. (1998), and it performs better than that of Broder et al. when used to approximate the symmetric difference of two sets with small symmetric difference.
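For reference, the quantity being approximated is Σ_i |a_i − b_i|, where the updates to a and b arrive interleaved in adversarial order. The baseline below is an illustration added here, not the paper's algorithm: it computes the difference exactly but stores every index, which is precisely the linear-space cost the polylog-space sketch avoids.

```python
from collections import defaultdict

def exact_l1_difference(stream):
    """Exact L1 difference between functions a and b given as a single stream
    of updates (which, index, value); uses Theta(n) space, unlike the paper's
    polylog-space sketch, which this baseline only serves to define."""
    a = defaultdict(int)
    b = defaultdict(int)
    for which, i, v in stream:
        (a if which == "a" else b)[i] += v
    indices = set(a) | set(b)
    return sum(abs(a[i] - b[i]) for i in indices)

# Adversarially ordered updates to a and b; the answer is |3-1| + |0-4| + |2-2| = 6.
stream = [("b", 1, 4), ("a", 0, 3), ("a", 2, 2), ("b", 0, 1), ("b", 2, 2)]
print(exact_l1_difference(stream))  # -> 6
```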
How asymmetry helps load balancing
DOI: 10.1109/SFFCS.1999.814585
B. Vocking
This paper deals with balls-and-bins processes related to randomized load balancing, dynamic resource allocation, and hashing. Suppose n balls have to be assigned to n bins, where each ball has to be placed without knowledge of the distribution of previously placed balls. The goal is to achieve an allocation that is as even as possible, so that no bin gets many more balls than the average. A well-known and good solution for this problem is to choose d possible locations for each ball at random, to look into each of these bins, and to place the ball into the least full among them. This class of algorithms has been investigated intensively in the past, but almost all previous analyses assume that the d locations for each ball are chosen uniformly and independently at random from the set of all bins. We investigate whether a non-uniform and possibly dependent choice of the d locations for a ball can improve the load balancing. Three types of selection are distinguished: 1) uniform and independent, 2) non-uniform and independent, and 3) non-uniform and dependent. Our first result shows that choosing the locations in a non-uniform way (type 2) results in better load balancing than choosing the locations uniformly (type 1). Surprisingly, this better load balancing is obtained by an algorithm called "Always-Go-Left", which creates an asymmetric assignment of the balls to the bins. Our second result is a lower bound on the smallest possible maximum load that can be achieved by any allocation algorithm of type 1, 2, or 3.
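A minimal sketch of the asymmetric d-choice rule follows, under the assumption that the n bins are split into d equal groups, each ball draws one candidate bin per group, and ties among least-loaded candidates are broken toward the leftmost group. This is a reading of the "Always-Go-Left" rule for illustration only, not the paper's pseudocode.

```python
import random

def always_go_left(n, m, d, rng=random):
    """Place m balls into n bins with asymmetric d-choice: split the bins into
    d groups, pick one candidate uniformly from each group, put the ball into
    a least-loaded candidate, and break ties in favor of the leftmost group."""
    assert n % d == 0
    group = n // d
    load = [0] * n
    for _ in range(m):
        candidates = [j * group + rng.randrange(group) for j in range(d)]
        # Ties on load are broken by position in `candidates`, i.e. leftmost group wins.
        best = min(candidates, key=lambda b: (load[b], candidates.index(b)))
        load[best] += 1
    return load

load = always_go_left(n=1024, m=1024, d=2)
print(max(load))   # maximum load; compare against plain uniform two-choice placement
```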
{"title":"How asymmetry helps load balancing","authors":"B. Vocking","doi":"10.1109/SFFCS.1999.814585","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814585","url":null,"abstract":"This paper deals with balls and bins processes related to randomized load balancing, dynamic resource allocation and hashing. Suppose n balls have to be assigned to n bins, where each ball has to be placed without knowledge about the distribution of previously placed balls. The goal is to achieve an allocation that is as even as possible so that no bin gets much more balls than the average. A well known and good solution for this problem is to choose d possible locations for each ball at random, to look into each of these bins, and to place the ball into the least full among these bins. This class of algorithms has been investigated intensively in the past but almost all previous analyses assume that the d locations for each ball are chosen uniform and independently at random from the set of all bins. We investigate whether a non-uniform and possibly dependent choice of the d locations for a ball can improve the load balancing. Three types of selections are distinguished: 1) uniform and independent 2) non-uniform and independent 3) non-uniform and dependent. Our first result shows that choosing the locations in a non-uniform way (type 2) results in a better load balancing than choosing the locations uniformly (type 1). Surprising, this smooth load balancing is obtained by an algorithm called \"Always-Go-Left\" which creates an asymmetric assignment of the balls to the bins. Our second result is a lower bound on the smallest-possible maximum load that can be achieved by any allocation algorithm of type 1, 2, or 3.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124929401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}