A probabilistic algorithm for k-SAT and constraint satisfaction problems
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814612
U. Schöning
We present a simple probabilistic algorithm for solving k-SAT and, more generally, constraint satisfaction problems (CSP). The algorithm follows a simple local search paradigm (S. Minton et al., 1992): randomly guess an initial assignment and then, guided by those clauses (constraints) that are not satisfied, try to find a satisfying assignment by successively choosing a random literal from such a clause and flipping the corresponding bit. If no satisfying assignment is found after O(n) steps, start over again. Our analysis shows that for any satisfiable k-CNF formula with n variables this process has to be repeated only t times, on average, to find a satisfying assignment, where t is within a polynomial factor of (2(1-1/k))^n. This is the fastest (and also the simplest) algorithm for 3-SAT known to date. We also consider the more general case of a CSP with n variables, each variable taking at most d values, and constraints of order l, and analyze the complexity of the corresponding (generalized) algorithm. It turns out that any such CSP can be solved with complexity at most (d·(1-1/l)+ε)^n.
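The algorithm itself fits in a few lines. Below is a minimal Python sketch of the restart-and-random-walk loop described above; the clause encoding (signed integers for literals) and the walk budget of 3n steps (the paper's choice for 3-SAT) are our own illustrative choices.

```python
import random

def schoening_ksat(clauses, n, max_tries=1000):
    """Random-walk k-SAT solver in the style described above.

    clauses: list of clauses, each a list of nonzero ints where +i / -i
             stands for variable i appearing positively / negated.
    n:       number of variables, numbered 1..n.
    Returns a satisfying assignment as a dict, or None if none was found.
    """
    for _ in range(max_tries):
        # Random initial assignment.
        assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
        # Walk for 3n steps (the paper's budget for 3-SAT), then restart.
        for _ in range(3 * n):
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not unsat:
                return assign
            # Pick a random unsatisfied clause, flip a random literal in it.
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
    return None

# (x1 v x2 v -x3) ^ (-x1 v x3 v x2) ^ (-x2 v -x3 v x1)
print(schoening_ksat([[1, 2, -3], [-1, 3, 2], [-2, -3, 1]], n=3))
```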
{"title":"A probabilistic algorithm for k-SAT and constraint satisfaction problems","authors":"U. Schöning","doi":"10.1109/SFFCS.1999.814612","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814612","url":null,"abstract":"We present a simple probabilistic algorithm for solving k-SAT and more generally, for solving constraint satisfaction problems (CSP). The algorithm follows a simple local search paradigm (S. Minton et al., 1992): randomly guess an initial assignment and then, guided by those clauses (constraints) that are not satisfied, by successively choosing a random literal from such a clause and flipping the corresponding bit, try to find a satisfying assignment. If no satisfying assignment is found after O(n) steps, start over again. Our analysis shows that for any satisfiable k-CNF-formula with n variables this process has to be repeated only t times, on the average, to find a satisfying assignment, where t is within a polynomial factor of (2(1-1/k))/sup n/. This is the fastest (and also the simplest) algorithm for 3-SAT known up to date. We consider also the more general case of a CSP with n variables, each variable taking at most d values, and constraints of order l, and analyze the complexity of the corresponding (generalized) algorith m. It turns out that any CSP can be solved with complexity at most (d/spl middot/(1-1/l)+/spl epsiv/)/sup n/.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127228932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Satisfiability of word equations with constants is in PSPACE
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814622
Wojciech Plandowski
We prove that the satisfiability problem for word equations is in PSPACE. The problem has a simple formulation: decide whether or not an input word equation has a solution. Its decidability was proved by G.S. Makanin (1977), whose decision procedure is one of the most complicated algorithms in the literature. We propose an alternative algorithm. The full version of the algorithm requires only the known upper bound on the index of periodicity of a minimal solution (A. Koscielski and L. Pacholski, Journal of the ACM, vol. 43, no. 4, pp. 670-684). Our algorithm is the first proved to work in polynomial space.
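Plandowski's procedure itself is intricate, but the problem statement is easy to make concrete. The brute-force checker below (our illustration, not the paper's algorithm) searches for a solution among all substitutions of words up to a fixed length:

```python
from itertools import product

def solve_word_equation(lhs, rhs, variables, alphabet="ab", max_len=3):
    """Brute-force search for a solution of a word equation.

    lhs / rhs: lists mixing constant letters and variable names,
               e.g. ["X", "a"] stands for the word Xa.
    Tries every assignment of words of length <= max_len to the variables.
    """
    def expand(side, assignment):
        return "".join(assignment.get(sym, sym) for sym in side)

    words = [""]
    for length in range(1, max_len + 1):
        words += ["".join(w) for w in product(alphabet, repeat=length)]
    for choice in product(words, repeat=len(variables)):
        assignment = dict(zip(variables, choice))
        if expand(lhs, assignment) == expand(rhs, assignment):
            return assignment
    return None

# Xa = aX is solvable; its solutions are exactly the words X in a*.
print(solve_word_equation(["X", "a"], ["a", "X"], variables=["X"]))
```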
{"title":"Satisfiability of word equations with constants is in PSPACE","authors":"Wojciech Plandowski","doi":"10.1109/SFFCS.1999.814622","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814622","url":null,"abstract":"We prove that the satisfiability problem for word equations is in PSPACE. The satisfiability problem for word equations has a simple formulation: find out whether or not an input word equation has a solution. The decidability of the problem was proved by G.S. Makanin (1977). His decision procedure is one of the most complicated algorithms existing in the literature. We propose an alternative algorithm. The full version of the algorithm requires only a proof of the upper bound for index of periodicity of a minimal solution (A. Koscielski and L. Pacholski, see Journal of ACM, vol.43, no.4. p.670-84). Our algorithm is the first one which is proved to work in polynomial space.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125367043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online scheduling to minimize average stretch
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814615
S. Muthukrishnan, R. Rajaraman, Anthony Shaheen, J. Gehrke
We consider the classical problem of online job scheduling on uniprocessor and multiprocessor machines. For a given job, we measure the quality of service provided by an algorithm by the stretch of the job, defined as the ratio of the time the job spends in the system to its processing time. For a given sequence of jobs, we measure the performance of an algorithm by the average stretch it achieves over all jobs in the sequence. The average stretch metric has been used to evaluate scheduling algorithms in many applications arising in databases, networks, and systems; however, no formal analysis of scheduling algorithms was previously known for this metric. The main contribution of the paper is to show that the shortest remaining processing time algorithm (SRPT) is O(1)-competitive with respect to average stretch for uniprocessors as well as multiprocessors. For uniprocessors, we prove that SRPT is 2-competitive; we also establish an essentially matching lower bound on its competitive ratio. For multiprocessors, we show that the competitive ratio of SRPT is at most 14. Furthermore, we establish constant-factor lower bounds on the competitive ratio of any online algorithm for both uniprocessors and multiprocessors.
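A short simulation makes both the stretch metric and the SRPT rule concrete. The uniprocessor sketch below is our own illustration; jobs are (release_time, processing_time) pairs:

```python
import heapq

def srpt_average_stretch(jobs):
    """Simulate preemptive SRPT on one machine; return the average stretch.

    jobs: list of (release_time, processing_time) pairs.
    Stretch of a job = (completion_time - release_time) / processing_time.
    """
    jobs = sorted(jobs)                       # by release time
    t, i, ready, stretches = 0.0, 0, [], []
    while i < len(jobs) or ready:
        if not ready:                         # idle until the next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            r, p = jobs[i]
            heapq.heappush(ready, (p, r, p))  # key: remaining processing time
            i += 1
        rem, r, p = heapq.heappop(ready)
        # Run the shortest job until it finishes or the next job arrives.
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, horizon - t)
        t += run
        if run < rem:                         # preempted by an arrival
            heapq.heappush(ready, (rem - run, r, p))
        else:                                 # job completed
            stretches.append((t - r) / p)
    return sum(stretches) / len(stretches)

print(srpt_average_stretch([(0, 10), (1, 2), (2, 1)]))  # ~1.43
```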
{"title":"Online scheduling to minimize average stretch","authors":"S. Muthukrishnan, R. Rajaraman, Anthony Shaheen, J. Gehrke","doi":"10.1109/SFFCS.1999.814615","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814615","url":null,"abstract":"We consider the classical problem of online job scheduling on uniprocessor and multiprocessor machines. For a given job, we measure the quality of service provided by an algorithm by the stretch of the job, which is defined as the ratio of the amount of time that the job spends in the system to the processing time of the job. For a given sequence of jobs, we measure the performance of an algorithm by the average stretch achieved by the algorithm over all the jobs in the sequence. The average stretch metric has been used to evaluate the performance of scheduling algorithms in many applications arising in databases, networks and systems; however no formal analysis of scheduling algorithms is known for the average stretch metric. The main contribution of the paper is to show that the shortest remaining processing time algorithm (SRPT) is O(l)-competitive with respect to average stretch for both uniprocessors as well as multiprocessors. For uniprocessors, we prove that SRPT is 2-competitive; we also establish an essentially matching lower bound on the competitive ratio of SRPT. For multiprocessors, we show that the competitive ratio of SRPT is at most 14. Furthermore, we establish constant-factor lower bounds on the competitive ratio of any online algorithm for both uniprocessors and multiprocessors.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115836986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximation schemes for minimizing average weighted completion time with release dates
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814574
F. Afrati, E. Bampis, C. Chekuri, David R Karger, Claire Mathieu, S. Khanna, I. Milis, M. Queyranne, M. Skutella, C. Stein, M. Sviridenko
We consider the problem of scheduling n jobs with release dates on m machines so as to minimize their average weighted completion time. We present the first known polynomial time approximation schemes for several variants of this problem. Our results include PTASs for the case of identical parallel machines and for a constant number of unrelated machines, both with and without preemption allowed. Our schemes are efficient: for all variants, the running time for a (1+ε)-approximation is of the form f(1/ε, m)·poly(n).
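For concreteness, the sketch below computes the objective, average weighted completion time, for a simple greedy baseline on m identical machines (best Smith ratio w/p among released jobs). It is only meant to make the objective and input model concrete; it is not the paper's approximation scheme:

```python
import heapq

def avg_weighted_completion(jobs, m):
    """Average weighted completion time, sum(w_j * C_j) / n, of a simple
    greedy schedule on m identical machines: whenever a machine is free,
    run the released job with the best Smith ratio w/p (heap key p/w).

    jobs: list of (release_time, processing_time, weight) triples.
    """
    jobs = sorted(jobs)                     # by release time
    machines = [0.0] * m                    # time each machine becomes free
    total, i, pending = 0.0, 0, []
    while i < len(jobs) or pending:
        t = min(machines)
        k = machines.index(t)
        if not pending and jobs[i][0] > t:  # idle until the next release
            t = jobs[i][0]
        while i < len(jobs) and jobs[i][0] <= t:
            r, p, w = jobs[i]
            heapq.heappush(pending, (p / w, p, w))
            i += 1
        ratio, p, w = heapq.heappop(pending)
        machines[k] = t + p                 # run the job to completion
        total += w * (t + p)
    return total / len(jobs)

print(avg_weighted_completion([(0, 3, 1), (0, 1, 2), (2, 2, 1)], m=1))  # 4.0
```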
{"title":"Approximation schemes for minimizing average weighted completion time with release dates","authors":"F. Afrati, E. Bampis, C. Chekuri, David R Karger, Claire Mathieu, S. Khanna, I. Milis, M. Queyranne, M. Skutella, C. Stein, M. Sviridenko","doi":"10.1109/SFFCS.1999.814574","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814574","url":null,"abstract":"We consider the problem of scheduling n jobs with release dates on m machines so as to minimize their average weighted completion time. We present the first known polynomial time approximation schemes for several variants of this problem. Our results include PTASs for the case of identical parallel machines and a constant number of unrelated machines with and without preemption allowed. Our schemes are efficient: for all variants the running time for /spl alpha/(1+/spl epsiv/) approximation is of the form f(1//spl epsiv/, m)poly(n).","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127562945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boosting and hard-core sets
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814638
Adam R. Klivans, R. Servedio
This paper connects two fundamental ideas from theoretical computer science: hard-core set construction, a type of hardness amplification from computational complexity, and boosting, a technique from computational learning theory. Using this connection we give fruitful applications of complexity-theoretic techniques to learning theory and vice versa. We show that the hard-core set construction of R. Impagliazzo (1995), which establishes the existence of distributions under which boolean functions are highly inapproximable, may be viewed as a boosting algorithm. Using alternate boosting methods we give an improved bound for hard-core set construction which matches known lower bounds from boosting and thus is optimal within this class of techniques. We then show how to apply Impagliazzo's techniques to give a new version of Jackson's celebrated Harmonic Sieve algorithm for learning DNF formulae under the uniform distribution using membership queries. Our new version has a significant asymptotic improvement in running time. Critical to our arguments is a careful analysis of the distributions employed in both boosting and hard-core set constructions.
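The distributions referred to in the last sentence are the example reweightings a booster maintains. A minimal AdaBoost-style loop (our illustration, with a toy threshold-stump weak learner) shows how weight concentrates on the hard examples, which is the hard-core-set side of the correspondence:

```python
import math

def boost(examples, labels, weak_learner, rounds=10):
    """Minimal AdaBoost-style loop. The point here is the sequence of
    distributions `dist`: weight piles up on examples the hypotheses so
    far get wrong, i.e. on the 'hard' inputs of the hard-core view.
    Labels are in {-1, +1}; weak hypotheses map x -> {-1, +1}.
    """
    n = len(examples)
    dist = [1.0 / n] * n
    hypotheses = []
    for _ in range(rounds):
        h = weak_learner(examples, labels, dist)
        err = sum(d for d, x, y in zip(dist, examples, labels) if h(x) != y)
        err = min(max(err, 1e-9), 1 - 1e-9)         # clamp away from 0, 1
        alpha = 0.5 * math.log((1 - err) / err)
        hypotheses.append((alpha, h))
        # Reweight: misclassified examples gain weight, then renormalize.
        dist = [d * math.exp(-alpha if h(x) == y else alpha)
                for d, x, y in zip(dist, examples, labels)]
        z = sum(dist)
        dist = [d / z for d in dist]
    return lambda x: 1 if sum(a * h(x) for a, h in hypotheses) >= 0 else -1

def stump_learner(xs, ys, dist):
    """Toy weak learner: the best threshold stump on 1-D data under dist."""
    best = None
    for thr in xs:
        for sign in (1, -1):
            h = (lambda t, s: lambda x: s if x >= t else -s)(thr, sign)
            err = sum(d for d, x, y in zip(dist, xs, ys) if h(x) != y)
            if best is None or err < best[0]:
                best = (err, h)
    return best[1]

xs, ys = [1, 2, 3, 4, 5, 6], [-1, -1, -1, 1, 1, 1]
f = boost(xs, ys, stump_learner)
print([f(x) for x in xs])                            # matches ys
```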
{"title":"Boosting and hard-core sets","authors":"Adam R. Klivans, R. Servedio","doi":"10.1109/SFFCS.1999.814638","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814638","url":null,"abstract":"This paper connects two fundamental ideas from theoretical computer science hard-core set construction, a type of hardness amplification from computational complexity, and boosting, a technique from computational learning theory. Using this connection we give fruitful applications of complexity-theoretic techniques to learning theory and vice versa. We show that the hard-core set construction of R. Impagliazzo (1995), which establishes the existence of distributions under which boolean functions are highly inapproximable, may be viewed as a boosting algorithm. Using alternate boosting methods we give an improved bound for hard-core set construction which matches known lower bounds from boosting and thus is optimal within this class of techniques. We then show how to apply techniques from R. Impagliazzo to give a new version of Jackson's celebrated Harmonic Sieve algorithm for learning DNF formulae under the uniform distribution using membership queries. Our new version has a significant asymptotic improvement in running time. Critical to our arguments is a careful analysis of the distributions which are employed in both boosting and hard-core set constructions.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133493833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximating fractional multicommodity flow independent of the number of commodities
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814573
L. Fleischer
We describe fully polynomial time approximation schemes for various multicommodity flow problems in graphs with m edges and n vertices. We present the first approximation scheme for maximum multicommodity flow that is independent of the number of commodities k; our algorithm improves upon the runtime of previous algorithms by this factor of k, running in O*(ε^-2 m^2) time. For maximum concurrent flow and minimum cost concurrent flow, we present algorithms that are faster than the currently known algorithms when the graph is sparse or the number of commodities k is large, i.e. k > m/n. Our algorithms build on the framework proposed by Garg and Konemann (1998). They are simple, deterministic, and, for the versions without costs, strongly polynomial. Our maximum multicommodity flow algorithm extends to an approximation scheme for maximum weighted multicommodity flow that is faster than those implied by previous algorithms by a factor of k/log W, where W is the maximum weight of a commodity.
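A compact sketch of the Garg-Konemann framework the paper builds on: maintain exponentially growing edge lengths, repeatedly route along a shortest path, and scale down at the end. The parameter choices below follow the standard analysis; this is an illustration of the framework, not the paper's faster algorithm:

```python
import math

def max_multicommodity_flow(edges, commodities, eps=0.1):
    """Garg-Konemann-style sketch: exponential edge lengths, route along
    shortest paths, scale down at the end to restore feasibility.

    edges: dict (u, v) -> capacity (directed); commodities: (source, sink)
    pairs. Returns a lower bound on the maximum total flow.
    """
    m = len(edges)
    delta = (m / (1 - eps)) ** (-1.0 / eps)     # standard initial length scale
    length = {e: delta / c for e, c in edges.items()}
    nodes = {v for e in edges for v in e}

    def shortest_path(s, t):
        # Bellman-Ford under the current lengths; returns an edge list.
        dist = {v: math.inf for v in nodes}
        prev, dist[s] = {}, 0.0
        for _ in range(len(nodes) - 1):
            for (u, v), l in length.items():
                if dist[u] + l < dist[v]:
                    dist[v], prev[v] = dist[u] + l, u
        if dist[t] == math.inf:
            return None
        path, node = [], t
        while node != s:
            path.append((prev[node], node))
            node = prev[node]
        return path[::-1]

    total = 0.0
    while True:
        paths = [p for p in (shortest_path(s, t) for s, t in commodities) if p]
        if not paths:
            break
        best = min(paths, key=lambda p: sum(length[e] for e in p))
        if sum(length[e] for e in best) >= 1:       # all paths 'long': stop
            break
        cap = min(edges[e] for e in best)           # bottleneck capacity
        total += cap
        for e in best:                              # multiplicative update
            length[e] *= 1 + eps * cap / edges[e]
    return total / math.log(1 / delta, 1 + eps)     # feasibility scaling

edges = {("s1", "a"): 1, ("a", "t1"): 1, ("s2", "a"): 1, ("a", "t2"): 1}
print(max_multicommodity_flow(edges, [("s1", "t1"), ("s2", "t2")]))  # ~2
```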
{"title":"Approximating fractional multicommodity flow independent of the number of commodities","authors":"L. Fleischer","doi":"10.1109/SFFCS.1999.814573","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814573","url":null,"abstract":"We describe fully polynomial time approximation schemes for various multicommodity flow problems in graphs with m edges and n vertices. We present the first approximation scheme for maximum multicommodity flow that is independent of the number of commodities k, and our algorithm improves upon the runtime of previous algorithms by this factor of k, running in O*(/spl epsiv//sup -2/ m/sup 2/) time. For maximum concurrent flow, and minimum cost concurrent flow, we present algorithms that are faster than the current known algorithms when the graph is sparse or the number of commodities k is large, i.e. k>m/n. Our algorithms build on the framework proposed by Garg and Konemann (1998). They are simple, deterministic, and for the versions without costs, they are strongly polynomial. Our maximum multicommodity flow algorithm extends to an approximation scheme for the maximum weighted multicommodity flow, which is faster than those implied by previous algorithms by a factor of k/log W where W is the maximum weight of a commodity.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114574186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Torpid mixing of some Monte Carlo Markov chain algorithms in statistical physics
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814594
C. Borgs, J. Chayes, A. Frieze, J. Kim, P. Tetali, Eric Vigoda, Van H. Vu
We study two widely used algorithms, Glauber dynamics and the Swendsen-Wang (1987) algorithm, on rectangular subsets of the hypercubic lattice Z^d. We prove that, under certain circumstances, the mixing time in a box of side length L with periodic boundary conditions can be exponential in L^(d-1). In other words, under these circumstances the mixing in these widely used algorithms is not rapid; instead it is torpid. The models we study are the independent set model and the q-state Potts model. For both models, we prove that Glauber dynamics is torpid in the region with phase coexistence. For the Potts model, we prove that Swendsen-Wang mixing is torpid at the phase transition point.
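Glauber dynamics itself is easy to state. Here is a sketch for the independent set (hard-core) model on an L x L grid with periodic boundary conditions, resampling one site at a time; the fugacity parameter lambda and the update rule are the standard ones, and the torpid-mixing question is how many such steps are needed before the chain approaches its stationary distribution:

```python
import random

def glauber_hardcore(L, lam=1.0, steps=100000, seed=0):
    """Glauber dynamics for the hard-core (independent set) model on an
    L x L torus: repeatedly pick a uniformly random site and resample its
    occupancy conditioned on its four neighbours.
    """
    rng = random.Random(seed)
    occ = [[0] * L for _ in range(L)]
    def neighbours(i, j):
        return [((i + 1) % L, j), ((i - 1) % L, j),
                (i, (j + 1) % L), (i, (j - 1) % L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        if any(occ[a][b] for a, b in neighbours(i, j)):
            occ[i][j] = 0                    # blocked: a neighbour is occupied
        else:
            # Unblocked: occupied with probability lambda / (1 + lambda).
            occ[i][j] = 1 if rng.random() < lam / (1 + lam) else 0
    return occ

config = glauber_hardcore(8, lam=2.0)
print(sum(map(sum, config)), "sites occupied")
```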
{"title":"Torpid mixing of some Monte Carlo Markov chain algorithms in statistical physics","authors":"C. Borgs, J. Chayes, A. Frieze, J. Kim, P. Tetali, Eric Vigoda, Van H. Vu","doi":"10.1109/SFFCS.1999.814594","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814594","url":null,"abstract":"Studies two widely used algorithms, Glauber dynamics and the Swendsen-Wang (1987) algorithm, on rectangular subsets of the hypercubic lattice Z/sup d/. We prove that, under certain circumstances, the mixing time in a box of side length L with periodic boundary conditions can be exponential in L/sup d-1/. In other words, under these circumstances, the mixing in these widely used algorithms is not rapid; instead it is torpid. The models we study are the independent set model and the q-state Potts model. For both models, we prove that Glauber dynamics is torpid in the region with phase coexistence. For the Potts model, we prove that the Swendsen-Wang mixing is torpid at the phase transition point.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124001403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic planar convex hull operations in near-logarithmic amortized time
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814581
Timothy M. Chan
We give a data structure that allows arbitrary insertions and deletions on a planar point set P and supports basic queries on the convex hull of P, such as membership and tangent-finding. Updates take O(log^(1+ε) n) amortized time and queries take O(log n) time each, where n is the maximum size of P and ε is any fixed positive constant. For some advanced queries such as bridge-finding, both of our bounds increase to O(log^(3/2) n). The only previous fully dynamic solution was by Overmars and van Leeuwen (1981) and required O(log^2 n) time per update.
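As an example of the kind of query involved, here is a static O(log n) membership test on a convex polygon (counter-clockwise vertex order assumed); the paper's contribution is answering such queries while P changes under insertions and deletions:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_hull(hull, p):
    """O(log n) membership test on a convex polygon given in counter-
    clockwise order: binary-search the fan of triangles around hull[0]
    for the wedge containing p, then do one orientation test.
    """
    n = len(hull)
    if n < 3:
        return p in hull
    if cross(hull[0], hull[1], p) < 0 or cross(hull[0], hull[-1], p) > 0:
        return False                     # outside the wedge at hull[0]
    lo, hi = 1, n - 1
    while hi - lo > 1:                   # p lies between rays hull[0]->hull[lo]
        mid = (lo + hi) // 2             # and hull[0]->hull[hi]
        if cross(hull[0], hull[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(hull[lo], hull[hi], p) >= 0

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(in_hull(square, (2, 2)), in_hull(square, (5, 1)))  # True False
```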
{"title":"Dynamic planar convex hull operations in near-logarithmic amortized time","authors":"Timothy M. Chan","doi":"10.1109/SFFCS.1999.814581","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814581","url":null,"abstract":"We give a data structure that allows arbitrary insertions and deletions on a planar point set P and supports basic queries on the convex hull of P, such as membership and tangent-finding. Updates take O(log/sup 1+/spl epsiv// n) amortized time and queries take O(log n) time each, where n is the maximum size of P and /spl epsiv/ is any fixed positive constant. For some advanced queries such as bridge-finding, both our bounds increase to O(log/sup 3/2/ n). The only previous fully dynamic solution was by Overmars and van Leeuwen (1981) and required O(log/sup 2/ n) time per update.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127432435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Error reduction for extractors
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814591
R. Raz, Omer Reingold, S. Vadhan
An extractor is a function which extracts (almost) truly random bits from a weak random source, using a small number of additional random bits as a catalyst. We present a general method to reduce the error of any extractor. Our method works particularly well in the case that the original extractor extracts up to a constant fraction of the source min-entropy and achieves a polynomially small error. In that case, we are able to reduce the error to (almost) any ε, using only O(log(1/ε)) additional truly random bits (while keeping the other parameters of the original extractor more or less the same). In other cases (e.g. when the original extractor extracts all the min-entropy or achieves only a constant error), our method is not optimal, but it is still quite efficient and leads to improved constructions of extractors. Using our method, we are able to improve almost all known extractors in the case where the required error is relatively small (e.g. smaller than polynomially small). In particular, we apply our method to the new extractors of L. Trevisan (1999) and R. Raz et al. (1999) to obtain improved constructions in almost all cases. Specifically, we obtain extractors that work for sources of any min-entropy on strings of length n which (a) extract any 1/n^γ fraction of the min-entropy using O(log n + log(1/ε)) truly random bits (for any γ > 0), (b) extract any constant fraction of the min-entropy using O(log^2 n + log(1/ε)) truly random bits, and (c) extract all the min-entropy using O(log^3 n + log n · log(1/ε)) truly random bits.
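For concreteness, here is a textbook seeded extractor, hashing with a Toeplitz-style matrix over GF(2), whose guarantee comes from the leftover hash lemma. It illustrates the interface the abstract refers to (a weak source plus a short truly random seed yields nearly uniform bits); it is not the paper's construction:

```python
import random

def toeplitz_extract(source_bits, seed_bits, m):
    """Hash an n-bit weak-source sample down to m nearly uniform bits
    using a seed of n + m - 1 truly random bits: output bit i is the
    GF(2) inner product of the source with the seed window [i, i + n).
    (Successive windows form a Toeplitz-style matrix, which gives a
    pairwise-independent hash family.)
    """
    n = len(source_bits)
    assert len(seed_bits) == n + m - 1
    return [sum(s & x for s, x in zip(seed_bits[i:i + n], source_bits)) % 2
            for i in range(m)]

n, m = 16, 4
weak = [random.getrandbits(1) | (i % 2) for i in range(n)]   # a biased source
seed = [random.getrandbits(1) for _ in range(n + m - 1)]
print(toeplitz_extract(weak, seed, m))
```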
{"title":"Error reduction for extractors","authors":"R. Raz, Omer Reingold, S. Vadhan","doi":"10.1109/SFFCS.1999.814591","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814591","url":null,"abstract":"An extractor is a function which extracts (almost) truly random bits from a weak random source, using a small number of additional random bits as a catalyst. We present a general method to reduce the error of any extractor. Our method works particularly well in the case that the original extractor extracts up to a constant function of the source min-entropy and achieves a polynomially small error. In that case, we are able to reduce the error to (almost) any /spl epsiv/, using only O(log(1//spl epsiv/)) additional truly random bits (while keeping the other parameters of the original extractor more or less the same). In other cases (e.g. when the original extractor extracts all the min-entropy or achieves only a constant error), our method is not optimal but it is still quite efficient and leads to improved constructions of extractors. Using our method, we are able to improve almost all known extractors in the case where the error required is relatively small (e.g. less than a polynomially small error). In particular, we apply our method to the new extractors of L. Trevisan (1999) and R. Raz et al. (1999) to obtain improved constructions in almost all cases. Specifically, we obtain extractors that work for sources of any min-entropy on strings of length n which (a) extract any 1/n/sup /spl gamma// fraction of the min-entropy using O[log n+log(1//spl epsiv/)] truly random bits (for any /spl gamma/>0), (b) extract any constant fraction of the min-entropy using O[log/sup 2/n+log(1//spl epsiv/)] truly random bits, and (c) extract all the min-entropy using O[log/sup 3/n+log n/spl middot/log(1//spl epsiv/)] truly random bits.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130065889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weak adversaries for the k-server problem
Pub Date: 1999-10-17 · DOI: 10.1109/SFFCS.1999.814616
E. Koutsoupias
We study the k-server problem when the offline algorithm has fewer than k servers. We give two upper bounds on the cost WFA(ρ) of the Work Function Algorithm. The first upper bound is k·OPT_h(ρ) + (h-1)·OPT_k(ρ), where OPT_m(ρ) denotes the optimal cost to service ρ with m servers. The second upper bound is 2h·OPT_h(ρ) - OPT_k(ρ) for h ≤ k. Both bounds imply that the Work Function Algorithm is (2k-1)-competitive. Perhaps more important is our technique, which seems promising for settling the k-server conjecture. The proofs are simple and intuitive and do not involve potential functions. We also apply the technique to give a simple condition for the Work Function Algorithm to be k-competitive; this condition results in a new proof that the k-server conjecture holds for k=2.
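The Work Function Algorithm can be tabulated exactly on small metrics. The sketch below maintains the work function over all k-subsets by dynamic programming and makes the WFA move rule explicit; it is exponential in the number of configurations, ignores configurations with repeated points, and is meant only to make the definitions concrete (the starting configuration points[:k] is our choice):

```python
from itertools import combinations, permutations

def wfa_kserver(points, d, k, requests):
    """Work Function Algorithm on a small finite metric. w[C] is the
    optimal cost of serving all requests so far and ending with servers
    on the k-subset C; the WFA moves to minimize work function plus
    movement cost.

    d(u, v): distance; requests: sequence of points to serve.
    Returns the list of server configurations the WFA moves through.
    """
    def matching(A, B):
        # Min-cost matching between two configurations, by brute force.
        return min(sum(d(a, b) for a, b in zip(A, perm))
                   for perm in permutations(B))

    start = tuple(points[:k])
    configs = [frozenset(c) for c in combinations(points, k)]
    w = {C: matching(start, tuple(C)) for C in configs}   # w_0
    A, history = frozenset(start), [frozenset(start)]
    for r in requests:
        new_w = {}
        for C in configs:                 # configs that already contain r
            if r in C:
                new_w[C] = min(w[(C - {r}) | {y}] + d(y, r)
                               for y in points if y == r or y not in C)
        for C in configs:                 # extend to the remaining configs
            if r not in C:
                new_w[C] = min(new_w[(C - {x}) | {r}] + d(r, x) for x in C)
        w = new_w
        # WFA move: replace the server x minimizing w(A - x + r) + d(x, r).
        x = min(A, key=lambda s: w[(A - {s}) | {r}] + d(s, r))
        A = (A - {x}) | {r}
        history.append(A)
    return history

dd = {("a", "b"): 1, ("b", "a"): 1, ("a", "c"): 2,
      ("c", "a"): 2, ("b", "c"): 1, ("c", "b"): 1}
d = lambda u, v: 0 if u == v else dd[(u, v)]
print(wfa_kserver(["a", "b", "c"], d, 2, ["c", "a", "c"]))
```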
{"title":"Weak adversaries for the k-server problem","authors":"E. Koutsoupias","doi":"10.1109/SFFCS.1999.814616","DOIUrl":"https://doi.org/10.1109/SFFCS.1999.814616","url":null,"abstract":"We study the k-server problem when the offline algorithm has fewer than k servers. We give two upper bounds of the cost WFA(/spl rho/) of the Work Function Algorithm. The first upper bound is kOPT/sub h/(/spl rho/)+(h-1)OPT/sub k/(/spl rho/), where OPT/sub m/(/spl rho/) denotes the optimal cost to service /spl rho/ by m servers. The second upper bound is 2hOPTh(/spl rho/)-OPT/sub k/(/spl rho/) for h/spl les/k. Both bounds imply that the Work Function Algorithm is (2k-1)-competitive. Perhaps more important is our technique which seems promising for settling the k-server conjecture. The proofs are simple and intuitive and they do not involve potential functions. We also apply the technique to give a simple condition for the Work Function Algorithm to be k-competitive; this condition results in a new proof that the k-server conjecture holds for k=2.","PeriodicalId":385047,"journal":{"name":"40th Annual Symposium on Foundations of Computer Science (Cat. No.99CB37039)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1999-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121703192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}