Deciding parity games in quasipolynomial time
Cristian S. Calude, Sanjay Jain, B. Khoussainov, Wei Li, F. Stephan
DOI: 10.1145/3055399.3055409

It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game, with n nodes and m distinct values (also known as colours or priorities), is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds: from runtime n^(O(√n)) to O(n^(log(m)+6)), and from an XP-algorithm with runtime O(n^(Θ(m))) for fixed parameter m to an FPT-algorithm with runtime O(n^5) + g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((m^m · n)^5); it is also shown that this bound cannot be improved to O((2^m · n)^c), for any c, unless FPT = W[1].
{"title":"Deciding parity games in quasipolynomial time","authors":"Cristian S. Calude, Sanjay Jain, B. Khoussainov, Wei Li, F. Stephan","doi":"10.1145/3055399.3055409","DOIUrl":"https://doi.org/10.1145/3055399.3055409","url":null,"abstract":"It is shown that the parity game can be solved in quasipolynomial time. The parameterised parity game - with n nodes and m distinct values (aka colours or priorities) - is proven to be in the class of fixed parameter tractable (FPT) problems when parameterised over m. Both results improve known bounds, from runtime nO(√n) to O(nlog(m)+6) and from an XP-algorithm with runtime O(nΘ(m)) for fixed parameter m to an FPT-algorithm with runtime O(n5)+g(m), for some function g depending on m only. As an application it is proven that coloured Muller games with n nodes and m colours can be decided in time O((mm · n)5); it is also shown that this bound cannot be improved to O((2m · n)c), for any c, unless FPT = W[1].","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84001999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explicit, almost optimal, epsilon-balanced codes
A. Ta-Shma
DOI: 10.1145/3055399.3055408

The question of finding an epsilon-biased set with close to optimal support size, or, equivalently, finding an explicit binary code with distance (1-ϵ)/2 and rate close to the Gilbert-Varshamov bound, has attracted a lot of attention in recent decades. In this paper we solve the problem almost optimally and show an explicit ϵ-biased set over k bits with support size O(k/ϵ^(2+o(1))). This improves upon all previous explicit constructions, which were of order k^2/ϵ^2, k/ϵ^3 or k^(5/4)/ϵ^(5/2). The result is close to the Gilbert-Varshamov bound, which is O(k/ϵ^2), and to the lower bound, which is Ω(k/(ϵ^2 log(1/ϵ))). The main technical tool we use is bias amplification with the s-wide replacement product. The sum of two independent samples from an ϵ-biased set is ϵ^2-biased. Rozenman and Wigderson showed how to amplify the bias more economically by choosing the two samples using an expander. Based on that, they suggested a recursive construction that achieves sample size O(k/ϵ^4). We show that amplification with a long random walk over the s-wide replacement product reduces the bias almost optimally.
{"title":"Explicit, almost optimal, epsilon-balanced codes","authors":"A. Ta-Shma","doi":"10.1145/3055399.3055408","DOIUrl":"https://doi.org/10.1145/3055399.3055408","url":null,"abstract":"The question of finding an epsilon-biased set with close to optimal support size, or, equivalently, finding an explicit binary code with distance 1-ϵ/2 and rate close to the Gilbert-Varshamov bound, attracted a lot of attention in recent decades. In this paper we solve the problem almost optimally and show an explicit ϵ-biased set over k bits with support size O(k/ϵ2+o(1)). This improves upon all previous explicit constructions which were in the order of k2/ϵ2, k/ϵ3 or k5/4/ϵ5/2. The result is close to the Gilbert-Varshamov bound which is O(k/ϵ2) and the lower bound which is Ω(k/ϵ2 log1/ϵ). The main technical tool we use is bias amplification with the s-wide replacement product. The sum of two independent samples from an ϵ-biased set is ϵ2 biased. Rozenman and Wigderson showed how to amplify the bias more economically by choosing two samples with an expander. Based on that they suggested a recursive construction that achieves sample size O(k/ϵ4). We show that amplification with a long random walk over the s-wide replacement product reduces the bias almost optimally.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89165291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast convergence of learning in games (invited talk)
Vasilis Syrgkanis
DOI: 10.1145/3055399.3084098

A plethora of recent work has analyzed properties of outcomes in games when each player employs a no-regret learning algorithm. Many algorithms achieve regret against the best fixed action in hindsight that decays at a rate of O(1/√T) when the game is played for T iterations. The latter rate is optimal in adversarial settings. However, in a game, a player's opponents are minimizing their own regret rather than maximizing the player's regret. Daskalakis et al. (2014) and Rakhlin and Sridharan (2013) showed that in two-player zero-sum games O(1/T) rates are achievable. In Syrgkanis et al. (2015), we show that O(1/T^(3/4)) rates are achievable in general multi-player games, and we also analyze convergence of the dynamics to approximately optimal social welfare, where we show a convergence rate of O(1/T). The latter result was subsequently generalized to a broader class of learning algorithms by Foster et al. (2016). This talk is based on joint work with Alekh Agarwal, Haipeng Luo and Robert E. Schapire.
{"title":"Fast convergence of learning in games (invited talk)","authors":"Vasilis Syrgkanis","doi":"10.1145/3055399.3084098","DOIUrl":"https://doi.org/10.1145/3055399.3084098","url":null,"abstract":"A plethora of recent work has analyzed properties of outcomes in games when each player employs a no-regret learning algorithm. Many algorithms achieve regret against the best fixed action in hindisght that decays at a rate of O(1/'T), when the game is played for T iterations. The latter rate is optimal in adversarial settings. However, in a game a player's opponents are minimizing their own regret, rather than maximizing the player's regret. (Daskalakis et al. 2014) and (Rakhlin and Sridharan 2013) showed that in two player zero-sum games O(1/T) rates are achievable. In (Syrgkanis et al. 2015), we show that O(1/T3/4) rates are achievable in general multi-player games and also analyze convergence of the dynamics to approximately optimal social welfare, where we show a convergence rate of O(1/T). The latter result was subsequently generalized to a broader class of learning algorithms by (Foster et al. 2016). This is based on joint work with Alekh Agarwal, Haipeng Luo and Robert E. Schapire.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"451 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75107012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A weighted linear matroid parity algorithm
S. Iwata, Yusuke Kobayashi
DOI: 10.1145/3055399.3055436

The matroid parity (or matroid matching) problem, introduced as a common generalization of the matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Lovász (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then, efficient algorithms have been developed for the linear matroid parity problem. In this paper, we present a combinatorial, deterministic, strongly polynomial algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using the Pfaffian and adopts a primal-dual approach with the aid of the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem.
{"title":"A weighted linear matroid parity algorithm","authors":"S. Iwata, Yusuke Kobayashi","doi":"10.1145/3055399.3055436","DOIUrl":"https://doi.org/10.1145/3055399.3055436","url":null,"abstract":"The matroid parity (or matroid matching) problem, introduced as a common generalization of matching and matroid intersection problems, is so general that it requires an exponential number of oracle calls. Lovász (1980) showed that this problem admits a min-max formula and a polynomial algorithm for linearly represented matroids. Since then efficient algorithms have been developed for the linear matroid parity problem. In this paper, we present a combinatorial, deterministic, strongly polynomial algorithm for the weighted linear matroid parity problem. The algorithm builds on a polynomial matrix formulation using Pfaffian and adopts a primal-dual approach with the aid of the augmenting path algorithm of Gabow and Stallmann (1986) for the unweighted problem.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"367 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76313854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-interactive delegation and batch NP verification from standard computational assumptions
Zvika Brakerski, Justin Holmgren, Y. Kalai
DOI: 10.1145/3055399.3055497

We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g., the worst-case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simply sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows proving statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and required either obfuscation-based assumptions or super-polynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP-statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.
{"title":"Non-interactive delegation and batch NP verification from standard computational assumptions","authors":"Zvika Brakerski, Justin Holmgren, Y. Kalai","doi":"10.1145/3055399.3055497","DOIUrl":"https://doi.org/10.1145/3055399.3055497","url":null,"abstract":"We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g. the worst case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements by simpling sending a proof to the verifier. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further allows to prove statements about computations of arbitrary RAM machines. Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and required either obfuscation-based assumptions or super-polynomial hardness assumptions. We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message argument for batch NP-statements. Specifically, we can simultaneously prove (with computational soundness) the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88705749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient massively parallel methods for dynamic programming
Sungjin Im, Benjamin Moseley, Xiaorui Sun
DOI: 10.1145/3055399.3055460

Modern science and engineering are driven by massively large data sets, and their advance relies heavily on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of these theoretical models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given these promising results, the area has turned its focus to discovering widely applicable algorithmic techniques for solving problems efficiently. In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing these properties. We showcase our framework on several core dynamic programming applications: Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine, where n is the input size and m is the number of machines available.
{"title":"Efficient massively parallel methods for dynamic programming","authors":"Sungjin Im, Benjamin Moseley, Xiaorui Sun","doi":"10.1145/3055399.3055460","DOIUrl":"https://doi.org/10.1145/3055399.3055460","url":null,"abstract":"Modern science and engineering is driven by massively large data sets and its advance heavily relies on massively parallel computing platforms such as Spark, MapReduce, and Hadoop. Theoretical models have been proposed to understand the power and limitations of such platforms. Recent study of developed theoretical models has led to the discovery of new algorithms that are fast and efficient in both theory and practice, thereby beginning to unlock their underlying power. Given recent promising results, the area has turned its focus on discovering widely applicable algorithmic techniques for solving problems efficiently. In this paper we make progress towards this goal by giving a principled framework for simulating sequential dynamic programs in the distributed setting. In particular, we identify two key properties, monotonicity and decomposability, which allow us to derive efficient distributed algorithms for problems possessing the properties. We showcase our framework by considering several core dynamic programming applications, Longest Increasing Subsequence, Optimal Binary Search Tree, and Weighted Interval Selection. For these problems, we derive algorithms yielding solutions that are arbitrarily close to the optimum, using O(1) rounds and Õ(n/m) memory on each machine where n is the input size and m is the number of machines available.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"48 1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78284331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient reduction from two-source to non-malleable extractors: achieving near-logarithmic min-entropy
Avraham Ben-Aroya, Dean Doron, A. Ta-Shma
DOI: 10.1145/3055399.3055423

The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors, this only gives a two-source extractor (or a Ramsey graph) for poly(log n) entropy, rather than the optimal O(log n). In this paper we modify the construction to overcome the above barrier. Using the currently best explicit non-malleable extractors, we get explicit bipartite Ramsey graphs for sets of size 2^k, for k = O(log n · log log n). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object: a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error reduction technique of Raz, Reingold and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006), which also work against extremely small tests.
{"title":"An efficient reduction from two-source to non-malleable extractors: achieving near-logarithmic min-entropy","authors":"Avraham Ben-Aroya, Dean Doron, A. Ta-Shma","doi":"10.1145/3055399.3055423","DOIUrl":"https://doi.org/10.1145/3055399.3055423","url":null,"abstract":"The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors only gives a two-source extractor (or a Ramsey graph) for poly(logn) entropy, rather than the optimal O(logn). In this paper we modify the construction to solve the above barrier. Using the currently best explicit non-malleable extractors we get an explicit bipartite Ramsey graphs for sets of size 2k, for k=O(logn loglogn). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object - a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error reduction technique of Raz, Reingold and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006) that also work against extremely small tests.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81323431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing tree pattern queries: why cutting is not enough (invited talk)
W. Martens
DOI: 10.1145/3055399.3079076

Tree pattern queries are a natural language for querying graph- and tree-structured data. A central question for understanding their optimization problem was whether they can be minimized by cutting away redundant parts. This question has been studied since the early 2000s and was recently resolved.
{"title":"Optimizing tree pattern queries: why cutting is not enough (invited talk)","authors":"W. Martens","doi":"10.1145/3055399.3079076","DOIUrl":"https://doi.org/10.1145/3055399.3079076","url":null,"abstract":"Tree pattern queries are a natural language for querying graph- and tree-structured data. A central question for understanding their optimization problem was whether they can be minimized by cutting away redundant parts. This question has been studied since the early 2000's and was recently resolved.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81349668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent trends in decentralized cryptocurrencies (invited talk)
Aviv Zohar
DOI: 10.1145/3055399.3079074

Following Bitcoin's introduction, decentralized cryptocurrencies began to emerge as a new application domain for computer science. Bitcoin's protocol has been researched and improved upon along many fronts: from its underlying incentives, through to its cryptographic primitives and its security. Many research questions and challenges still remain as cryptocurrencies and other financial systems that rely on similar principles gain wider adoption.
{"title":"Recent trends in decentralized cryptocurrencies (invited talk)","authors":"Aviv Zohar","doi":"10.1145/3055399.3079074","DOIUrl":"https://doi.org/10.1145/3055399.3079074","url":null,"abstract":"Following Bitcoin's introduction, decentralized cryptocurrencies began to emerge as a new application domain for computer science. Bitcoin's protocol has been researched and improved upon along many fronts: from its underlying incentives, through to its cryptographic primitives and its security. Many research questions and challenges still remain as cryptocurrencies and other financial systems that rely on similar principles gain wider adoption.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79102923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online service with delay
Y. Azar, Arun Ganesh, Rong Ge, Debmalya Panigrahi
DOI: 10.1145/3055399.3055475

In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of the distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, a tradeoff that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics, such as uniform and star metrics, that correspond to (weighted) paging problems.
{"title":"Online service with delay","authors":"Y. Azar, Arun Ganesh, Rong Ge, Debmalya Panigrahi","doi":"10.1145/3055399.3055475","DOIUrl":"https://doi.org/10.1145/3055399.3055475","url":null,"abstract":"In this paper, we introduce the online service with delay problem. In this problem, there are n points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of distance traveled by the server and the total delay (or a penalty function thereof) in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is to show a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We also generalize our results to k > 1 servers, and obtain stronger results for special metrics such as uniform and star metrics that correspond to (weighted) paging problems.","PeriodicalId":20615,"journal":{"name":"Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72839880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}