Competitive algorithms for the weighted server problem
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253459
A. Fiat, Moty Ricklin
The authors deal with a generalization of the k-server problem in which the servers are unequal. In the weighted server model each server is assigned a positive weight, and the cost of moving a server equals the product of the distance traversed and the server's weight. A weighted k-server algorithm is called competitive if its competitive ratio depends only upon the number of servers, i.e., it is independent of the weights assigned to the servers and of the number of points in the metric space. For the uniform metric space, they give super-exponentially competitive algorithms for any set of weights. If the servers have one of two possible weights, they give deterministic exponentially competitive algorithms and randomized polynomially competitive algorithms; both algorithms use the MIN operator. The problem of storage management for RAM and E²PROM memories can be modeled as a weighted server problem with two weights on the uniform metric space.
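As a concrete illustration of the cost model, here is a minimal Python sketch assuming a uniform metric (all pairwise distances equal 1). The "always move the lightest server" rule is a naive baseline for illustration only, not the authors' competitive algorithm, and the weights and request sequence are hypothetical.

```python
# Illustrative sketch of the weighted-server cost model on a uniform metric
# (every pair of distinct points is at distance 1).  The "move the lightest
# server" policy below is a naive baseline, not the algorithm of the paper.

def serve_requests(weights, positions, requests):
    """Serve each request with the lightest server, unless a server is already there.

    weights   -- positive weight per server
    positions -- current point occupied by each server
    requests  -- sequence of requested points
    Returns the total movement cost (weight times distance, distance = 1 per move).
    """
    total_cost = 0.0
    for r in requests:
        if r in positions:                   # some server already covers the request
            continue
        i = min(range(len(weights)), key=lambda j: weights[j])
        positions[i] = r                     # move the lightest server to the request
        total_cost += weights[i] * 1.0       # uniform metric: every move has distance 1
    return total_cost

if __name__ == "__main__":
    print(serve_requests(weights=[1.0, 10.0], positions=[0, 1], requests=[2, 1, 3, 0]))
```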
{"title":"Competitive algorithms for the weighted server problem","authors":"A. Fiat, Moty Ricklin","doi":"10.1109/ISTCS.1993.253459","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253459","url":null,"abstract":"The authors deal with a generalization of the k-server problem, in which the servers are unequal. In the weighted server model each of the servers is assigned a positive weight. The cost associated with moving a server equals the product of the distance traversed and the server weight. A weighted k-server algorithm is called competitive if the competitive ratio depends only upon the number of servers. (i.e., the competitive ratio is independent of the weights associated with the servers and the number of points in the metric space). For the uniform metric space, they give super exponential competitive algorithms for any set of weights. If the servers have one of two possible weights, they give deterministic exponential competitive algorithms and randomized polynomial competitive algorithms. They use the MIN operator for both algorithms. One can model the problem of storage management for RAM and E/sup 2/PROM type memories as a weighted server problem with two weights on the uniform metric space.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123421993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal speedup of Las Vegas algorithms
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253477
M. Luby, A. Sinclair, David Zuckerman
Let A be a Las Vegas algorithm, i.e., a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. The authors consider the problem of minimizing the expected time required to obtain an answer from A using strategies that simulate A as follows: run A for a fixed amount of time t_1, then run A independently for a fixed amount of time t_2, and so on; the simulation stops if A completes its execution during any of the runs. Let S=(t_1, t_2, ...) be a strategy, and let l_A = inf_S T(A,S), where T(A,S) is the expected running time of the simulation of A under strategy S. The authors describe a simple universal strategy S^univ with the property that, for any algorithm A, T(A, S^univ) = O(l_A log l_A). Furthermore, they show that this is the best performance that can be achieved, up to a constant factor, by any universal strategy.
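A short Python sketch of the restart simulation described above. The specific universal sequence used (1, 1, 2, 1, 1, 2, 4, ..., commonly known as Luby's sequence) is the one proposed in the full paper; the Las Vegas running-time distribution in the usage example is a hypothetical stand-in.

```python
import random

def luby(i):
    """i-th term (1-indexed) of the universal restart sequence 1,1,2,1,1,2,4,1,1,2,..."""
    k = 1
    while (1 << k) - 1 < i:                  # find k with 2^(k-1) <= i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)

def run_with_restarts(las_vegas_step_count, quantum=1, max_runs=10_000):
    """Simulate the restart strategy: the i-th run gets luby(i)*quantum steps,
    each run uses fresh independent randomness, and we stop as soon as a run
    finishes within its budget.  Returns the total number of steps spent."""
    total = 0
    for i in range(1, max_runs + 1):
        budget = luby(i) * quantum
        steps_needed = las_vegas_step_count()    # fresh independent execution of A
        if steps_needed <= budget:
            return total + steps_needed          # A finished within this run's budget
        total += budget                          # budget exhausted; restart A
    raise RuntimeError("did not finish within max_runs restarts")

# Hypothetical running-time distribution: usually 1000 steps, occasionally 1 step.
example = lambda: 1 if random.random() < 0.05 else 1000
print(run_with_restarts(example))
```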
{"title":"Optimal speedup of Las Vegas algorithms","authors":"M. Luby, A. Sinclair, David Zuckerman","doi":"10.1109/ISTCS.1993.253477","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253477","url":null,"abstract":"Let A be a Las Vegas algorithm, i.e., A is a randomized algorithm that always produces the correct answer when its stops but whose running time is a random variable. The authors consider the problem of minimizing the expected time required to obtain an answer from A using strategies which simulate A as follows: run A for a fixed amount of time t/sub 1/, then run A independent for a fixed amount of time t/sub 2/, etc. The simulation stops if A completes its execution during any of the runs. Let S=(t/sub 1/, t/sub 2/,. . .) be a strategy, and let l/sub A/=inf/sub S/T(A,S), where T(A,S) is the expected value of the running time of the simulation of A under strategy S. The authors describe a simple universal strategy S/sup univ/, with the property that, for any algorithm A, T(A,S/sup univ/)=O(l/sub A/log(l/sub A/)). Furthermore, they show that this is the best performance that can be achieved, up to a constant factor, by any universal strategy.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114774852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A well-characterized approximation problem
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253463
J. Håstad, S. Phillips, S. Safra
The authors consider the following NP optimization problem: given a set of polynomials P_i(x), i=1,...,s, of degree at most 2 over GF(p) in n variables, find a root common to as many of the polynomials P_i(x) as possible. They prove that when the polynomials do not contain any squares as monomials, it is always possible to approximate this problem within a factor of p^2/(p-1) in polynomial time. This follows from the stronger statement that one can, in polynomial time, find an assignment that satisfies at least a (p-1)/p^2 fraction of the nontrivial equations. More interestingly, they prove that approximating the maximal number of polynomials with a common root to within a factor of p-ε is NP-hard. They also prove that for any constant δ<1, it is NP-hard to approximate the solution of quadratic equations over the rational numbers, or over the reals, within n^δ.
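To make the objective concrete, here is a small, purely illustrative Python sketch (not from the paper) that evaluates a hypothetical system of square-free quadratic polynomials over GF(p) at random assignments and reports the best fraction of equations satisfied; the coefficient-dictionary encoding of polynomials is an assumption of this sketch.

```python
import random

def eval_poly(poly, x, p):
    """poly maps monomials, given as variable-index tuples (), (i,), (i,j), to coefficients mod p."""
    total = 0
    for vars_, c in poly.items():
        term = c
        for v in vars_:
            term = (term * x[v]) % p
        total = (total + term) % p
    return total

def best_random_assignment(polys, n, p, trials=1000):
    """Largest fraction of polynomials simultaneously zeroed by any sampled assignment."""
    best = 0.0
    for _ in range(trials):
        x = [random.randrange(p) for _ in range(n)]
        sat = sum(1 for poly in polys if eval_poly(poly, x, p) == 0)
        best = max(best, sat / len(polys))
    return best

# Hypothetical square-free quadratic system over GF(5) in 3 variables:
#   x0*x1 + 2*x2 = 0   and   3*x0 + x1*x2 + 1 = 0
p, n = 5, 3
polys = [{(0, 1): 1, (2,): 2}, {(0,): 3, (1, 2): 1, (): 1}]
print(best_random_assignment(polys, n, p))
```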
{"title":"A well-characterized approximation problem","authors":"J. Håstad, S. Phillips, S. Safra","doi":"10.1109/ISTCS.1993.253463","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253463","url":null,"abstract":"The authors consider the following NP optimization problem: given a set of polynomials P/sub i/(x), i=1. . .s of degree at most 2 over GF(p) in n variables, find a root common to as many as possible of the polynomials P/sub i/(x). They prove that in the case when the polynomials do not contain any squares as monomials, it is always possible to approximate this problem within a factor of /sup p2///sub p-1/ in polynomial time. This follows from the stronger statement that one can, in polynomial time, find an assignment that satisfies at least /sup p-1///sub p2/ of the nontrivial equations. More interestingly, they prove that approximating the maximal number of polynomials with a common root to within a factor of p- in is NP-hard. They also prove that for any constant delta <1, it is NP-hard to approximate the solution of quadratic equations over the rational numbers, or over the reals, within n/sup delta /.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115684244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient on-line call control algorithms
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253460
J. Garay, I. Gopal, S. Kutten, Y. Mansour, M. Yung
The authors study the problem of on-line call control, i.e., accepting or rejecting an incoming call without knowledge of future calls. The problem is part of the more general problem of bandwidth allocation and management, and intuition suggests that knowledge of future call arrivals can be crucial to the performance of the system. They present on-line call control algorithms that, in some circumstances, are competitive, i.e., perform (up to a constant factor) as well as their off-line, clairvoyant counterparts, and they prove the optimality of some of these algorithms. The model is a line of nodes, and they investigate a variety of cases concerning the value of the calls. The value of a call is gained only if the call terminates successfully; if the call is rejected or prematurely terminated, no value is gained. The performance of an algorithm on a sequence of calls is then measured by the cumulative value achieved. The variety of call-value criteria captures the most natural cost assignments to network services.
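The line-network model can be made concrete with a toy sketch: each call requests a segment of links on the line, each link has unit capacity, and a naive greedy rule accepts a call only if every link on its path is free. This baseline, which also ignores call durations, is only an illustration of the setting and is not one of the competitive algorithms of the paper; the interval encoding of calls is an assumption.

```python
# Toy on-line admission control on a line of nodes.  A call (lo, hi, value)
# needs unit bandwidth on every link lo..hi-1.  Calls never terminate in this
# simplified sketch, so a reserved link stays busy forever.

def greedy_call_control(num_links, calls):
    """Process calls on-line; return (accepted calls, total value gained)."""
    busy = [False] * num_links            # one unit of capacity per link
    accepted, total_value = [], 0
    for lo, hi, value in calls:
        path = range(lo, hi)
        if any(busy[l] for l in path):
            continue                      # reject: some link on the path is occupied
        for l in path:
            busy[l] = True                # accept: reserve every link on the path
        accepted.append((lo, hi, value))
        total_value += value
    return accepted, total_value

print(greedy_call_control(5, [(0, 3, 2), (2, 5, 4), (3, 5, 1)]))
```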
{"title":"Efficient on-line call control algorithms","authors":"J. Garay, I. Gopal, S. Kutten, Y. Mansour, M. Yung","doi":"10.1109/ISTCS.1993.253460","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253460","url":null,"abstract":"The authors study the problem of on-line call control, i.e., the problem of accepting or rejecting an incoming call without knowledge of future calls. The problem is part of the more general problem of bandwidth allocation and management. Intuition suggests that knowledge of future call arrivals can be crucial to the performance of the system. They present on-line call control algorithms that, in some circumstances, are competitive, i.e., perform (up to a constant factor) as well as their off-line, clairvoyant counterparts. They also prove the optimality of some algorithms. The model is that of a line of nodes, and they investigate a variety of cases concerning the value of the calls. The value is gained only if the call terminates successfully, otherwise-if the call is rejected, or prematurely terminated-no value is gained. The performance of the algorithm is then measured by the cumulative value achieved, when given a sequence of calls. The variety of call value criteria captures the most natural cost assignments to network services.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122185470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zero-one permanent is #P-complete, a simpler proof
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253457
A. Ben-Dor, S. Halevi
Valiant (1979) proved that computing the permanent of a 0-1 matrix is #P-complete. The authors present another proof of the same result. The proof uses a 'black box' methodology, which facilitates its presentation. They also prove that deciding whether the permanent is divisible by a small prime is #P-hard. They conclude by proving that a polynomially bounded function cannot be #P-complete under 'reasonable' complexity assumptions.
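For concreteness, the permanent of an n×n matrix A is perm(A) = Σ_σ Π_i A[i][σ(i)], summed over all permutations σ. The brute-force sketch below just computes this definition (exponential time) and says nothing about the proof technique of the paper.

```python
from itertools import permutations

def permanent(a):
    """Permanent of a square 0-1 (or any numeric) matrix by expanding over all
    permutations -- exponential time, for illustration only."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= a[i][sigma[i]]
            if prod == 0:             # early exit once a zero entry kills the term
                break
        total += prod
    return total

print(permanent([[1, 1, 0],
                 [0, 1, 1],
                 [1, 0, 1]]))   # prints 2: the number of perfect matchings in this 0-1 matrix
```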
{"title":"Zero-one permanent is not=P-complete, a simpler proof","authors":"A. Ben-Dor, S. Halevi","doi":"10.1109/ISTCS.1993.253457","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253457","url":null,"abstract":"Valiant (1979) proved that computing the permanent of a 01-matrix is not=P-complete. The authors present another proof for the same result. The proof uses 'black box' methodology, which facilitates its presentation. They also prove that deciding whether the permanent is divisible by a small prime is not=P-hard. They conclude by proving that a polynomially bounded function can not be not=P-complete under 'reasonable' complexity assumptions.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132893924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Las-Vegas processor identity problem (how and when to be unique)
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253474
S. Kutten, R. Ostrovsky, B. Patt-Shamir
One of the fundamental problems in distributed computing is how identical processes with identical local memory can choose unique IDs, provided they can flip coins. The variant considered is the asynchronous shared memory model (atomic registers), and the basic correctness requirement is that upon termination the processes must always have unique IDs. The authors study this problem from several viewpoints. On the positive side, they present the first Las-Vegas protocol that solves the problem. The protocol terminates in (optimal) O(log n) expected time, using O(n) shared memory space, where n is the number of participating processes. On the negative side, they show that there is no Las-Vegas protocol unless n is known precisely, and that no finite-state Las-Vegas protocol can work under schedules that may depend on the history of the shared variable. For the case of an arbitrary adversary, they present a Las-Vegas protocol that uses O(n) unbounded registers.
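As a rough illustration of the problem statement only (not of the paper's asynchronous shared-memory protocol), the toy sketch below has identical processes extend random bit strings until all strings are distinct; the central coordination and the exact knowledge of n are simplifying assumptions, and uniqueness holds whenever the loop terminates, in the Las Vegas spirit.

```python
import random

def assign_unique_ids(n, bits_per_round=1, rng=random.Random(0)):
    """Toy synchronous ID selection: every process appends random bits each
    round; the loop exits only when all n strings are distinct, so the IDs
    returned are always unique.  Expected number of rounds is O(log n)."""
    ids = [""] * n
    rounds = 0
    while len(set(ids)) < n:                         # some processes still collide
        rounds += 1
        for i in range(n):
            ids[i] += "".join(str(rng.randrange(2)) for _ in range(bits_per_round))
    return ids, rounds

ids, rounds = assign_unique_ids(8)
print(rounds, ids)
```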
{"title":"The Las-Vegas processor identity problem (how and when to be unique)","authors":"S. Kutten, R. Ostrovsky, B. Patt-Shamir","doi":"10.1109/ISTCS.1993.253474","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253474","url":null,"abstract":"One of the fundamental problems in distributed computing is how identical processes with identical local memory can choose unique IDs provided they can flip a coin. The variant considered is the asynchronous shared memory model (atomic registers), and the basic correctness requirement is that upon termination the processes must always have unique IDs. The authors study this problem from several viewpoints. On the positive side, they present the first Las-Vegas protocol that solves the problem. The protocol terminates in (optimal) O(log n) expected time, using O(n) shared memory space, where n is the number of participating processes. On the negative side, they show that there is no Las-Vegas protocol unless n is known precisely, and that no finite-state Las-Vegas protocol can work under schedules that may depend on the history of the shared variable. For the case of arbitrary adversary, they present a Las-Vegas protocol that uses O(n) unbounded registers.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"270 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116545718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On fixed-parameter tractability and approximability of NP-hard optimization problems
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253478
L. Cai, Jianer Chen
Fixed-parameter tractability and approximability of NP-hard optimization problems are studied based on a model GC(s(n), Π_k^L). The main results are: (1) a class of NP-hard optimization problems, including dominating set and zero-one integer programming, is fixed-parameter tractable if and only if GC(s(n), Π_2^L) ⊆ P for some s(n) ∈ ω(log n); (2) most approximable NP-hard optimization problems are fixed-parameter tractable; in particular, the class MAX NP is fixed-parameter tractable; (3) a class of optimization problems does not have a fully polynomial-time approximation scheme unless GC(s(n), Π_k^L) ⊆ P for some s(n) ∈ ω(log n) and some k>1; and (4) every fixed-parameter tractable optimization problem can be approximated in polynomial time to a non-trivial ratio.
{"title":"On fixed-parameter tractability and approximability of NP-hard optimization problems","authors":"L. Cai, Jianer Chen","doi":"10.1109/ISTCS.1993.253478","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253478","url":null,"abstract":"Fixed-parameter tractability and approximability of NP-hard optimization problems are studied based on a model GC(s(n), Pi /sub k//sup L/). The main results are (1) a class of NP-hard optimization problems, including dominating-set and zero-one integer-programing, are fixed-parameter tractable if and only if GC(s(n), Pi /sub 2//sup L/) contained in P for some s(n) in omega (log n); (2) most approximable NP-hard optimization problems are fixed-parameter tractable. In particular, the class MAX NP is fixed-parameter tractable; (3) a class of optimization problems do not have fully polynomial time approximation scheme unless GC(s(n), Pi /sub k//sup L/) contained in P for some s(n) in omega (log n) and for some k>l; and (4) every fixed-parameter tractable optimization problem can be approximated in polynomial time to a non-trivial ratio.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125245932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hot-potato worm routing is almost as easy as store-and-forward packet routing
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253469
I. Newman, A. Schuster
The theory of worm routing (rather than packet routing) has recently attracted increased attention as an abstraction of the underlying communication mechanisms of many parallel machines, and routing worms in the hot-potato style is a desirable form of communication in high-speed optical interconnection networks. The authors develop a simple method for the design of parallel hot-potato worm routing algorithms. The basic approach is to simulate known packet routing algorithms, so that in each step worms are moved around instead of packets. For hot-potato permutation routing of worms of size k, the authors obtain the following results: an O(k^2.5 n) algorithm for the n*n mesh and an O(k^1.5 n) algorithm for the corresponding off-line problem; for the 2^n-node hypercube, an O(k^3 n log^2 n) deterministic algorithm and an O(k^3 n) randomized algorithm. Although the results are given for permutation routing on the mesh and the hypercube, the general method can be applied to many other networks and to more general communication patterns as well. Moreover, once better routing algorithms are found for the underlying network, the worm routing algorithm improves too.
{"title":"Hot-potato worm routing is almost as easy as store-and-forward packet routing","authors":"I. Newman, A. Schuster","doi":"10.1109/ISTCS.1993.253469","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253469","url":null,"abstract":"The theory of worm routing (rather than packet routing) recently attracts an increased attention as an abstraction of the underlying communication mechanisms in many parallel machines. Routing the worms in the hot-potato style is a desired form of communication in high-speed optical interconnection networks. The authors develop a simple method for the design of parallel hot-potato worm routing algorithms. The basic approach is to simulate known packet routing algorithms, so that in each step worms are moved around instead of packets. For hot-potato permutation routing of worms of size k the authors have the following results. They get a O(k/sup 2.5/n) algorithm for the n*n mesh, and a O(k/sup 1.5/n) algorithm for the corresponding offline problem. For the 2/sup n/-nodes hypercube they get a O(k/sup 3/n log /sup 2/n) deterministic algorithm, and a O(k/sup 3/n) randomized algorithm. Although the results are given for permutation routing on the mesh and the hypercube, the general method can be applied to many other networks and to more general communication patterns as well. Moreover, once better routing algorithms are found for the underlying network, the worm routing algorithm improves, too.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122919250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A formalization of superposition refinement
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253467
K. Sere
One form of program refinement is to add new variables to the state, together with code that manipulates these new variables. When the addition of new variables and associated computation code is done in a way that prevents the old computation of the program from being disturbed, the author calls it superposition. He studies superposition in the context of constructing parallel programs by stepwise refinement, where the added computation in each step could consist of an entire parallel algorithm. Hence, it is important to find methods that are easy to use and also guarantee the correctness of the operation. It is also important to be able to superpose one algorithm, such as a termination detection algorithm, onto several different original algorithms. He therefore gives a method for defining and using such superposable modules.
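A minimal sketch of the idea, assuming a toy sequential program rather than the parallel setting of the paper: the refined version adds a new variable and code that updates it, but never writes to the original variables, so the original computation proceeds undisturbed.

```python
# Toy illustration of superposition (not from the paper): gcd_superposed adds a
# new variable `steps` and code updating it, while the original computation on
# a and b is left exactly as in gcd_base.

def gcd_base(a, b):
    while b:
        a, b = b, a % b
    return a

def gcd_superposed(a, b):
    steps = 0                   # superposed variable: observes, never disturbs a, b
    while b:
        steps += 1              # superposed code touches only the new state
        a, b = b, a % b         # original computation, unchanged
    return a, steps             # original result, plus the superposed observation

assert gcd_superposed(252, 105)[0] == gcd_base(252, 105)
print(gcd_superposed(252, 105))   # (21, number of iterations)
```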
{"title":"A formalization of superposition refinement","authors":"K. Sere","doi":"10.1109/ISTCS.1993.253467","DOIUrl":"https://doi.org/10.1109/ISTCS.1993.253467","url":null,"abstract":"One form of program refinement is to add new variables to the state, together with code that manipulates these new variables. When the addition of new variables and associated computation code is done in a way that prevents the old computation of the program from being disturbed, then the author calls it superpositioning. He studies superposition in the context of constructing parallel programs following the stepwise refinement approach, where the added computation in each step could consist of an entire parallel algorithm. Hence, it is important to find methods that are easy to use and also guarantee the correctness of the operation. It is also important be able to superpose one algorithm, like a termination detection algorithm, onto several different original algorithms. He therefore gives a method for defining and using such superposable modules.<<ETX>>","PeriodicalId":281109,"journal":{"name":"[1993] The 2nd Israel Symposium on Theory and Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129089658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using selective path-doubling for parallel shortest-path computations
Pub Date: 1993-06-07 | DOI: 10.1109/ISTCS.1993.253481
E. Cohen
The author considers parallel shortest-path computations in weighted undirected graphs G=(V,E), where n=|V| and m=|E|. The standard path-doubling algorithm consists of O(log n) phases, where in each phase, for every triple of vertices (u_1, u_2, u_3) in V^3, she updates the distance between u_1 and u_3 to be no more than the sum of the previous-phase distances between (u_1, u_2) and (u_2, u_3). The work performed in each phase, O(n^3) (linear in the number of triples), is currently the bottleneck in NC shortest-path computations. She introduces a new algorithm that, for δ=o(n), considers only O(n δ^2) triples. Roughly, the resulting NC algorithm performs O(n δ^2) work and augments E with O(n δ) new weighted edges such that between every pair of vertices there exists a minimum-weight path of size (number of edges) Õ(n/δ), where Õ(f) denotes O(f polylog n). To compute shortest paths, she applies work-efficient algorithms, whose running time depends on the size of shortest paths, to the augmented graph. She obtains an O(t)-time, O(|S| n^2 + n^3/t^2)-work deterministic PRAM algorithm for computing shortest paths from |S| sources to all other vertices, where t ≤ n is a parameter. When the ratio of the largest edge weight to the smallest edge weight is n^O(polylog n), the algorithm computes shortest paths; when weights are arbitrary, it computes paths within a factor of 1+n^(-Ω(polylog n)) of shortest. This improves over previous bounds. She achieves improved O(|S|(n^2/t+m)+n^3/t^2) work for computing approximate distances to within a factor of (1+ε), for any fixed ε.