Leo Liberti, Benedetto Manca, Pierre-Louis Poirion
One way to solve very large linear programs in standard form is to apply a random projection to the constraints, then solve the projected linear program [63]. This will yield a guaranteed bound on the optimal value, as well as a solution to the projected linear program. The process of constructing an approximate solution of the original linear program is called solution retrieval. We improve theoretical bounds on the approximation error of the retrieved solution obtained as in [42], and propose an improved retrieval method based on alternating projections. We show empirical results illustrating the practical benefits of the new approach.
{"title":"Random projections for Linear Programming: an improved retrieval phase","authors":"Leo Liberti, Benedetto Manca, Pierre-Louis Poirion","doi":"10.1145/3617506","DOIUrl":"https://doi.org/10.1145/3617506","url":null,"abstract":"One way to solve very large linear programs in standard form is to apply a random projection to the constraints, then solve the projected linear program [63]. This will yield a guaranteed bound on the optimal value, as well as a solution to the projected linear program. The process of constructing an approximate solution of the original linear program is called solution retrieval. We improve theoretical bounds on the approximation error of the retrieved solution obtained as in [42], and propose an improved retrieval method based on alternating projections. We show empirical results illustrating the practical benefits of the new approach.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45868469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph coloring is the problem of coloring the vertices of a graph with as few colors as possible, avoiding monochromatic edges. It is one of the most fundamental NP-hard computational problems. For decades researchers have developed exact and heuristic methods for graph coloring. While methods based on propositional satisfiability (SAT) feature prominently among these exact methods, the encoding size is prohibitive for large graphs. For such graphs, heuristic methods have been proposed, with tabu search among the most successful ones. In this article, we enhance tabu search for graph coloring within the SAT-based local improvement (SLIM) framework. Our hybrid algorithm incrementally improves a candidate solution by repeatedly selecting small subgraphs and coloring them optimally with a SAT solver. This approach scales to dense graphs with several hundred thousand vertices and over 1.5 billion edges. Our experimental evaluation shows that our hybrid algorithm beats state-of-the-art methods on large dense graphs.
{"title":"SAT-Boosted Tabu Search for Coloring Massive Graphs","authors":"André Schidler, Stefan Szeider","doi":"10.1145/3603112","DOIUrl":"https://doi.org/10.1145/3603112","url":null,"abstract":"Graph coloring is the problem of coloring the vertices of a graph with as few colors as possible, avoiding monochromatic edges. It is one of the most fundamental NP-hard computational problems. For decades researchers have developed exact and heuristic methods for graph coloring. While methods based on propositional satisfiability (SAT) feature prominently among these exact methods, the encoding size is prohibitive for large graphs. For such graphs, heuristic methods have been proposed, with tabu search among the most successful ones. In this article, we enhance tabu search for graph coloring within the SAT-based local improvement (SLIM) framework. Our hybrid algorithm incrementally improves a candidate solution by repeatedly selecting small subgraphs and coloring them optimally with a SAT solver. This approach scales to dense graphs with several hundred thousand vertices and over 1.5 billion edges. Our experimental evaluation shows that our hybrid algorithm beats state-of-the-art methods on large dense graphs.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":"1 1","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2023-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64077986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We experimentally evaluate the performance of several Max Cut approximation algorithms. In particular, we compare the results of the Goemans and Williamson algorithm using semidefinite programming with Trevisan's algorithm using spectral partitioning. The former algorithm has a known 0.878 approximation guarantee, whereas the latter has a 0.614 approximation guarantee. We investigate whether this gap in approximation guarantees is evident in practice or whether the spectral algorithm performs as well as the SDP. We also compare the performance of these algorithms to the standard greedy Max Cut algorithm, which has a 0.5 approximation guarantee, two additional spectral algorithms, and a heuristic from Burer, Monteiro, and Zhang. The algorithms are tested on Erdős–Rényi random graphs, complete graphs from TSPLIB, and real-world graphs from the Network Repository. We find, unsurprisingly, that the spectral algorithms provide a significant speed advantage over the SDP. In our experiments, the spectral algorithms and the BMZ heuristic return cuts with values that are competitive with those of the SDP.
{"title":"An Experimental Evaluation of Semidefinite Programming and Spectral Algorithms for Max Cut","authors":"Renee Mirka, David P. Williamson","doi":"10.1145/3609426","DOIUrl":"https://doi.org/10.1145/3609426","url":null,"abstract":"We experimentally evaluate the performance of several Max Cut approximation algorithms. In particular, we compare the results of the Goemans and Williamson algorithm using semidefinite programming with Trevisan’s algorithm using spectral partitioning. The former algorithm has a known.878 approximation guarantee whereas the latter has a.614 approximation guarantee. We investigate whether this gap in approximation guarantees is evident in practice or whether the spectral algorithm performs as well as the SDP. We also compare the performances to the standard greedy Max Cut algorithm which has a.5 approximation guarantee, two additional spectral algorithms, and a heuristic from Burer, Monteiro, and Zhang. The algorithms are tested on Erdős-Renyi random graphs, complete graphs from TSPLIB, and real-world graphs from the Network Repository. We find, unsurprisingly, that the spectral algorithms provide a significant speed advantage over the SDP. In our experiments, the spectral algorithms and BMZ heuristic return cuts with values which are competitive with those of the SDP.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48988714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
José Alejandro Cornejo Acosta, Jesús García Díaz, Julio César Pérez Sansalvador, R. Z. Ríos-Mercado, Saúl Eduardo Pomares Hernández
The uniform capacitated vertex k-center problem is an NP-hard combinatorial optimization problem that models situations where each of k centers can serve only a limited number of customers, and the travel time or distance from the customers to their assigned center has to be minimized. This paper introduces a polynomial-time constructive heuristic algorithm that exploits the relationship between this problem and the minimum capacitated dominating set problem. The proposed heuristic is based on the one-hop farthest-first heuristic that has proven effective for the uncapacitated version of the problem. We carried out different empirical evaluations of the proposed heuristic, including an analysis of the effect of a parallel implementation of the algorithm, which significantly improved the running time on relatively large instances.
{"title":"A constructive heuristic for the uniform capacitated vertex k-center problem","authors":"José Alejandro Cornejo Acosta, Jesús García Díaz, Julio César Pérez Sansalvador, R. Z. Ríos-Mercado, Saúl Eduardo Pomares Hernández","doi":"10.1145/3604911","DOIUrl":"https://doi.org/10.1145/3604911","url":null,"abstract":"The uniform capacitated vertex k-center problem is an (mathcal {NP} ) -hard combinatorial optimization problem that models real situations where k centers can only attend a maximum number of customers, and the travel time or distance from the customers to their assigned center has to be minimized. This paper introduces a polynomial-time constructive heuristic algorithm that exploits the relationship between this problem and the minimum capacitated dominating set problem. The proposed heuristic is based on the one-hop farthest-first heuristic that has proven effective for the uncapacitated version of the problem. We carried out different empirical evaluations of the proposed heuristics, including an analysis of the effect of a parallel implementation of the algorithm, which significantly improved the running time for relatively large instances.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47777595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes methods for efficiently computing the anonymity of entities in networks. We do so by partitioning nodes into equivalence classes where a node is k-anonymous if it is equivalent to k − 1 other nodes. This assessment of anonymity is crucial when one wants to share data and must ensure that the anonymity of the entities represented complies with privacy laws. Additionally, in such an assessment, it is necessary to account for a realistic amount of information in the hands of a possible attacker that attempts to de-anonymize entities in the network. However, measures introduced in earlier work often assume a fixed amount of attacker knowledge. Therefore, in this work, we use a new parameterized measure for anonymity called d-k-anonymity. This measure can be used to model the scenario where an attacker has perfect knowledge of a node's surroundings up to a given distance d. This poses nontrivial computational challenges, as naive approaches would employ large numbers of possibly expensive graph isomorphism checks. This paper proposes novel algorithms that substantially reduce this computational burden. In particular, we present an iterative approach, assisted by techniques for preprocessing nodes that are trivially automorphic and heuristics that exploit graph invariants. We evaluate our algorithms on three well-known graph models and a wide range of empirical network datasets. Results show that our approaches significantly speed up the computation by multiple orders of magnitude, which allows one to compute d-k-anonymity for a range of meaningful values of d on large empirical networks with tens of thousands of nodes and over a million edges.
{"title":"Algorithms for Efficiently Computing Structural Anonymity in Complex Networks","authors":"R. G. de Jong, Mark P. J. van der Loo, F. Takes","doi":"10.1145/3604908","DOIUrl":"https://doi.org/10.1145/3604908","url":null,"abstract":"This paper proposes methods for efficiently computing the anonymity of entities in networks. We do so by partitioning nodes into equivalence classes where a node is k-anonymous if it is equivalent to k − 1 other nodes. This assessment of anonymity is crucial when one wants to share data and must ensure the anonymity of entities represented is compliant with privacy laws. Additionally, in such an assessment, it is necessary to account for a realistic amount of information in the hands of a possible attacker that attempts to de-anonymize entities in the network. However, measures introduced in earlier work often assume a fixed amount of attacker knowledge. Therefore, in this work, we use a new parameterized measure for anonymity called d-k-anonymity. This measure can be used to model the scenario where an attacker has perfect knowledge of a node’s surroundings up to a given distance d. This poses nontrivial computational challenges, as naive approaches would employ large numbers of possibly computationally expensive graph isomorphism checks. This paper proposes novel algorithms that severely reduce this computational burden. In particular, we present an iterative approach, assisted by techniques for preprocessing nodes that are trivially automorphic and heuristics that exploit graph invariants. We evaluate our algorithms on three well-known graph models and a wide range of empirical network datasets. Results show that our approaches significantly speed up the computation by multiple orders of magnitude, which allows one to compute d-k-anonymity for a range of meaningful values of d on large empirical networks with tens of thousands of nodes and over a million edges.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45345726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we study a fingerprint-based minimal perfect hash function (FMPH for short). While FMPH is not as space-efficient as some other minimal perfect hash functions (for example, RecSplit, CHD, or PTHash), it has a number of practical advantages that make it worthy of consideration. FMPH is simple and quite fast to evaluate. Its construction requires very little auxiliary memory, takes a short time, and, in addition, can be parallelized or carried out without holding the keys in memory. We propose an effective method (called FMPHGO) that reduces the size of FMPH, as well as a number of implementation improvements. In addition, we experimentally study FMPHGO's performance and find the best values for its parameters. Our benchmarks show that with our method and an efficient structure to support rank queries on a bit vector, the FMPH size can be reduced to about 2.1 bits/key, which is close to the size achieved by state-of-the-art methods and noticeably larger only in comparison with RecSplit. FMPHGO preserves most of the FMPH advantages mentioned above, but at the cost of significantly slower construction. Still, FMPHGO's construction speed remains competitive with methods of similar space efficiency (like CHD or PTHash) and seems good enough for practical applications.
{"title":"Fingerprinting-based minimal perfect hashing revisited","authors":"Piotr Beling","doi":"10.1145/3596453","DOIUrl":"https://doi.org/10.1145/3596453","url":null,"abstract":"In the paper we study a fingerprint-based minimal perfect hash function (FMPH for short). While FMPH is not as space-efficient as some other minimal perfect hash functions (for example RecSplit, CHD, or PTHash), it has a number of practical advantages that make it worthy of consideration. FMPH is simple and quite fast to evaluate. Its construction requires very little auxiliary memory, takes a short time and, in addition, can be parallelized or carried out without holding keys in memory. In this paper, we propose an effective method (called FMPHGO) that reduces the size of FMPH, as well as a number of implementation improvements. In addition, we experimentally study FMPHGO performance and find the best values for its parameters. Our benchmarks show that with our method and an efficient structure to support the rank queries on a bit vector, the FMPH size can be reduced to about 2.1 bits/key, which is close to the size achieved by state-of-the-art methods and noticeably larger only compared to RecSplit. FMPHGO preserves most of the FMPH advantages mentioned above, but significantly reduces its construction speed. However, FMPHGO’s construction speed is still competitive with methods of similar space efficiency (like CHD or PTHash), and seems to be good enough for practical applications.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47195949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Loïc Crombez, G. D. D. Fonseca, Florian Fontan, Y. Gérard, A. Gonzalez-Lorenzo, P. Lafourcade, Luc Libralesso, B. Momège, Jack Spalding-Jamieson, Brandon Zhang, D. Zheng
CG:SHOP is an annual geometric optimization challenge, and the 2022 edition posed the problem of coloring a certain geometric graph defined by line segments. Surprisingly, the top three teams all used the same technique, called conflict optimization. The technique was introduced in the 2021 edition of the challenge to solve a coordinated motion planning problem. In this paper, we present the technique in the more general framework of binary constraint satisfaction problems (binary CSPs). The top three teams then describe their different implementations of the same underlying strategy. We evaluate the performance of these implementations on vertex coloring of not only geometric graphs but also other types of graphs.
{"title":"Conflict Optimization for Binary CSP Applied to Minimum Partition into Plane Subgraphs and Graph Coloring","authors":"Loïc Crombez, G. D. D. Fonseca, Florian Fontan, Y. Gérard, A. Gonzalez-Lorenzo, P. Lafourcade, Luc Libralesso, B. Momège, Jack Spalding-Jamieson, Brandon Zhang, D. Zheng","doi":"10.1145/3588869","DOIUrl":"https://doi.org/10.1145/3588869","url":null,"abstract":"CG:SHOP is an annual geometric optimization challenge and the 2022 edition proposed the problem of coloring a certain geometric graph defined by line segments. Surprisingly, the top three teams used the same technique, called conflict optimization. This technique has been introduced in the 2021 edition of the challenge, to solve a coordinated motion planning problem. In this paper, we present the technique in the more general framework of binary constraint satisfaction problems (binary CSP). Then, the top three teams describe their different implementations of the same underlying strategy. We evaluate the performance of those implementations to vertex color not only geometric graphs, but also other types of graphs.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46367394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Apostolos Chalkis, I. Emiris, Vissarion Fisikopoulos
We tackle the problem of efficiently approximating the volume of convex polytopes given in three different representations: H-polytopes, which have been studied extensively, V-polytopes, and zonotopes (Z-polytopes). We design a novel practical Multiphase Monte Carlo algorithm that leverages random walks based on billiard trajectories, as well as new empirical convergence tests and a simulated annealing schedule over adaptive convex bodies. After tuning several parameters of our proposed method, we present a detailed experimental evaluation of the tuned algorithm on a rich dataset containing Birkhoff polytopes and polytopes from structural biology. Our open-source implementation tackles problems that have so far been intractable, offering the first software to scale to thousands of dimensions for H-polytopes and to hundreds of dimensions for V- and Z-polytopes on moderate hardware. Finally, we illustrate our software by evaluating Z-polytope approximations.
{"title":"A practical algorithm for volume estimation based on billiard trajectories and simulated annealing","authors":"Apostolos Chalkis, I. Emiris, Vissarion Fisikopoulos","doi":"10.1145/3584182","DOIUrl":"https://doi.org/10.1145/3584182","url":null,"abstract":"We tackle the problem of efficiently approximating the volume of convex polytopes, when these are given in three different representations: H-polytopes, which have been studied extensively, V-polytopes, and zonotopes (Z-polytopes). We design a novel practical Multiphase Monte Carlo algorithm that leverages random walks based on billiard trajectories, as well as a new empirical convergence tests and a simulated annealing schedule of adaptive convex bodies. After tuning several parameters of our proposed method, we present a detailed experimental evaluation of our tuned algorithm using a rich dataset containing Birkhoff polytopes and polytopes from structural biology. Our open-source implementation tackles problems that have been intractable so far, offering the first software to scale up in thousands of dimensions for H-polytopes and in the hundreds for V- and Z-polytopes on moderate hardware. Last, we illustrate our software in evaluating Z-polytope approximations.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47231913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of the “CG:SHOP Challenge 2019” was to generate optimal-area polygonizations of a planar point set. We describe here the algorithm that won the challenge. It is a two-phase algorithm based on the node-insertion move, a technique that originates from the TSP. In the first phase, we use constrained triangulations to efficiently check the simplicity of the generated polygonizations. In the second phase, we perform visibility searches to generate a wider variety of polygonizations. In both phases, the simulated annealing metaheuristic is used to approach the optimum.
{"title":"Optimal Area Polygonization by Triangulation and Visibility Search","authors":"Julien Lepagnot, L. Moalic, Dominique Schmitt","doi":"10.1145/3503953","DOIUrl":"https://doi.org/10.1145/3503953","url":null,"abstract":"The aim of the “CG:SHOP Challenge 2019” was to generate optimal area polygonizations of a planar point set. We describe here the algorithm that won the challenge. It is a two-phase algorithm based on the node-insertion move technique, which comes from the TSP. In the first phase, we use constrained triangulations to check efficiently the simplicity of the generated polygonizations. In the second phase, we perform visibility searches to be able to generate a wider variety of polygonizations. In both phases, the simulated annealing metaheuristic is implemented to approach the optimum.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":" ","pages":"1 - 23"},"PeriodicalIF":0.0,"publicationDate":"2022-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49628692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study exact, efficient, and practical algorithms for route planning applications in large road networks. On the one hand, such algorithms should be able to answer shortest path queries within milliseconds. On the other hand, routing applications often require integrating the current traffic situation, planning ahead with predictions for future traffic, respecting forbidden turns, and many other features depending on the specific application. Therefore, such algorithms must be flexible and able to support a variety of problem variants. In this work, we revisit the A* algorithm to build a simple, extensible, and unified algorithmic framework applicable to many route planning problems. A* has been previously used for routing in road networks. However, its performance was not competitive because no sufficiently fast and tight distance estimation function was available. We present a novel, efficient, and accurate A* heuristic using Contraction Hierarchies, another popular speedup technique. The core of our heuristic is a new Contraction Hierarchies query algorithm called Lazy RPHAST, which can efficiently compute shortest distances from many incrementally provided sources toward a common target. Additionally, we describe A* optimizations to accelerate the processing of low-degree vertices, which are typical in road networks, and present a new pruning criterion for symmetrical bidirectional A*. An extensive experimental study confirms the practicality of our approach for many applications.
{"title":"Using Incremental Many-to-One Queries to Build a Fast and Tight Heuristic for A* in Road Networks","authors":"Ben Strasser, Tim Zeitz","doi":"10.1145/3571282","DOIUrl":"https://doi.org/10.1145/3571282","url":null,"abstract":"We study exact, efficient, and practical algorithms for route planning applications in large road networks. On the one hand, such algorithms should be able to answer shortest path queries within milliseconds. On the other hand, routing applications often require integrating the current traffic situation, planning ahead with predictions for future traffic, respecting forbidden turns, and many other features depending on the specific application. Therefore, such algorithms must be flexible and able to support a variety of problem variants. In this work, we revisit the A* algorithm to build a simple, extensible, and unified algorithmic framework applicable to many route planning problems. A* has been previously used for routing in road networks. However, its performance was not competitive because no sufficiently fast and tight distance estimation function was available. We present a novel, efficient, and accurate A* heuristic using Contraction Hierarchies, another popular speedup technique. The core of our heuristic is a new Contraction Hierarchies query algorithm called Lazy RPHAST, which can efficiently compute shortest distances from many incrementally provided sources toward a common target. Additionally, we describe A* optimizations to accelerate the processing of low-degree vertices, which are typical in road networks, and present a new pruning criterion for symmetrical bidirectional A*. An extensive experimental study confirms the practicality of our approach for many applications.","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":" ","pages":"1 - 28"},"PeriodicalIF":0.0,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44738989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}