
Journal of Experimental Algorithmics: Latest Publications

Experimental Comparison of PC-Trees and PQ-Trees
Q2 Mathematics | Pub Date: 2021-06-28 | DOI: 10.1145/3611653
S. D. Fink, Matthias Pfretzschner, Ignaz Rutter
PQ-trees and PC-trees are data structures that represent sets of linear and circular orders, respectively, subject to constraints that specific subsets of elements have to be consecutive. While equivalent to each other, PC-trees are conceptually much simpler than PQ-trees; updating a PC-tree so that a set of elements becomes consecutive requires only a single operation, whereas PQ-trees use an update procedure that is described in terms of nine transformation templates that have to be recursively matched and applied. Despite these theoretical advantages, to date no practical PC-tree implementation is available. This might be due to the original description by Hsu and McConnell [14] in some places only sketching the details of the implementation. In this paper, we describe two alternative implementations of PC-trees. For the first one, we follow the approach by Hsu and McConnell, filling in the necessary details and also proposing improvements on the original algorithm. For the second one, we use a different technique for efficiently representing the tree using a Union-Find data structure. In an extensive experimental evaluation we compare our implementations to a variety of other implementations of PQ-trees that are available on the web as part of academic and other software libraries. Our results show that both PC-tree implementations beat their closest fully correct competitor, the PQ-tree implementation from the OGDF library [6, 15], by a factor of 2 to 4, showing that PC-trees are not only conceptually simpler but also fast in practice. Moreover, we find the Union-Find-based implementation, while having a slightly worse asymptotic runtime, to be twice as fast as the one based on the description by Hsu and McConnell.
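The second implementation mentioned above hinges on a Union-Find (disjoint-set) structure to merge tree nodes cheaply. The minimal sketch below, with path compression and union by size, is our own illustration of such a structure; the class and method names are assumptions and are not taken from the authors' code.

```python
# Illustrative union-find (disjoint-set) structure of the kind a PC-tree
# representation could build on; not the authors' implementation.
class UnionFind:
    def __init__(self, n: int):
        self.parent = list(range(n))  # each element starts as its own root
        self.size = [1] * n           # subtree sizes for union by size

    def find(self, x: int) -> int:
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        # Path compression: point every node on the path directly at the root.
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a: int, b: int) -> int:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # attach the smaller tree below the larger one
        self.size[ra] += self.size[rb]
        return ra
```

With both optimizations, find and union run in near-constant amortized time, which helps explain why a representation with a slightly worse asymptotic bound can still be fast in practice.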
Citations: 2
High-Quality Hypergraph Partitioning
Q2 Mathematics | Pub Date: 2021-06-16 | DOI: 10.1145/3529090
Sebastian Schlag, Tobias Heuer, Lars Gottesbüren, Yaroslav Akhremtsev, Christian Schulz, P. Sanders
Hypergraphs are a generalization of graphs where edges (aka nets) are allowed to connect more than two vertices. They have a similarly wide range of applications as graphs. This article considers the fundamental and intensively studied problem of balanced hypergraph partitioning (BHP), which asks for partitioning the vertices into k disjoint blocks of bounded size while minimizing an objective function over the hyperedges. Here, we consider the two most commonly used objectives: the cut-net metric and the connectivity metric. We describe our open-source hypergraph partitioner KaHyPar, which is based on the successful multi-level approach, driving it to the extreme of using one level for (almost) every vertex. Using carefully designed data structures and dynamic update techniques, this approach turns out to have a very good time–quality tradeoff. We present two preprocessing techniques: pin sparsification using locality-sensitive hashing (LSH) and community detection based on the Louvain algorithm. The community structure is used to guide the coarsening process that incrementally contracts vertices. Portfolio-based partitioning of the contracted hypergraph then already achieves a good initial solution. While reversing the contraction process, a combination of several refinement techniques achieves a good final partitioning. In particular, we support a highly-localized local search that can directly produce a k-way partitioning and complement this with flow-based techniques that take a more global view. Optionally, a memetic algorithm evolves a pool of solution candidates to an overall good solution. We evaluate KaHyPar on a large set of instances from a wide range of application domains. With respect to quality, KaHyPar outperforms all previously considered systems that can handle large hypergraphs, such as hMETIS, PaToH, Mondriaan, or Zoltan. Somewhat surprisingly, to some extent this even extends to graph partitioners such as KaHIP when considering the special case of graphs. KaHyPar is also faster than most of these systems, except for PaToH, which represents a different speed–quality tradeoff.
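For concreteness, both objectives can be evaluated directly from a k-way assignment: a net contributes its weight once if it is cut, and its weight times one less than the number of blocks it touches under the connectivity (also called (lambda - 1)) metric. The sketch below is a hedged illustration with our own function names; it is not part of KaHyPar's interface.

```python
# Evaluate the cut-net and connectivity objectives for a given partition.
# nets: list of nets, each an iterable of vertex ids; block: dict vertex -> block id.
def cut_net(nets, block, weights=None):
    weights = weights or [1] * len(nets)
    # A net is cut if its pins span more than one block.
    return sum(w for net, w in zip(nets, weights)
               if len({block[v] for v in net}) > 1)

def connectivity(nets, block, weights=None):
    weights = weights or [1] * len(nets)
    # (lambda - 1) metric: each net pays for every extra block it touches.
    return sum((len({block[v] for v in net}) - 1) * w
               for net, w in zip(nets, weights))

# Tiny example: four vertices, two blocks, three nets.
nets = [(0, 1), (1, 2, 3), (0, 3)]
block = {0: 0, 1: 0, 2: 1, 3: 1}
print(cut_net(nets, block), connectivity(nets, block))  # -> 2 2
```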
Citations: 46
Enumeration of Far-apart Pairs by Decreasing Distance for Faster Hyperbolicity Computation
Q2 Mathematics | Pub Date: 2021-04-26 | DOI: 10.1145/3569169
D. Coudert, A. Nusser, L. Viennot
Hyperbolicity is a graph parameter that indicates how much the shortest-path distance metric of a graph deviates from a tree metric. It is used in various fields such as networking, security, and bioinformatics for the classification of complex networks, the design of routing schemes, and the analysis of graph algorithms. Despite recent progress, computing the hyperbolicity of a graph remains challenging. Indeed, the best known algorithm has time complexity O(n^3.69), which is prohibitive for large graphs, and the most efficient algorithms in practice have space complexity O(n^2). Thus, time as well as space are bottlenecks for computing the hyperbolicity. In this article, we design a tool for enumerating all far-apart pairs of a graph by decreasing distances. A node pair (u, v) of a graph is far-apart if both v is a leaf of all shortest-path trees rooted at u and u is a leaf of all shortest-path trees rooted at v. This notion was previously used to drastically reduce the computation time for hyperbolicity in practice. However, it required the computation of the distance matrix to sort all pairs of nodes by decreasing distance, which requires an infeasible amount of memory already for medium-sized graphs. We present a new data structure that avoids this memory bottleneck in practice and for the first time enables computing the hyperbolicity of several large graphs that were far out of reach using previous algorithms. For some instances, we reduce the memory consumption by at least two orders of magnitude. Furthermore, we show that for many graphs, only a very small fraction of far-apart pairs has to be considered for the hyperbolicity computation, explaining this drastic reduction of memory. As iterating over far-apart pairs in decreasing order without storing them explicitly is a very general tool, we believe that our approach might also be relevant to other problems.
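As background, hyperbolicity is commonly defined through the four-point condition: for every quadruple of vertices, form the three pairwise distance sums and charge half the gap between the largest and the second largest; the graph's hyperbolicity is the maximum over all quadruples. The brute-force sketch below (assuming a connected, unweighted graph given as an adjacency list) only pins down this definition; avoiding exactly this kind of exhaustive O(n^4) enumeration is what the paper is about.

```python
# Brute-force hyperbolicity via the four-point condition; illustration only.
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def hyperbolicity(adj):
    nodes = list(adj)
    d = {v: bfs_distances(adj, v) for v in nodes}  # full distance matrix
    best = 0.0
    for u, v, w, x in combinations(nodes, 4):
        s1 = d[u][v] + d[w][x]
        s2 = d[u][w] + d[v][x]
        s3 = d[u][x] + d[v][w]
        a, b, _ = sorted((s1, s2, s3), reverse=True)
        best = max(best, (a - b) / 2)  # half the gap between the two largest sums
    return best

# A cycle on four vertices has hyperbolicity 1.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(hyperbolicity(cycle))  # -> 1.0
```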
Citations: 3
Engineering Nearly Linear-time Algorithms for Small Vertex Connectivity
Q2 Mathematics | Pub Date: 2021-03-29 | DOI: 10.1145/3564822
Max Franck, Sorrachai Yingchareonthawornchai
Vertex connectivity is a well-studied concept in graph theory with numerous applications. A graph is k-connected if it remains connected after removing any k - 1 vertices. The vertex connectivity of a graph is the maximum k such that the graph is k-connected. There is a long history of algorithmic development for efficiently computing vertex connectivity. Recently, two near linear-time algorithms for small k were introduced by Forster et al. [SODA 2020]. Prior to that, the best-known algorithm was one by Henzinger et al. [FOCS 1996] with quadratic running time when k is small. In this article, we study the practical performance of the algorithms by Forster et al. In addition, we introduce a new heuristic on a key subroutine called local cut detection, which we call degree counting. We prove that the new heuristic improves space-efficiency (which can be good for caching purposes) and allows the subroutine to terminate earlier. According to experimental results on random graphs with planted vertex cuts, random hyperbolic graphs, and real-world graphs with vertex connectivity between 4 and 8, the degree counting heuristic offers a factor of 2–4 speedup over the original non-degree counting version for small graphs and almost 20 times for some graphs with millions of edges. It also outperforms the previous state-of-the-art algorithm by Henzinger et al., even on relatively small graphs.
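To pin down the quantity being computed, the sketch below checks k-connectivity directly from the definition: remove every (k - 1)-subset of vertices and test whether the remaining graph stays connected. This brute force is exponential in k and is only meant to clarify the definition; it is neither the algorithm of Forster et al. nor the heuristic studied here.

```python
# Definition-level k-connectivity check; exponential, illustration only.
from collections import deque
from itertools import combinations

def is_connected(adj, removed=frozenset()):
    remaining = [v for v in adj if v not in removed]
    if not remaining:
        return False
    seen, queue = {remaining[0]}, deque([remaining[0]])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in removed and w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(remaining)

def is_k_connected(adj, k):
    if len(adj) <= k:
        return False
    # The graph must stay connected after removing any k - 1 vertices.
    return all(is_connected(adj, frozenset(cut))
               for cut in combinations(adj, k - 1))

# A cycle on four vertices is 2-connected but not 3-connected.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_k_connected(cycle, 2), is_k_connected(cycle, 3))  # -> True False
```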
Citations: 0
Minimum Scan Cover and Variants: Theory and Experiments
Q2 Mathematics | Pub Date: 2021-03-26 | DOI: 10.1145/3567674
K. Buchin, Alexander Hill, S. Fekete, Linda Kleist, I. Kostitsyna, Dominik Krupke, R. Lambers, Martijn Struijs
We consider a spectrum of geometric optimization problems motivated by contexts such as satellite communication and astrophysics. In the problem Minimum Scan Cover with Angular Costs, we are given a graph G that is embedded in Euclidean space. The edges of G need to be scanned, i.e., probed from both of their vertices. To scan an edge, its two endpoints need to face each other; changing the heading of a vertex incurs some cost in terms of energy or rotation time that is proportional to the corresponding rotation angle. Our goal is to compute schedules that minimize the following objective functions: (i) in Minimum Makespan Scan Cover (MSC-MS), this is the time until all edges are scanned; (ii) in Minimum Total Energy Scan Cover (MSC-TE), the sum of all rotation angles; and (iii) in Minimum Bottleneck Energy Scan Cover (MSC-BE), the maximum total rotation angle at one vertex. Previous theoretical work on MSC-MS revealed a close connection to graph coloring and the cut cover problem, leading to hardness and approximability results. In this article, we present polynomial-time algorithms for one-dimensional (1D) instances of MSC-TE and MSC-BE but NP-hardness proofs for bipartite 2D instances. For bipartite graphs in 2D, we also give 2-approximation algorithms for both MSC-TE and MSC-BE. Most importantly, we provide a comprehensive study of practical methods for all three problems. We compare three different mixed-integer programming and two constraint programming approaches and show how to compute provably optimal solutions for geometric instances with up to 300 edges. Additionally, we compare the performance of different meta-heuristics for even larger instances.
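The angular cost model can be made concrete with a small sketch: each vertex has a heading (the direction it currently faces), and re-aiming it from one neighbor to another costs the smaller of the two rotation angles between the corresponding headings. The 2D helpers below are our own illustration of this cost, not code from the paper.

```python
# Heading and rotation cost in the plane; illustrative of the MSC cost model.
import math

def heading(p, q):
    """Direction (in radians) a vertex at point p must face to look at point q."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def rotation_cost(a, b):
    """Minimal rotation angle between headings a and b, in [0, pi]."""
    diff = abs(a - b) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)

# A vertex at the origin scans towards (1, 0) and then towards (0, 1): it turns by pi/2.
p = (0.0, 0.0)
print(rotation_cost(heading(p, (1.0, 0.0)), heading(p, (0.0, 1.0))))  # -> 1.5707963...
```

Summing these costs over all reorientations gives the MSC-TE objective, the maximum per-vertex total gives MSC-BE, and interpreting rotation as time leads to the makespan objective MSC-MS.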
Citations: 0
Recent Advances in Fully Dynamic Graph Algorithms – A Quick Reference Guide
Q2 Mathematics | Pub Date: 2021-02-22 | DOI: 10.1145/3555806
Kathrin Hanauer, M. Henzinger, Christian Schulz
In recent years, significant advances have been made in the design and analysis of fully dynamic algorithms. However, these theoretical results have received very little attention from the practical perspective. Few of the algorithms are implemented and tested on real datasets, and their practical potential is far from understood. Here, we present a quick reference guide to recent engineering and theory results in the area of fully dynamic graph algorithms.
Citations: 22
Buffered Streaming Graph Partitioning
Q2 Mathematics | Pub Date: 2021-02-18 | DOI: 10.1145/3546911
Marcelo Fonseca Faraj, Christian Schulz
Partitioning graphs into blocks of roughly equal size is a widely used tool when processing large graphs. Currently, there is a gap in the space of available partitioning algorithms. On the one hand, there are streaming algorithms that have been adopted to partition massive graph data on small machines. In the streaming model, vertices arrive one at a time including their neighborhood, and then have to be assigned directly to a block. These algorithms can partition huge graphs quickly with little memory, but they produce partitions with low solution quality. On the other hand, there are offline (shared-memory) multilevel algorithms that produce partitions of high quality but also need a machine with enough memory to partition huge networks. In this work, we take a first step toward closing this gap by presenting an algorithm that computes significantly improved partitions of huge graphs using a single machine with little memory in a streaming setting. First, we adopt the buffered streaming model, which is a more reasonable approach in practice. In this model, a processing element can store a buffer of nodes together with their edges before making assignment decisions. When our algorithm receives a batch of nodes, we build a model graph that represents the nodes of the batch and the already present partition structure. This model enables us to apply multilevel algorithms and, in turn, to compute much higher-quality solutions of huge graphs on cheap machines than previously possible. To partition the model graph, we develop a multilevel algorithm that optimizes an objective function that has previously been shown to be effective for the streaming setting. Surprisingly, this also removes the dependency on the number of blocks k from the running time, compared to the previous state of the art. Overall, our algorithm computes, on average, 75.9% better solutions than Fennel [35] using a very small buffer size. In addition, for large values of k, our algorithm becomes faster than Fennel.
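As a point of reference for the comparison above, a one-pass streaming partitioner assigns each arriving vertex greedily: a block's score is the number of already-placed neighbors it contains minus a balance penalty that grows with its size. The sketch below follows the Fennel objective as it is commonly stated; the parameter choices are assumptions made for illustration, not the configuration used in the paper's experiments.

```python
# Fennel-style one-pass streaming assignment; hedged illustration only.
import math

def fennel_stream(stream, k, n, m, gamma=1.5):
    """stream yields (vertex, neighbors); returns a dict vertex -> block id."""
    alpha = math.sqrt(k) * m / (n ** gamma)  # commonly used balance weight
    blocks = [set() for _ in range(k)]
    assignment = {}
    for v, neighbors in stream:
        def score(i):
            gain = sum(1 for u in neighbors if u in blocks[i])       # neighbors already in block i
            penalty = alpha * gamma * len(blocks[i]) ** (gamma - 1)  # grows with block size
            return gain - penalty
        best = max(range(k), key=score)
        blocks[best].add(v)
        assignment[v] = best
    return assignment

# Toy stream: a path 0-1-2-3 split into k = 2 blocks.
stream = [(0, []), (1, [0]), (2, [1]), (3, [2])]
print(fennel_stream(stream, k=2, n=4, m=3))  # e.g. {0: 0, 1: 0, 2: 1, 3: 1}
```

The buffered model replaces this vertex-by-vertex decision with a multilevel pass over a whole batch, which is where the reported quality gains come from.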
Citations: 7
Parameterized Complexity: The Main Ideas and Connections to Practical Computing
Q2 Mathematics | Pub Date: 2000-01-01 | DOI: 10.1007/3-540-36383-1_3
M. Fellows
{"title":"Parameterized Complexity: The Main Ideas and Connections to Practical Computing","authors":"M. Fellows","doi":"10.1007/3-540-36383-1_3","DOIUrl":"https://doi.org/10.1007/3-540-36383-1_3","url":null,"abstract":"","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":"73 1","pages":"1-19"},"PeriodicalIF":0.0,"publicationDate":"2000-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74185909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 51
Algorithm Engineering for Parallel Computation
Q2 Mathematics | Pub Date: 2000-01-01 | DOI: 10.1007/3-540-36383-1_1
David A. Bader, B. Moret, P. Sanders
{"title":"Algorithm Engineering for Parallel Computation","authors":"David A. Bader, B. Moret, P. Sanders","doi":"10.1007/3-540-36383-1_1","DOIUrl":"https://doi.org/10.1007/3-540-36383-1_1","url":null,"abstract":"","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":"12 1","pages":"1-23"},"PeriodicalIF":0.0,"publicationDate":"2000-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81915535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
Reconstructing Optimal Phylogenetic Trees: A Challenge in Experimental Algorithmics
Q2 Mathematics | Pub Date: 2000-01-01 | DOI: 10.1007/3-540-36383-1_8
B. Moret, T. Warnow
{"title":"Reconstructing Optimal Phylogenetic Trees: A Challenge in Experimental Algorithmics","authors":"B. Moret, T. Warnow","doi":"10.1007/3-540-36383-1_8","DOIUrl":"https://doi.org/10.1007/3-540-36383-1_8","url":null,"abstract":"","PeriodicalId":53707,"journal":{"name":"Journal of Experimental Algorithmics","volume":"78 1","pages":"163-180"},"PeriodicalIF":0.0,"publicationDate":"2000-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76394356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26