
Bulletin of the Society of Sea Water Science, Japan: Latest Publications

Efficient Yao Graph Construction
Pub Date : 2023-03-14 DOI: 10.48550/arXiv.2303.07858
Daniel Funke, P. Sanders
Yao graphs are geometric spanners that connect each point of a given point set to its nearest neighbor in each of $k$ cones drawn around it. Yao graphs were introduced to construct minimum spanning trees in $d$-dimensional spaces. Moreover, they are used, for instance, in topology control in wireless networks. An optimal O(n log n) time algorithm to construct Yao graphs for a given point set has been proposed in the literature but has, to the best of our knowledge, never been implemented. Instead, popular packages use algorithms with quadratic complexity to construct these graphs. In this paper we present the first implementation of the optimal Yao graph algorithm. We develop and tune the data structures required to achieve the O(n log n) bound and detail the algorithmic adaptations necessary to take the original algorithm from theory to practice. We propose a priority queue data structure that separates static and dynamic events and might be of independent interest for other sweepline algorithms. Additionally, we propose a new Yao graph algorithm based on a uniform grid data structure that performs well for medium-sized inputs. We evaluate our implementations on a wide variety of synthetic and real-world datasets and show that our implementation outperforms current publicly available implementations by at least an order of magnitude.
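The cone-based definition can be made concrete with the quadratic baseline the paper sets out to beat. The sketch below is an illustration of the standard Yao graph definition (cone numbering and tie-breaking are our own conventions), not code from the paper:

```python
import math

def yao_graph(points, k=6):
    """Naive O(k * n^2) Yao graph: connect each point to its nearest
    neighbor in each of k equal-angle cones around it. This mirrors the
    quadratic approach used by popular packages; the paper's contribution
    is an O(n log n) sweepline construction."""
    edges = set()
    for i, (px, py) in enumerate(points):
        nearest = {}  # cone index -> (distance, point index)
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = int(angle / (2 * math.pi / k))
            d = math.hypot(qx - px, qy - py)
            if cone not in nearest or d < nearest[cone][0]:
                nearest[cone] = (d, j)
        for d, j in nearest.values():
            edges.add((i, j))
    return edges
```

For three collinear points, each endpoint connects only to the middle point, since the farther point lies in the same cone but at greater distance.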
Citations: 1
Partitioning the Bags of a Tree Decomposition Into Cliques
Pub Date : 2023-02-17 DOI: 10.48550/arXiv.2302.08870
Thomas Bläsius, Maximilian Katzmann, Marcus Wilhelm
We consider a variant of treewidth that we call clique-partitioned treewidth, in which each bag is partitioned into cliques. This is motivated by the recent development of FPT algorithms based on similar parameters for various problems. With this paper, we take a first step towards computing clique-partitioned tree decompositions. Our focus lies on the subproblem of computing clique partitions, i.e., for each bag of a given tree decomposition, we compute an optimal partition of the induced subgraph into cliques. The goal here is to minimize the product of the clique sizes (plus 1). We show that this problem is NP-hard. We also describe four heuristic approaches as well as an exact branch-and-bound algorithm. Our evaluation shows that the branch-and-bound solver is sufficiently efficient to serve as a good baseline. Moreover, our heuristics yield solutions close to the optimum. As a bonus, our algorithms allow us to compute the first upper bounds for the clique-partitioned treewidth of real-world networks. A comparison to traditional treewidth indicates that clique-partitioned treewidth is a promising parameter for graphs with high clustering.
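A minimal greedy sketch of the subproblem, assuming the objective is the product of (clique size + 1) over all cliques as the abstract suggests. This is one simple heuristic of our own for illustration, not one of the paper's four heuristics:

```python
import math

def greedy_clique_partition(vertices, adj):
    """Greedily grow cliques from the smallest remaining vertex.
    adj maps each vertex to the set of its neighbors.
    Returns the cliques and the cost: product of (|C| + 1)."""
    remaining = set(vertices)
    cliques = []
    while remaining:
        v = min(remaining)          # deterministic seed vertex
        clique = {v}
        remaining.discard(v)
        for u in sorted(remaining):
            # u joins only if adjacent to every current clique member
            if all(u in adj[w] for w in clique):
                clique.add(u)
        remaining -= clique
        cliques.append(clique)
    cost = math.prod(len(c) + 1 for c in cliques)
    return cliques, cost
```

On a triangle plus an isolated vertex this yields the partition {{0,1,2}, {3}} with cost 4 * 2 = 8.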
Citations: 0
Arc-Flags Meet Trip-Based Public Transit Routing
Pub Date : 2023-02-14 DOI: 10.48550/arXiv.2302.07168
Ernestine Großmann, J. Sauer, Christian Schulz, Patrick Steil
We present Arc-Flag TB, a journey planning algorithm for public transit networks which combines Trip-Based Public Transit Routing (TB) with the Arc-Flags speedup technique. Compared to previous attempts to apply Arc-Flags to public transit networks, which saw limited success, our approach uses stronger pruning rules to reduce the search space. Our experiments show that Arc-Flag TB achieves a speedup of up to two orders of magnitude over TB, offering query times of less than a millisecond even on large countrywide networks. Compared to the state-of-the-art speedup technique Trip-Based Public Transit Routing Using Condensed Search Trees (TB-CST), our algorithm achieves similar query times but requires significantly less additional memory. Other state-of-the-art algorithms which achieve even faster query times, e.g., Public Transit Labeling, require enormous amounts of memory. In contrast, Arc-Flag TB offers a tradeoff between query performance and memory usage because the number of regions in the network partition required by our algorithm is a configurable parameter. We also identify an issue in the transfer precomputation of TB that affects both TB-CST and Arc-Flag TB, leading to incorrect answers for some queries; this had not previously been recognized by the author of TB-CST. We discuss how this issue can be resolved in future work. Currently, Arc-Flag TB answers 1-6% of queries incorrectly, compared to over 20% for TB-CST on some networks.
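The core Arc-Flags idea is to precompute, for every arc, one bit per region indicating whether the arc lies on a shortest path into that region; a query then skips unflagged arcs. The sketch below shows that pruning rule on a plain weighted graph with Dijkstra's algorithm. It illustrates the generic speedup technique only, not the transit-specific Arc-Flag TB algorithm; all names and the flag layout are our own:

```python
import heapq

def dijkstra_arcflags(graph, flags, region_of, s, t):
    """Shortest s-t distance, relaxing arc (u, v) only if its
    precomputed flag for the target's region is set.
    graph: node -> list of (neighbor, weight)
    flags: (u, v) -> list of booleans indexed by region
    region_of: node -> region index"""
    target_region = region_of[t]
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if not flags[(u, v)][target_region]:
                continue                    # pruned by arc flag
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

Correctly computed flags never prune an arc needed for a shortest path, so the pruned search returns exact distances while scanning fewer arcs.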
Citations: 1
Maximum Coverage in Sublinear Space, Faster
Pub Date : 2023-02-13 DOI: 10.48550/arXiv.2302.06137
Stephen Jaud, Anthony Wirth, F. Choudhury
Given a collection of $m$ sets from a universe $\mathcal{U}$, the Maximum Set Coverage problem consists of finding $k$ sets whose union has the largest cardinality. This problem is NP-hard, but the solution can be approximated by a polynomial-time algorithm up to a factor of $1-1/e$. However, this algorithm does not scale well with the input size. In a streaming context, practical high-quality solutions are found, but with space complexity that scales linearly with the size of the universe $|\mathcal{U}|$. However, one randomized streaming algorithm has been shown to produce a $1-1/e-\varepsilon$ approximation of the optimal solution with a space complexity that scales only poly-logarithmically with $m$ and $|\mathcal{U}|$. To achieve such a low space complexity, the authors used a technique called subsampling, based on hash functions of limited independence. This article focuses on this sublinear-space algorithm and introduces methods to reduce the time cost of subsampling. We first show how to accelerate it by several orders of magnitude without altering the space complexity, number of passes, or approximation quality of the original algorithm. Secondly, we derive a new lower bound on the probability of producing a $1-1/e-\varepsilon$ approximation using only pairwise independence: $1-\tfrac{4}{ck\log m}$, compared to the original $1-\tfrac{2e}{m^{ck/6}}$. Although the theoretical approximation guarantees are weaker, for large streams our algorithm performs well in practice and presents the best time-space-performance trade-off for maximum coverage in streams.
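The offline baseline that the streaming algorithms approximate is the classic greedy $(1-1/e)$-approximation: repeatedly pick the set covering the most yet-uncovered elements. A minimal sketch (this is the textbook algorithm, not the paper's subsampling method):

```python
def greedy_max_coverage(sets, k):
    """Classic greedy for Maximum Set Coverage: k rounds, each picking
    the set with the largest marginal coverage. Guarantees coverage at
    least (1 - 1/e) times the optimum."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break                      # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

Each round costs a pass over all $m$ sets, which is exactly the poor input-size scaling the streaming approaches address.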
Citations: 1
FREIGHT: Fast Streaming Hypergraph Partitioning
Pub Date : 2023-02-13 DOI: 10.48550/arXiv.2302.06259
K. Eyubov, Marcelo Fonseca Faraj, Christian Schulz
Partitioning the vertices of a (hyper)graph into k roughly balanced blocks such that few (hyper)edges run between blocks is a key problem for large-scale distributed processing. A current trend for partitioning huge (hyper)graphs using low computational resources is the use of streaming algorithms. In this work, we propose FREIGHT: a Fast stREamInG Hypergraph parTitioning algorithm which is an adaptation of the widely known graph-based algorithm Fennel. By using an efficient data structure, we make the overall running time of FREIGHT linearly dependent on the pin count of the hypergraph and its memory consumption linearly dependent on the numbers of nets and blocks. The results of our extensive experimentation showcase the promising performance of FREIGHT as a highly efficient and effective solution for streaming hypergraph partitioning. Our algorithm demonstrates running time competitive with the Hashing algorithm, with a difference of at most a factor of four observed on three fourths of the instances. Significantly, our findings highlight the superiority of FREIGHT over all existing (buffered) streaming algorithms and even the in-memory algorithm HYPE, with respect to both cut-net and connectivity measures. This indicates that our proposed algorithm is a promising hypergraph partitioning tool to tackle the challenge posed by large-scale and dynamic data processing.
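Fennel, the graph algorithm FREIGHT adapts, assigns each arriving vertex to the block that maximizes (neighbors already in the block) minus a convex balance penalty. The one-pass sketch below uses a simplified penalty term; the penalty form and parameter values are illustrative, not the paper's exact formulation:

```python
def stream_partition(stream, neighbors, k, alpha=0.5, gamma=1.5):
    """Fennel-style one-pass streaming vertex partitioning.
    stream: vertex arrival order; neighbors: vertex -> iterable of
    neighbors. Score = gain from co-located neighbors minus a
    superlinear penalty on the block's current size."""
    blocks = [set() for _ in range(k)]
    assign = {}
    for v in stream:
        def score(i):
            gain = sum(1 for u in neighbors[v] if assign.get(u) == i)
            return gain - alpha * len(blocks[i]) ** (gamma - 1)
        best = max(range(k), key=score)
        blocks[best].add(v)
        assign[v] = best
    return assign
```

The gain term pulls a vertex toward its neighbors (reducing cut edges) while the size penalty keeps blocks roughly balanced, so two disjoint edges end up in two different blocks.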
Citations: 0
Greedy Heuristics for Judicious Hypergraph Partitioning
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.SEA.2023.17
Noah Wahl, Lars Gottesbüren
We investigate the efficacy of greedy heuristics for the judicious hypergraph partitioning problem. In contrast to balanced partitioning problems, the goal of judicious hypergraph partitioning is to minimize the maximum load over all blocks of the partition. We devise strategies for initial partitioning and FM-style post-processing. In combination with a multilevel scheme, they beat the previous state-of-the-art solver, which is based on greedy set covers, in both running time (by two to four orders of magnitude) and solution quality (by 18% to 45%). A major challenge that makes local greedy approaches difficult to use for this problem is the high frequency of zero-gain moves, for which we present and evaluate counteracting mechanisms.
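The objective can be illustrated with a toy greedy that places each vertex where it keeps the maximum block load smallest, taking a block's load to be the number of hyperedges incident to it. This is a minimal sketch of the problem statement under that reading of "load", not the paper's multilevel algorithm:

```python
def judicious_greedy(vertices, nets, k):
    """Assign each vertex to the block minimizing the resulting maximum
    load. nets: list of hyperedges, each a set of vertices."""
    blocks = [set() for _ in range(k)]

    def load(b):
        return sum(1 for net in nets if net & b)

    assign = {}
    for v in vertices:
        best, best_max = None, None
        for i in range(k):
            blocks[i].add(v)                      # try placing v here
            m = max(load(b) for b in blocks)
            blocks[i].remove(v)
            if best_max is None or m < best_max:
                best, best_max = i, m
        blocks[best].add(v)
        assign[v] = best
    return assign, max(load(b) for b in blocks)
```

Note that when every candidate block yields the same maximum load, the move has zero gain; the high frequency of such ties is exactly the difficulty the abstract highlights for local greedy approaches.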
Citations: 1
Noisy Sorting Without Searching: Data Oblivious Sorting with Comparison Errors
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.SEA.2023.8
Ramtin Afshar, M. Dillencourt, M. Goodrich, Evrim Ozel
We provide and study several algorithms for sorting an array of n comparable distinct elements subject to probabilistic comparison errors. In this model, the comparison of two elements returns the wrong answer with a fixed probability $p_e < 1/2$ and otherwise returns the correct answer. The dislocation of an element is the distance between its position in a given (current or output) array and its position in a sorted array. There are various algorithms that can be utilized for sorting or near-sorting elements subject to probabilistic comparison errors, but these algorithms are not data oblivious because they all make heavy use of noisy binary searching. In this paper, we provide new methods for sorting with comparison errors that are data oblivious while avoiding the use of noisy binary search methods. In addition, we experimentally compare our algorithms with other sorting algorithms.
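"Data oblivious" means the compare-exchange schedule is fixed in advance and never depends on comparison outcomes. A toy baseline illustrating both the error model and obliviousness (this is odd-even transposition sort, not any of the paper's algorithms):

```python
import random

def noisy_compare(a, b, p, rng):
    """Return whether a < b, flipped with probability p (the model's p_e)."""
    truth = a < b
    return (not truth) if rng.random() < p else truth

def oblivious_noisy_sort(arr, p=0.05, rounds=None, seed=0):
    """Odd-even transposition sort with a noisy comparator. The schedule
    of compare-exchanges depends only on n and the round number, so the
    algorithm is data oblivious. With p = 0 it sorts exactly in n rounds;
    with p > 0 the output is a permutation with (hopefully) small
    dislocation."""
    rng = random.Random(seed)
    a = list(arr)
    n = len(a)
    rounds = rounds if rounds is not None else n
    for r in range(rounds):
        for i in range(r % 2, n - 1, 2):
            if noisy_compare(a[i + 1], a[i], p, rng):  # is a[i+1] < a[i]?
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

With an error-free comparator the fixed schedule fully sorts the input; with errors it still performs exactly the same compare-exchange positions, which is the property noisy binary search lacks.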
Citations: 1
A Graph-Theoretic Formulation of Exploratory Blockmodeling
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.SEA.2023.14
Alexander Bille, Niels Grüttemeier, Christian Komusiewicz, Nils Morawietz
We present a new, simple graph-theoretic formulation of the exploratory blockmodeling problem on undirected and unweighted one-mode networks. Our formulation takes as input the network G and the maximum number t of blocks for the solution model. The task is to find a minimum-size set of edge insertions and deletions that transform the input graph G into a graph G′ with at most t neighborhood classes. Herein, a neighborhood class is a maximal set of vertices with the same neighborhood. The neighborhood classes of G′ directly give the blocks and block interactions of the computed blockmodel. We analyze the classic and parameterized complexity of the exploratory blockmodeling problem, provide a branch-and-bound algorithm, an ILP formulation, and several heuristics. Finally, we compare our exact algorithms to previous ILP-based approaches and show that the new algorithms are faster for t ≥ 4.
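Computing the neighborhood classes that define the formulation is straightforward: group vertices by identical neighborhoods. A minimal sketch (grouping convention is ours; it follows the definition in the abstract):

```python
def neighborhood_classes(adj):
    """Group vertices by identical neighborhoods. adj maps each vertex
    to the set of its neighbors. A graph with at most t such classes
    directly yields a blockmodel: classes become blocks."""
    classes = {}
    for v, nbrs in adj.items():
        key = frozenset(nbrs)
        classes.setdefault(key, set()).add(v)
    return list(classes.values())
```

For the complete bipartite graph K(2,2), the two sides of the bipartition are exactly the two neighborhood classes.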
Citations: 1
CompDP: A Framework for Simultaneous Subgraph Counting Under Connectivity Constraints
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.SEA.2023.11
Kengo Nakamura, Masaaki Nishino, Norihito Yasuda, S. Minato
The subgraph counting problem computes the number of subgraphs of a given graph that satisfy some constraints. Among various constraints imposed on a graph, those regarding the connectivity of vertices, such as “these two vertices must be connected,” have great importance since they are indispensable for determining various graph substructures, e.g., paths, Steiner trees, and rooted spanning forests. In this view, the subgraph counting problem under connectivity constraints is also important because counting such substructures often corresponds to measuring the importance of a vertex in network infrastructures. However, we must solve the subgraph counting problem multiple times to compute such an importance measure for every vertex. Conventionally, these instances are solved separately by constructing decision diagrams such as BDDs and ZDDs for each problem. However, even solving a single subgraph counting instance is a computationally hard task, preventing us from solving it multiple times in a reasonable time. In this paper, we propose a dynamic programming framework that simultaneously counts subgraphs for every vertex by focusing on similar connectivity constraints. Experimental results show that the proposed method solved multiple subgraph counting problems about 10–20 times faster than the existing approach for many problem settings.
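A brute-force baseline makes the task concrete: for every vertex v, count the edge subsets in which v is connected to a fixed root s. One enumeration serves all vertices at once (the "simultaneous" aspect), though exponentially slower than the decision-diagram DP the paper proposes; this sketch is ours, not the CompDP framework:

```python
from itertools import combinations

def count_connecting_subgraphs(n, edges, s):
    """For each vertex v in 0..n-1, count subsets of `edges` whose
    induced subgraph connects v to the root s."""
    counts = [0] * n
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            # union-find over the chosen edges
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]  # path halving
                    x = parent[x]
                return x
            for u, v in subset:
                parent[find(u)] = find(v)
            root = find(s)
            for v in range(n):
                if find(v) == root:
                    counts[v] += 1
    return counts
```

On the path 0-1-2 with root 0, the four edge subsets give counts [4, 2, 1]: vertex 0 is trivially connected in all of them, vertex 1 in two, and vertex 2 only when both edges are present.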
Bulletin of the Society of Sea Water Science, Japan, pages 11:1-11:20 (2023).
Citations: 0
Proxying Betweenness Centrality Rankings in Temporal Networks
Pub Date : 2023-01-01 DOI: 10.4230/LIPIcs.SEA.2023.6
R. Becker, P. Crescenzi, A. Cruciani, Bojana Kodric
Identifying influential nodes in a network is arguably one of the most important tasks in graph mining and network analysis. A large variety of centrality measures, all aiming at correctly quantifying a node's importance in the network, have been formulated in the literature. One of the most cited is the betweenness centrality, formally introduced by Freeman (Sociometry, 1977). On the other hand, researchers have recently been very interested in capturing the dynamic nature of real-world networks by studying temporal graphs, rather than static ones. Accordingly, centrality measures, including the betweenness centrality, have also been extended to temporal graphs. Buß et al. (KDD, 2020) gave algorithms to compute various notions of temporal betweenness centrality, including the perhaps most natural one – shortest temporal betweenness. Their algorithm computes centrality values of all nodes in time O(n³T²), where n is the size of the network and T is the total number of time steps. For real-world networks, which easily contain tens of thousands of nodes, this complexity becomes prohibitive. Thus, it is reasonable to consider proxies for shortest temporal betweenness rankings that are more efficiently computed and, therefore, allow for measuring the relative importance of nodes in very large temporal graphs. In this paper, we compare several such proxies on a diverse set of real-world networks. These proxies can be divided into global and local proxies. The considered global proxies include the exact algorithm for static betweenness (computed on the underlying graph), the prefix foremost temporal betweenness of Buß et al., which is more efficiently computable than shortest temporal betweenness, and the recently introduced approximation approach of Santoro and Sarpe (WWW, 2022). As all of these global proxies are still expensive to compute on very large networks, we also turn to more efficiently computable local proxies. Here, we consider temporal versions of the ego-betweenness in the sense of Everett and Borgatti (Social Networks, 2005), standard degree notions, and a novel temporal degree notion termed the pass-through degree, which we introduce in this paper and consider to be one of our main contributions. We show that the pass-through degree, which measures the number of pairs of neighbors of a node that are temporally connected through it, can be computed in nearly linear time for all nodes in the network, and we experimentally observe that it is surprisingly competitive as a proxy for shortest temporal betweenness.
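Of the global proxies listed in the abstract, exact static betweenness on the underlying graph is the simplest to reproduce. The sketch below implements Brandes' classic algorithm for unweighted graphs — standard textbook material, not code from the paper — deliberately dropping all temporal information, which is exactly what makes it a proxy.

```python
from collections import deque

def betweenness(n, edges):
    """Brandes' exact betweenness for an unweighted, undirected graph on
    vertices 0..n-1. Each ordered endpoint pair is counted, so undirected
    scores come out twice the unordered-pair convention."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    bc = [0.0] * n
    for s in range(n):
        # BFS from s: shortest-path counts (sigma) and predecessor lists
        dist = [-1] * n
        sigma = [0] * n
        preds = [[] for _ in range(n)]
        dist[s], sigma[s] = 0, 1
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # accumulate pair dependencies in reverse BFS order
        delta = [0.0] * n
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Path 0-1-2: only the middle vertex lies between the other two.
print(betweenness(3, [(0, 1), (1, 2)]))  # → [0.0, 2.0, 0.0]
```

Ranking nodes by these static scores and comparing the ranking against shortest temporal betweenness is the kind of evaluation the paper performs across its proxies.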
Bulletin of the Society of Sea Water Science, Japan, pages 6:1-6:22 (2023).
Citations: 0