Nearly Optimal Separation Between Partially And Fully Retroactive Data Structures
Pub Date: 2018-04-01 | DOI: 10.4230/LIPIcs.SWAT.2018.33
Lijie Chen, E. Demaine, Yuzhou Gu, V. V. Williams, Yinzhan Xu, Yuancheng Yu
Since the introduction of retroactive data structures at SODA 2004, a major unsolved problem has been to bound the gap between the best partially retroactive data structure (where changes can be made to the past, but only the present can be queried) and the best fully retroactive data structure (where the past can also be queried) for any problem. It was proved in 2004 that any partially retroactive data structure with operation time $T(n,m)$ can be transformed into a fully retroactive data structure with operation time $O(\sqrt{m} \cdot T(n,m))$, where $n$ is the size of the data structure and $m$ is the number of operations in the timeline [Demaine 2004], but it has been open for 14 years whether such a gap is necessary.
In this paper, we prove nearly matching upper and lower bounds on this gap for all $n$ and $m$. We improve the upper bound for $n \ll \sqrt{m}$ by showing a new transformation with multiplicative overhead $n \log m$. We then prove a lower bound of $\min\{n \log m, \sqrt{m}\}^{1-o(1)}$ assuming any of the following conjectures:
- Conjecture I: Circuit SAT requires $2^{n - o(n)}$ time on $n$-input circuits of size $2^{o(n)}$. (Far weaker than the well-believed SETH conjecture, which asserts that CNF SAT with $n$ variables and $O(n)$ clauses already requires $2^{n-o(n)}$ time.)
- Conjecture II: Online $(\min,+)$ product between an integer $n \times n$ matrix and $n$ vectors requires $n^{3 - o(1)}$ time.
- Conjecture III (3-SUM Conjecture): Given three sets $A, B, C$ of integers, each of size $n$, deciding whether there exist $a \in A$, $b \in B$, $c \in C$ such that $a + b + c = 0$ requires $n^{2 - o(1)}$ time.
Our lower bound construction illustrates an interesting power of fully retroactive queries: they can be used to quickly solve batched pair evaluation. We believe this technique can prove useful for other data structure lower bounds, especially dynamic ones.
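To make the partial/full distinction concrete, here is a minimal Python sketch of a retroactive "running sum" timeline. It is a toy illustration only (the class name and interface are invented for this example), not the paper's construction: a partially retroactive query sees only the present state, while a fully retroactive query may ask about any past time, which this toy answers by simply replaying the prefix of the timeline.

```python
# Toy illustration of retroactivity for a running-sum timeline; the class name
# and interface are invented. The O(sqrt(m)) transformation of [Demaine 2004]
# and the new n*log(m) transformation are far more clever than replaying.
import bisect

class RetroactiveSum:
    def __init__(self):
        self.ops = []                       # sorted list of (time, value) insertions

    def insert(self, t, v):                 # retroactive update: add value v at time t
        bisect.insort(self.ops, (t, v))

    def query_present(self):                # partially retroactive query (present only)
        return sum(v for _, v in self.ops)

    def query_at(self, t):                  # fully retroactive query: replay prefix <= t
        return sum(v for time, v in self.ops if time <= t)

if __name__ == "__main__":
    ds = RetroactiveSum()
    ds.insert(10, 5)
    ds.insert(30, 7)
    ds.insert(20, -2)                       # edit made "in the past"
    print(ds.query_present())               # 10
    print(ds.query_at(25))                  # 3 (only the operations at times 10 and 20)
```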
Boundary Labeling for Rectangular Diagrams
Pub Date: 2018-03-28 | DOI: 10.4230/LIPIcs.SWAT.2018.12
P. Bose, Paz Carmi, J. Keil, S. Mehrabi, Debajyoti Mondal
Given a set of $n$ points (sites) inside a rectangle $R$ and $n$ points (label locations or ports) on its boundary, a boundary labeling problem seeks ways of connecting every site to a distinct port while achieving different labeling aesthetics. We examine the scenario when the connecting lines (leaders) are drawn as axis-aligned polylines with few bends, every leader lies strictly inside $R$, no two leaders cross, and the sum of the lengths of all the leaders is minimized. In a $k$-sided boundary labeling problem, where $1 \le k \le 4$, the label locations are located on the $k$ consecutive sides of $R$.
In this paper, we develop an $O(n^3 \log n)$-time algorithm for 2-sided boundary labeling, where the leaders are restricted to have one bend. This improves the previously best known $O(n^8 \log n)$-time algorithm of Kindermann et al. (Algorithmica, 76(1):225-258, 2016). We show the problem is polynomial-time solvable in more general settings such as when the ports are located on more than two sides of $R$, in the presence of obstacles, and even when the objective is to minimize the total number of bends. Our results improve the previous algorithms on boundary labeling with obstacles, as well as provide the first polynomial-time algorithms for minimizing the total leader length and number of bends for 3- and 4-sided boundary labeling. These results settle a number of open questions on the boundary labeling problems (Wolff, Handbook of Graph Drawing, Chapter 23, Table 23.1, 2014).
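As a sanity check on the objective (not the paper's algorithm), the sketch below brute-forces the minimum total leader length over all site-port matchings on a tiny instance, using the fact that a one-bend axis-aligned leader from a site to a port has length equal to their L1 distance. It ignores the non-crossing constraint that makes the real problem interesting, and all names are hypothetical.

```python
# Brute-force the "minimize total leader length" objective on a toy instance.
# One-bend axis-aligned leader length == Manhattan (L1) distance; crossing
# constraints are ignored, so this is only a definition-level illustration.
from itertools import permutations

def min_total_leader_length(sites, ports):
    best = float("inf")
    for perm in permutations(range(len(ports))):
        total = sum(abs(sx - ports[j][0]) + abs(sy - ports[j][1])
                    for (sx, sy), j in zip(sites, perm))
        best = min(best, total)
    return best

if __name__ == "__main__":
    sites = [(2, 2), (3, 5)]                # points inside the rectangle R
    ports = [(0, 1), (0, 6)]                # label positions on the left side of R
    print(min_total_leader_length(sites, ports))   # 7: (2,2)->(0,1) and (3,5)->(0,6)
```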
Lower Bounds on Sparse Spanners, Emulators, and Diameter-reducing shortcuts
Pub Date: 2018-02-01 | DOI: 10.4230/LIPIcs.SWAT.2018.26
Shang-En Huang, S. Pettie
We prove better lower bounds on additive spanners and emulators, which are lossy compression schemes for undirected graphs, as well as lower bounds on shortcut sets, which reduce the diameter of directed graphs. We show that any $O(n)$-size shortcut set cannot bring the diameter below $\Omega(n^{1/6})$, and that any $O(m)$-size shortcut set cannot bring it below $\Omega(n^{1/11})$. These improve Hesse's [Hesse03] lower bound of $\Omega(n^{1/17})$. By combining these constructions with Abboud and Bodwin's [AbboudB17] edge-splitting technique, we get additive stretch lower bounds of $+\Omega(n^{1/13})$ for $O(n)$-size spanners and $+\Omega(n^{1/18})$ for $O(n)$-size emulators. These improve Abboud and Bodwin's $+\Omega(n^{1/22})$ lower bounds.
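For readers less familiar with the objects involved, the sketch below just checks the definition of an additive $+k$ spanner on small unweighted graphs (a subgraph whose distances exceed the original ones by at most $k$); it has nothing to do with the lower-bound constructions themselves, and the helper names are made up.

```python
# Definition check for an additive +k spanner: dist_H(u,v) <= dist_G(u,v) + k
# for all u, v, where H is a subgraph of G on the same vertices (unweighted).
from collections import deque

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_additive_spanner(G_adj, H_adj, k):
    for s in G_adj:
        dg, dh = bfs_dist(G_adj, s), bfs_dist(H_adj, s)
        if any(dh.get(v, float("inf")) > d + k for v, d in dg.items()):
            return False
    return True

if __name__ == "__main__":
    G = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # triangle
    H = {0: [1], 1: [0, 2], 2: [1]}         # path 0-1-2 (drops edge 0-2)
    print(is_additive_spanner(G, H, 1))     # True: every distance grows by at most 1
```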
Enumerating Vertices of $0/1$-Polyhedra associated with $0/1$-Totally Unimodular Matrices
Pub Date: 2017-07-12 | DOI: 10.4230/LIPIcs.SWAT.2018.18
Khaled M. Elbassioni, K. Makino
We give an incremental polynomial time algorithm for enumerating the vertices of any polyhedron $\mathcal{P}(A,\mathbf{1})=\{x \in \mathbb{R}^n \mid Ax \geq \mathbf{1},\ x \geq \mathbf{0}\}$, when $A$ is a totally unimodular matrix. Our algorithm is based on decomposing the hypergraph transversal problem for unimodular hypergraphs using Seymour's decomposition of totally unimodular matrices, and may be of independent interest.
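To pin down the hypothesis of the theorem, here is a naive, exponential-time total-unimodularity check (every square submatrix has determinant in {-1, 0, 1}). It is unrelated to the enumeration algorithm and only practical for tiny matrices; the function names are invented for this sketch.

```python
# Naive total-unimodularity test: every square submatrix has det in {-1, 0, 1}.
# Exponential time; for illustration only.
from itertools import combinations

def det(M):
    # cofactor expansion along the first row; fine for the tiny matrices used here
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_totally_unimodular(A):
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[i][j] for j in cols] for i in rows]) not in (-1, 0, 1):
                    return False
    return True

if __name__ == "__main__":
    # edge-vertex incidence matrix of a path (a bipartite graph), a classic TU example
    A = [[1, 1, 0],
         [0, 1, 1]]
    print(is_totally_unimodular(A))   # True
```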
An Improved Algorithm for Incremental DFS Tree in Undirected Graphs
Pub Date: 2016-07-01 | DOI: 10.4230/LIPIcs.SWAT.2018.16
Lijie Chen, Ran Duan, Ruosong Wang, Hanrui Zhang, Tianyi Zhang
The depth-first search (DFS) tree is one of the most well-known data structures for designing efficient graph algorithms. Given an undirected graph $G=(V,E)$ with $n$ vertices and $m$ edges, the textbook algorithm takes $O(n+m)$ time to construct a DFS tree. In this paper, we study the problem of maintaining a DFS tree when the graph is undergoing incremental updates. Formally, we show: given an arbitrary online sequence of edge or vertex insertions, there is an algorithm that reports a DFS tree in $O(n)$ worst-case time per operation, and requires $O\left(\min\{m \log n, n^2\}\right)$ preprocessing time.
Our result improves the previous $O(n \log^3 n)$ worst-case update time algorithm by Baswana et al. and the $O(n \log n)$ time by Nakamura and Sadakane, and matches the trivial $\Omega(n)$ lower bound when it is required to explicitly output a DFS tree.
Our result builds on the framework introduced in the breakthrough work by Baswana et al., together with a novel use of a tree-partition lemma by Duan and Zhan, and the celebrated fractional cascading technique by Chazelle and Guibas.
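For contrast with the result above, here is the trivial baseline: recompute a DFS tree from scratch after every insertion, which already costs $O(n+m)$ per update; the paper's contribution is to maintain the tree in $O(n)$ worst-case time per operation. The class and helper names are made up for this sketch.

```python
# Trivial baseline: rebuild the DFS tree after each edge insertion (O(n+m) per
# update). Illustration only; the paper maintains the tree in O(n) worst case.
def dfs_tree(adj, root=0):
    """Parent pointers of a DFS tree of the component containing `root`."""
    parent = {root: None}
    stack = [(root, iter(adj[root]))]
    while stack:
        u, it = stack[-1]
        v = next(it, None)
        if v is None:
            stack.pop()
        elif v not in parent:
            parent[v] = u
            stack.append((v, iter(adj[v])))
    return parent

class NaiveIncrementalDFS:
    def __init__(self, n):
        self.adj = {v: [] for v in range(n)}

    def insert_edge(self, u, v):
        self.adj[u].append(v)
        self.adj[v].append(u)
        return dfs_tree(self.adj)           # recompute from scratch

if __name__ == "__main__":
    g = NaiveIncrementalDFS(4)
    for e in [(0, 1), (1, 2), (0, 3), (2, 3)]:
        tree = g.insert_edge(*e)
    print(tree)                             # {0: None, 1: 0, 2: 1, 3: 2}
```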
A Linear Kernel for Finding Square Roots of Almost Planar Graphs
Pub Date: 2016-06-24 | DOI: 10.4230/LIPIcs.SWAT.2016.4
P. Golovach, D. Kratsch, D. Paulusma, A. Stewart
A graph H is a square root of a graph G if G can be obtained from H by the addition of edges between any two vertices in H that are of distance 2 from each other. The Square Root problem is that of deciding whether a given graph admits a square root. We consider this problem for planar graphs in the context of the "distance from triviality" framework. For an integer k, a planar+kv graph (or k-apex graph) is a graph that can be made planar by the removal of at most k vertices. We prove that a generalization of Square Root, in which some edges are prescribed to be either in or out of any solution, has a kernel of size O(k) for planar+kv graphs, when parameterized by k. Our result is based on a new edge reduction rule which, as we shall also show, has a wider applicability for the Square Root problem.
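The definition is easy to make executable: H is a square root of G exactly when G equals the square of H. The sketch below (hypothetical helper names, nothing to do with the kernelization itself) verifies a candidate root; finding one is the hard part the paper addresses.

```python
# Check whether H is a square root of G, i.e. G = H^2, where H^2 adds an edge
# between every pair of vertices at distance exactly 2 in H.
def graph_square(n, H_edges):
    adj = {v: set() for v in range(n)}
    for u, v in H_edges:
        adj[u].add(v)
        adj[v].add(u)
    sq = {frozenset(e) for e in H_edges}
    for u in range(n):
        for w in adj[u]:
            for v in adj[w]:
                if v != u:
                    sq.add(frozenset((u, v)))
    return sq

def is_square_root(n, H_edges, G_edges):
    return graph_square(n, H_edges) == {frozenset(e) for e in G_edges}

if __name__ == "__main__":
    H = [(0, 1), (1, 2), (2, 3)]            # path on 4 vertices
    G = H + [(0, 2), (1, 3)]                # its square adds the distance-2 pairs
    print(is_square_root(4, H, G))          # True
```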
Linear-Time Recognition of Map Graphs with Outerplanar Witness
Pub Date: 2016-06-21 | DOI: 10.4230/LIPIcs.SWAT.2016.5
Matthias Mnich, Ignaz Rutter, Jens M. Schmidt
Map graphs generalize planar graphs and were introduced by Chen, Grigni and Papadimitriou [STOC 1998, J.ACM 2002]. They showed that the problem of recognizing map graphs is in NP by proving the existence of a planar witness graph W. Shortly after, Thorup [FOCS 1998] published a polynomial-time recognition algorithm for map graphs. However, the run time of this algorithm is estimated to be Omega(n^{120}) for n-vertex graphs, and a full description of its details remains unpublished. We give a new and purely combinatorial algorithm that decides whether a graph G is a map graph having an outerplanar witness W. This is a step towards a first combinatorial recognition algorithm for general map graphs. The algorithm runs in time and space O(n+m). In contrast to Thorup's approach, it computes the witness graph W in the affirmative case.
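The witness notion can be stated compactly: G is a map graph when it is the "half-square" of a planar bipartite witness W on V(G) plus a set of point vertices, i.e. two vertices of G are adjacent exactly when they share a neighbour in W. The sketch below (hypothetical helper names) only checks this half-square condition on toy inputs; it does not test planarity or outerplanarity of the witness, which is where the actual algorithmic work lies.

```python
# Check the half-square condition of a bipartite witness: u,v in V(G) are
# adjacent in G iff they have a common "point" neighbour in the witness W.
# Planarity/outerplanarity of W is not checked here.
from itertools import combinations

def half_square(points, witness_edges):
    """Edges induced on the vertex side by the bipartite witness."""
    nbrs = {p: set() for p in points}
    for v, p in witness_edges:              # witness edges written as (vertex, point)
        nbrs[p].add(v)
    return {frozenset((u, v))
            for p in points
            for u, v in combinations(sorted(nbrs[p]), 2)}

def satisfies_half_square(G_edges, points, witness_edges):
    return half_square(points, witness_edges) == {frozenset(e) for e in G_edges}

if __name__ == "__main__":
    # K3 as a map graph: three "countries" a, b, c all touching one point p.
    G = [("a", "b"), ("b", "c"), ("a", "c")]
    W = [("a", "p"), ("b", "p"), ("c", "p")]
    print(satisfies_half_square(G, ["p"], W))   # True
```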
The $p$-Center Problem in Tree Networks Revisited
Pub Date: 2016-04-01 | DOI: 10.4230/LIPIcs.SWAT.2016.6
Aritra Banik, B. Bhattacharya, Sandip Das, T. Kameda, Zhao Song
We present two improved algorithms for the weighted discrete $p$-center problem for tree networks with $n$ vertices. One of our proposed algorithms runs in $O(n \log n + p \log^2 n \log(n/p))$ time. For all values of $p$, our algorithm thus runs as fast as or faster than the most efficient $O(n \log^2 n)$ time algorithm obtained by applying Cole's speed-up technique [cole1987] to the algorithm due to Megiddo and Tamir [megiddo1983], which has remained unchallenged for nearly 30 years. Our other algorithm, which is more practical, runs in $O(n \log n + p^2 \log^2(n/p))$ time, and when $p=O(\sqrt{n})$ it is faster than Megiddo and Tamir's $O(n \log^2 n \log\log n)$ time algorithm [megiddo1983].
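To fix the objective being optimized, the following toy brute force evaluates every size-$p$ center set on a small weighted tree and returns the best achievable weighted radius; it is exponential in $p$ and unrelated to the parametric-search machinery behind the bounds above, and the function names are hypothetical.

```python
# Brute-force weighted discrete p-center on a small tree: minimize over all
# size-p center sets C the value max_v w(v) * dist(v, C). Illustration only.
from collections import deque
from itertools import combinations

def tree_dists(adj, s):
    # BFS suffices on a tree: each vertex is reached along its unique path
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + w
                q.append(v)
    return dist

def p_center(adj, weights, p):
    V = list(adj)
    d = {v: tree_dists(adj, v) for v in V}
    return min(max(weights[v] * min(d[c][v] for c in centers) for v in V)
               for centers in combinations(V, p))

if __name__ == "__main__":
    # star with center 0 and leaves 1..3, unit edge lengths and vertex weights
    adj = {0: [(1, 1), (2, 1), (3, 1)], 1: [(0, 1)], 2: [(0, 1)], 3: [(0, 1)]}
    w = {v: 1 for v in adj}
    print(p_center(adj, w, 1))              # 1 (place the single center at vertex 0)
```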
Efficient Summing over Sliding Windows
Pub Date: 2016-04-01 | DOI: 10.4230/LIPIcs.SWAT.2016
R. Ben-Basat, Gil Einziger, R. Friedman, Yaron Kassner
This paper considers the problem of maintaining statistic aggregates over the last $W$ elements of a data stream. First, the problem of counting the number of 1's in the last $W$ bits of a binary stream is considered. A lower bound of $\Omega(1/\epsilon + \log W)$ memory bits for $W\epsilon$-additive approximations is derived. This is followed by an algorithm whose memory consumption is $O(1/\epsilon + \log W)$ bits, indicating that the algorithm is optimal and that the bound is tight. Next, the more general problem of maintaining a sum of the last $W$ integers, each in the range of $\{0,1,\ldots,R\}$, is addressed. The paper shows that approximating the sum within an additive error of $RW\epsilon$ can also be done using $\Theta(1/\epsilon + \log W)$ bits for $\epsilon = \Omega(1/W)$. For $\epsilon = o(1/W)$, we present a succinct algorithm which uses $B(1 + o(1))$ bits, where $B = \Theta(W \log \frac{1}{W\epsilon})$ is the derived lower bound. We show that all lower bounds generalize to randomized algorithms as well. All algorithms process new elements and answer queries in $O(1)$ worst-case time.
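A simple way to see the accuracy/memory trade-off (this is a generic block-counting sketch, not the paper's optimal algorithm) is to bucket the window into blocks of about $W\epsilon$ bits and only remember per-block counts; the estimate can then miss at most one block's worth of bits, giving a $W\epsilon$-additive answer with roughly $(1/\epsilon)\log(W\epsilon)$ bits of counters, which is still worse than the tight $\Theta(1/\epsilon + \log W)$ bound above. All names below are invented for this sketch.

```python
# Block-counting sketch for the number of 1s in the last W bits: additive
# error at most ceil(W*eps) - 1, using about 1/eps small counters.
# A generic illustration, not the algorithm from the paper.
import math
from collections import deque

class ApproxWindowBitSum:
    def __init__(self, W, eps):
        self.W = W
        self.b = max(1, math.ceil(W * eps))   # block size
        self.blocks = deque()                 # 1-counts of completed blocks
        self.cur_count = 0                    # 1s in the current block
        self.cur_len = 0                      # bits seen in the current block

    def push(self, bit):
        self.cur_count += bit
        self.cur_len += 1
        if self.cur_len == self.b:            # close the current block
            self.blocks.append(self.cur_count)
            self.cur_count = self.cur_len = 0
        while len(self.blocks) * self.b + self.cur_len > self.W:
            self.blocks.popleft()             # expire blocks outside the window

    def estimate(self):                       # never overestimates; error < block size
        return sum(self.blocks) + self.cur_count

if __name__ == "__main__":
    sketch = ApproxWindowBitSum(W=16, eps=0.25)
    stream = [1, 0, 1, 1] * 8                 # 32 bits; the last 16 contain twelve 1s
    for bit in stream:
        sketch.push(bit)
    print(sketch.estimate())                  # 12 here; in general within W*eps of the truth
```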
Approximation algorithms for node-weighted prize-collecting Steiner tree problems on planar graphs
Pub Date: 2016-01-11 | DOI: 10.4230/LIPIcs.SWAT.2016.2
J. Byrka, M. Lewandowski, Carsten Moldenhauer
We study the prize-collecting version of the Node-weighted Steiner Tree problem (NWPCST) restricted to planar graphs. We give a new primal-dual Lagrangian-multiplier-preserving (LMP) 3-approximation algorithm for planar NWPCST. We then show a ($2.88 + \epsilon$)-approximation which establishes a new best approximation guarantee for planar NWPCST. This is done by combining our LMP algorithm with a threshold rounding technique and utilizing the 2.4-approximation of Berman and Yaroslavtsev for the version without penalties. We also give a primal-dual 4-approximation algorithm for the more general forest version using techniques introduced by Hajiaghayi and Jain.
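To make the objective concrete, one common rooted formulation asks for a connected vertex set S containing the root that minimizes the node weights of S plus the penalties of the terminals left outside S. The brute force below is exponential, has nothing to do with the primal-dual algorithms above, and uses hypothetical names; it only fixes the problem definition on tiny instances.

```python
# Exponential brute force for a rooted node-weighted prize-collecting Steiner
# tree objective: cost(S) = sum of node weights in S + penalties of terminals
# not covered by S, over connected S containing the root. Illustration only.
from itertools import combinations

def is_connected(S, edges):
    S = set(S)
    start = next(iter(S))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y in S and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return seen == S

def brute_force_nwpcst(vertices, edges, node_cost, penalty, root):
    others = [v for v in vertices if v != root]
    best = float("inf")
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            S = {root, *extra}
            if not is_connected(S, edges):
                continue
            cost = sum(node_cost[v] for v in S) + \
                   sum(p for t, p in penalty.items() if t not in S)
            best = min(best, cost)
    return best

if __name__ == "__main__":
    V = ["r", "a", "b"]
    E = [("r", "a"), ("a", "b")]
    cost = {"r": 0, "a": 5, "b": 1}
    pen = {"b": 3}                           # terminal b pays a penalty if skipped
    print(brute_force_nwpcst(V, E, cost, pen, "r"))   # 3: cheaper to pay b's penalty
```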