Pub Date: 2019-07-26 | DOI: 10.4230/LIPIcs.SWAT.2022.23
Michael Elkin, Yuval Gitlitz, Ofer Neiman
Let $G=(V,E,w)$ be a weighted undirected graph with $n$ vertices and $m$ edges, and fix a set of $s$ sources $S \subseteq V$. We study the problem of computing \emph{almost shortest paths} (ASP) for all pairs in $S \times V$ in both the classical centralized and the parallel (PRAM) models of computation. Consider the regime of multiplicative approximation $1+\epsilon$, for an arbitrarily small constant $\epsilon > 0$. In this regime, existing centralized algorithms require $\Omega(\min\{|E|s, n^\omega\})$ time, where $\omega < 2.372$ is the matrix multiplication exponent. Existing PRAM algorithms with polylogarithmic depth (aka time) require work $\Omega(\min\{|E|s, n^\omega\})$. Our centralized algorithm has running time $O((m + ns)n^\rho)$, and its PRAM counterpart has polylogarithmic depth and work $O((m + ns)n^\rho)$, for an arbitrarily small constant $\rho > 0$. For a pair $(s,v) \in S \times V$, it provides a path of length $\hat{d}(s,v)$ that satisfies $\hat{d}(s,v) \le (1+\epsilon)d_G(s,v) + \beta \cdot W(s,v)$, where $W(s,v)$ is the weight of the heaviest edge on some shortest $s$-$v$ path. Hence our additive term depends linearly on a \emph{local} maximum edge weight, as opposed to the global maximum edge weight in previous works. Finally, our $\beta = (1/\rho)^{O(1/\rho)}$. We also extend a centralized algorithm of Dor et al. \cite{DHZ00}. For a parameter $\kappa = 1,2,\ldots$, this algorithm provides, for \emph{unweighted} graphs, a purely additive approximation of $2(\kappa-1)$ for \emph{all pairs shortest paths} (APASP) in time $\tilde{O}(n^{2+1/\kappa})$. Within the same running time, our algorithm for \emph{weighted} graphs provides a purely additive error of $2(\kappa-1)W(u,v)$ for every vertex pair $(u,v) \in \binom{V}{2}$, with $W(u,v)$ defined as above. On the way to these results we devise a suite of novel constructions of spanners, emulators, and hopsets.
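The local quantity $W(s,v)$ in the error bound is simply the heaviest edge on some shortest $s$-$v$ path. As a minimal illustration (not the paper's low-work algorithm), a standard Dijkstra run can record one such witness alongside each distance; the function name and graph encoding below are our own:

```python
import heapq

def dijkstra_with_heaviest_edge(adj, s):
    """Dijkstra from s over adj[u] = [(v, w), ...]; alongside each distance,
    record the heaviest edge weight on the shortest path found, which is a
    valid witness for W(s, v) ("some shortest s-v path")."""
    dist = {s: 0}
    heavy = {s: 0}  # W(s, s) = 0 by convention
    pq = [(0, 0, s)]  # (distance so far, heaviest edge so far, vertex)
    while pq:
        d, h, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heavy[v] = max(h, w)
                heapq.heappush(pq, (nd, heavy[v], v))
    return dist, heavy
```

An approximate distance $\hat{d}(s,v)$ can then be sanity-checked against $(1+\epsilon)d_G(s,v) + \beta \cdot W(s,v)$.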
Title: "Almost Shortest Paths with Near-Additive Error in Weighted Graphs" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2019-07-11 | DOI: 10.4230/LIPIcs.SWAT.2020.35
J. Spoerhase, Sabine Storandt, Johannes Zink
We propose and study generalizations of the well-known problem of polyline simplification. Instead of a single polyline, we are given a set of polylines possibly sharing some line segments and bend points. The simplification of those shared parts has to be consistent among the polylines. We consider two optimization goals: either minimizing the number of line segments or minimizing the number of bend points in the simplification. By reduction from Minimum-Independent-Dominating-Set, we show that both of these optimization problems are NP-hard to approximate within a factor of $n^{1/3 - \varepsilon}$ for any $\varepsilon > 0$, where $n$ is the number of bend points in the polyline bundle. Moreover, we outline that both problems remain NP-hard even if the input is planar. On the positive side, we give a polynomial-size integer linear program and show fixed-parameter tractability in the number of shared bend points.
Title: "Simplification of Polyline Bundles" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2019-06-03 | DOI: 10.4230/LIPIcs.SWAT.2018.6
Hee-Kap Ahn, Eunjin Oh, Lena Schlipf, Fabian Stehn, Darren Strash
We introduce a variant of the watchman route problem, which we call the quickest pair-visibility problem. Given two persons standing at points $s$ and $t$ in a simple polygon $P$ with no holes, we want to minimize the distance they travel in order to see each other in $P$. We solve two variants of this problem, one minimizing the longer distance the two persons travel (min-max) and one minimizing the total travel distance (min-sum), optimally in linear time. We also consider a query version of this problem for the min-max variant. We can preprocess a simple $n$-gon in linear time so that the minimum of the longer distance the two persons travel can be computed in $O(\log^2 n)$ time for any two query positions $s,t$ where the two persons start.
Title: "On Romeo and Juliet Problems: Minimizing Distance-to-Sight" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2019-05-02 | DOI: 10.4230/LIPIcs.SWAT.2018.11
Ahmad Biniaz, A. Maheshwari, M. Smid
We study an old geometric optimization problem in the plane. Given a perfect matching M on a set of n points in the plane, we can transform it to a non-crossing perfect matching by a finite sequence of flip operations. The flip operation removes two crossing edges from M and adds two non-crossing edges. Let f(M) and F(M) denote the minimum and maximum lengths of a flip sequence on M, respectively. It has been proved by Bonnet and Miltzow (2016) that f(M) = O(n^2) and by van Leeuwen and Schoone (1980) that F(M) = O(n^3). We prove that f(M) = O(n Delta), where Delta is the spread of the point set, defined as the ratio between the longest and the shortest pairwise distances. This improves the previous bound for point sets with sublinear spread. For a matching M on n points in convex position we prove that f(M) = n/2 - 1 and F(M) = {n/2 choose 2}; these bounds are tight. Any bound on F(*) carries over to the bichromatic setting, while this is not necessarily true for f(*). Let M' be a bichromatic matching. The best known upper bound for f(M') is the same as for F(M'), which is essentially O(n^3). We prove that f(M') <= n - 2 for points in convex position, and f(M') = O(n^2) for semi-collinear points. The flip operation can also be defined on spanning trees. For a spanning tree T on a convex point set we show that f(T) = O(n log n).
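A single flip step can be sketched directly from the definition above; the representation (each matching edge as a pair of 2D points) and the helper names are ours:

```python
def ccw(a, b, c):
    """Signed area test: positive if a, b, c make a left turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def cross(e, f):
    """True if segments e = (p, q) and f = (r, s) properly cross."""
    (p, q), (r, s) = e, f
    return ccw(p, q, r) * ccw(p, q, s) < 0 and ccw(r, s, p) * ccw(r, s, q) < 0

def flip_once(matching):
    """Find one crossing pair of edges and replace it by a non-crossing
    re-matching of the same four endpoints; return True if a flip happened."""
    for i in range(len(matching)):
        for j in range(i + 1, len(matching)):
            if cross(matching[i], matching[j]):
                (p, q), (r, s) = matching[i], matching[j]
                # Of the two alternative re-matchings, pick a non-crossing one.
                a, b = ((p, r), (q, s)), ((p, s), (q, r))
                matching[i], matching[j] = a if not cross(*a) else b
                return True
    return False
```

Repeatedly calling `flip_once` until it returns False performs one (not necessarily shortest) flip sequence to a non-crossing matching.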
Title: "Flip Distance to some Plane Configurations" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2019-02-13 | DOI: 10.4230/LIPIcs.SWAT.2020.31
J. Munro, Bryce Sandlund, Corwin Sinnamon
A lattice is a partially ordered set in which every pair of elements has a unique meet (greatest lower bound) and join (least upper bound). We present new data structures for lattices that are simple, efficient, and nearly optimal in terms of space complexity. Our first data structure can answer partial order queries in constant time and find the meet or join of two elements in $O(n^{3/4})$ time, where $n$ is the number of elements in the lattice. It occupies $O(n^{3/2}\log n)$ bits of space, which is only a $\Theta(\log n)$ factor from the $\Theta(n^{3/2})$-bit lower bound for storing lattices. The preprocessing time is $O(n^2)$. This structure admits a simple space-time tradeoff so that, for any $c \in [\frac{1}{2}, 1]$, the data structure supports meet and join queries in $O(n^{1-c/2})$ time, occupies $O(n^{1+c}\log n)$ bits of space, and can be constructed in $O(n^2 + n^{1+3c/2})$ time. Our second data structure uses $O(n^{3/2}\log n)$ bits of space and supports meet and join in $O(d \frac{\log n}{\log d})$ time, where $d$ is the maximum degree of any element in the transitive reduction graph of the lattice. This structure is much faster for lattices with low-degree elements. This paper also identifies an error in a long-standing solution to the problem of representing lattices. We discuss the issue with this previous work.
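For contrast with the sublinear-time queries above, a naive meet query takes quadratic time given the full order relation; a sketch, with our own encoding of the relation as a nested dict `leq[z][x]` meaning z <= x:

```python
def meet(leq, x, y):
    """Brute-force meet in a finite lattice: the greatest common lower
    bound of x and y. O(n^2) per query, versus the paper's O(n^{3/4})."""
    lower = [z for z in leq if leq[z][x] and leq[z][y]]
    # In a lattice, the set of common lower bounds has a unique maximum.
    for m in lower:
        if all(leq[z][m] for z in lower):
            return m
    raise ValueError("not a lattice: x and y have no unique meet")
```

On the four-element Boolean lattice {bot, a, b, top}, for instance, the meet of the two incomparable elements a and b is bot.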
Title: "Space-Efficient Data Structures for Lattices" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2018-09-26 | DOI: 10.4230/LIPIcs.SWAT.2018.13
P. Bose, T. Shermer
We consider a repulsion actuator located in an n-sided convex environment full of point particles. When the actuator is activated, all the particles move away from the actuator. We study the problem of gathering all the particles to a point. We give an O(n^2) time algorithm to compute all the actuator locations that gather the particles to one point with one activation, and an O(n) time algorithm to find a single such actuator location if one exists. We then provide an O(n) time algorithm to place the optimal number of actuators whose sequential activation results in the gathering of the particles when such a placement exists.
Title: "Gathering by Repulsion" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2018-06-01 | DOI: 10.4230/LIPIcs.SWAT.2018.8
Luis Barba, M. Hoffmann, Matias Korman, Alexander Pilz
We study generalizations of convex hulls to polygonal domains with holes. Convexity in Euclidean space is based on the notion of shortest paths, which are straight-line segments. In a polygonal domain, shortest paths are polygonal paths called geodesics. One possible generalization of convex hulls is based on the “rubber band” conception of the convex hull boundary as a shortest curve that encloses a given set of sites. However, it is NP-hard to compute such a curve in a general polygonal domain. Hence, we focus on a different, more direct generalization of convexity, where a set X is geodesically convex if it contains all geodesics between every pair of points x, y ∈ X. The corresponding geodesic convex hull presents a few surprises, and turns out to behave quite differently compared to the classic Euclidean setting or to the geodesic hull inside a simple polygon. We describe a class of geometric objects that suffice to represent geodesic convex hulls of sets of sites, and characterize which such domains are geodesically convex. Using such a representation, we present an algorithm to construct the geodesic convex hull of a set of O(n) sites in a polygonal domain with a total of n vertices and h holes in O(n^3 h^{3+ε}) time, for any constant ε > 0.
Title: "Convex Hulls in Polygonal Domains" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2018-05-30 | DOI: 10.4230/LIPIcs.SWAT.2018.15
Diptarka Chakraborty, Debarati Das
In this paper we address the problem of computing a sparse subgraph of any weighted directed graph such that the exact distances from a designated source vertex to all other vertices are preserved under bounded weight increment. Finding a small-sized subgraph that preserves distances between any pair of vertices is a well-studied problem. Since in the real world any network is prone to failures, it is natural to study the fault-tolerant version of the above problem. Unfortunately, it turns out that such a sparse subgraph may not always exist even under single edge failure [Demetrescu et al. '08]. However, in real applications a link (edge) in a network rarely becomes completely faulty. Instead, some links may become more congested, which can be captured by increasing the weight of the corresponding edges. Thus it makes sense to construct a sparse distance-preserving subgraph under the above weight increment model, where the total increase in weight over the whole network (graph) is bounded by some parameter k. To the best of our knowledge this problem has not been studied so far. In this paper we show that, given any weighted directed graph with n vertices and a source vertex, one can construct a subgraph of size at most e * (k-1)! * 2^k * n that preserves distances between the source and all other vertices as long as the total weight increment is bounded by k, under the restrictions that edge weights are integer-valued (possibly negative) and that the weight of an edge can only be increased by a positive integer. Next we show a lower bound of c * 2^k * n, for some constant c >= 5/4, on the size of such a subgraph. We further argue that the restrictions of integral weight and integral weight increment are essential, by showing that if either is removed we may need to store Omega(n^2) edges to preserve the distances.
Title: "Sparse Weight Tolerant Subgraph for Single Source Shortest Path" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2018-04-29 | DOI: 10.4230/LIPIcs.SWAT.2018.21
F. Fomin, P. Golovach, Torstein J. F. Strømme, D. Thilikos
A partial complement of a graph $G$ is a graph obtained from $G$ by complementing all the edges in one of its induced subgraphs. We study the following algorithmic question: for a given graph $G$ and graph class $\mathcal{G}$, is there a partial complement of $G$ which is in $\mathcal{G}$? We show that this problem can be solved in polynomial time for various choices of the graph class $\mathcal{G}$, such as bipartite, degenerate, or cographs. We complement these results by proving that the problem is NP-complete when $\mathcal{G}$ is the class of $r$-regular graphs.
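The brute-force version of this question is easy to state in code, at the cost of exponential time over all vertex subsets; the paper's point is that for classes such as bipartite graphs the answer can in fact be found in polynomial time. Names and the edge encoding below are ours:

```python
from itertools import combinations

def partial_complement(edges, S):
    """Complement the edges inside vertex subset S; edges is a set of
    frozensets {u, v}."""
    out = set(edges)
    for u, v in combinations(S, 2):
        out.symmetric_difference_update({frozenset((u, v))})  # toggle edge
    return out

def is_bipartite(edges, n):
    """2-color the graph on vertices 0..n-1 by DFS, if possible."""
    adj = {v: [] for v in range(n)}
    for e in edges:
        u, v = tuple(e)
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for s in range(n):
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def has_bipartite_partial_complement(edges, n):
    """Exponential brute force over all induced subgraphs (illustration
    only; not the paper's polynomial-time algorithm)."""
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if is_bipartite(partial_complement(edges, S), n):
                return True
    return False
```

For example, a triangle is not bipartite, but complementing the induced subgraph on two adjacent vertices deletes one edge and leaves a bipartite path.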
Title: "Partial complementation of graphs" (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2018-04-01 | DOI: 10.4230/LIPIcs.SWAT.2018.28
Lukasz Kowalik, Arkadiusz Socala
The fastest algorithms for edge coloring run in time $2^m n^{O(1)}$, where $m$ and $n$ are the number of edges and vertices of the input graph, respectively. For dense graphs, this bound becomes $2^{\Theta(n^2)}$. This is a somewhat unique situation, since most of the studied graph problems admit algorithms running in time $2^{O(n\log n)}$. It is a notorious open problem to either show an algorithm for edge coloring running in time $2^{o(n^2)}$ or to refute this possibility, assuming the Exponential Time Hypothesis (ETH) or another well-established assumption. We notice that the same question can be asked for list edge coloring, a well-studied generalization of edge coloring where every edge comes with a set (often called a list) of allowed colors. Our main result states that list edge coloring for simple graphs does not admit an algorithm running in time $2^{o(n^2)}$, unless ETH fails. Interestingly, the algorithm for edge coloring running in time $2^m n^{O(1)}$ generalizes to the list version without any asymptotic slow-down. Thus, our lower bound is essentially tight. This also means that in order to design an algorithm running in time $2^{o(n^2)}$ for edge coloring, one has to exploit its special features compared to the list version.
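For concreteness, the list edge coloring problem itself (not the $2^m n^{O(1)}$ algorithm) can be stated as a small backtracking search; function and parameter names are ours:

```python
def list_edge_color(edges, lists):
    """Assign each edge edges[i] a color from lists[i] so that edges
    sharing an endpoint get distinct colors; return one valid assignment,
    or None. Plain exponential backtracking, for illustration only."""
    def compatible(i, c, assign):
        u, v = edges[i]
        return all(c != assign[j]
                   for j, (x, y) in enumerate(edges[:i])
                   if {x, y} & {u, v})  # incident edges must differ

    def go(i, assign):
        if i == len(edges):
            return list(assign)
        for c in lists[i]:
            if compatible(i, c, assign):
                assign.append(c)
                res = go(i + 1, assign)
                if res is not None:
                    return res
                assign.pop()
        return None

    return go(0, [])
```

A path of two edges with lists [1] and [1, 2] must use color 2 on the second edge, while a triangle whose three lists are all [1] admits no coloring.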
Title: "Tight Lower Bounds for List Edge Coloring" (Scandinavian Workshop on Algorithm Theory)