Pub Date: 2021-07-09, DOI: 10.4230/LIPIcs.SWAT.2022.29
Pascal Kunz, T. Fluschnik, R. Niedermeier, Malte Renken
Proximity graphs have been studied for several decades, motivated by applications in computational geometry, geography, data mining, and many other fields. However, the computational complexity of classic graph problems on proximity graphs mostly remained open. We study 3-Colorability, Dominating Set, Feedback Vertex Set, Hamiltonian Cycle, and Independent Set on the proximity graph classes of relative neighborhood graphs, Gabriel graphs, and relatively closest graphs. We prove that all of these problems remain NP-hard on these graphs, except for 3-Colorability and Hamiltonian Cycle on relatively closest graphs, where the former is trivial and the latter is left open. Moreover, for every NP-hard case we additionally show that no $2^{o(n^{1/4})}$-time algorithm exists unless the ETH fails, where $n$ denotes the number of vertices.
Title: Most Classic Problems Remain NP-hard on Relative Neighborhood Graphs and their Relatives (Scandinavian Workshop on Algorithm Theory)
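The two main graph classes studied above have concise textbook definitions that are easy to make concrete. The sketch below uses the standard definitions of relative neighborhood graphs and Gabriel graphs (not any construction from this paper), working with squared distances to stay exact on integer inputs; relatively closest graphs, a stricter variant, are omitted.

```python
from itertools import combinations

def dist2(u, v):
    """Squared Euclidean distance (avoids sqrt rounding)."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def rng_edges(points):
    """Relative neighborhood graph: p-q is an edge iff no third point r
    is strictly closer to both p and q than p and q are to each other."""
    return [(p, q) for p, q in combinations(points, 2)
            if not any(max(dist2(p, r), dist2(q, r)) < dist2(p, q)
                       for r in points if r != p and r != q)]

def gabriel_edges(points):
    """Gabriel graph: p-q is an edge iff no third point lies strictly
    inside the disc having segment pq as its diameter."""
    return [(p, q) for p, q in combinations(points, 2)
            if not any(dist2(p, r) + dist2(q, r) < dist2(p, q)
                       for r in points if r != p and r != q)]
```

Every relative neighborhood graph edge is also a Gabriel edge, which is why hardness on the sparser class is the stronger kind of result.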
Pub Date: 2021-05-17, DOI: 10.4230/LIPIcs.SWAT.2022.4
I. Kostitsyna, I. Parada, Willem Sonke, B. Speckmann, J. Wulms
A well-established theoretical model for modular robots in two dimensions is edge-connected configurations of square modules, which can reconfigure through so-called sliding moves. Dumitrescu and Pach [Graphs and Combinatorics, 2006] proved that it is always possible to reconfigure one edge-connected configuration of $n$ squares into any other using at most $O(n^2)$ sliding moves, while keeping the configuration connected at all times. For certain pairs of configurations, reconfiguration may require $\Omega(n^2)$ sliding moves. However, significantly fewer moves may be sufficient. We prove that it is NP-hard to minimize the number of sliding moves for a given pair of edge-connected configurations. On the positive side we present Gather&Compact, an input-sensitive in-place algorithm that requires only $O(\bar{P} n)$ sliding moves to transform one configuration into the other, where $\bar{P}$ is the maximum perimeter of the two bounding boxes. The squares move within the bounding boxes only, with the exception of at most one square at a time which may move through the positions adjacent to the bounding boxes. The $O(\bar{P} n)$ bound never exceeds $O(n^2)$, and is optimal (up to constant factors) among all bounds parameterized by just $n$ and $\bar{P}$. Our algorithm is built on the basic principle that well-connected components of modular robots can be transformed efficiently. Hence we iteratively increase the connectivity within a configuration, to finally arrive at a single solid $xy$-monotone component. We implemented Gather&Compact and compared it experimentally to the in-place modification by Moreno and Sacristán [EuroCG 2020] of the Dumitrescu and Pach algorithm (MSDP). Our experiments show that Gather&Compact consistently outperforms MSDP by a significant margin, on all types of square configurations.
Title: Compacting Squares (Scandinavian Workshop on Algorithm Theory)
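The edge-connectivity invariant at the heart of this model is easy to state in code. A minimal sketch, assuming squares are grid cells under 4-neighborhood adjacency (the model's actual sliding-move legality rules are more restrictive and are not reproduced here):

```python
from collections import deque

def is_edge_connected(cells):
    """True iff the set of grid cells is edge-connected, i.e. their
    adjacency graph under 4-neighborhood is connected."""
    cells = set(cells)
    if not cells:
        return True
    start = next(iter(cells))
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(cells)

def movable_squares(cells):
    """Squares whose removal keeps the rest edge-connected; only such
    squares can possibly move without breaking the connectivity invariant."""
    cells = set(cells)
    return [c for c in cells if is_edge_connected(cells - {c})]
```

For an L-shaped configuration only the two endpoint squares pass this necessary condition; interior squares of the "L" are articulation cells.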
Pub Date: 2020-08-27, DOI: 10.4230/LIPIcs.SWAT.2022.10
A. Antoniadis, Sándor Kisfaludi-Bak, Bundit Laekhanukit, Daniel Vaz
We study the variant of the Euclidean Traveling Salesman problem where instead of a set of points, we are given a set of lines as input, and the goal is to find the shortest tour that visits each line. The best known upper and lower bounds for the problem in $\mathbb{R}^d$, with $d \ge 3$, are $\mathrm{NP}$-hardness and an $O(\log^3 n)$-approximation algorithm which is based on a reduction to the group Steiner tree problem. We show that TSP with lines in $\mathbb{R}^d$ is APX-hard for any $d \ge 3$. More generally, this implies that TSP with $k$-dimensional flats does not admit a PTAS for any $1 \le k \le d-2$ unless $\mathrm{P}=\mathrm{NP}$, which gives a complete classification of the approximability of these problems, as there are known PTASes for $k=0$ (i.e., points) and $k=d-1$ (hyperplanes). We are able to give a stronger inapproximability factor for $d=O(\log n)$ by showing that TSP with lines does not admit a $(2-\epsilon)$-approximation in $d$ dimensions under the unique games conjecture. On the positive side, we leverage recent results on restricted variants of the group Steiner tree problem in order to give an $O(\log^2 n)$-approximation algorithm for the problem, albeit with a running time of $n^{O(\log\log n)}$.
Title: On the Approximability of the Traveling Salesman Problem with Line Neighborhoods (Scandinavian Workshop on Algorithm Theory)
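For intuition about the objective, once a visiting order of the lines is fixed, a feasible tour can be produced by touching each line at the orthogonal projection of the previous stop. The sketch below is only a simple heuristic upper bound over all orders (not the paper's group-Steiner-based algorithm, which achieves guarantees without brute force):

```python
from itertools import permutations
from math import dist

def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def add(u, v): return tuple(a + b for a, b in zip(u, v))
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def scale(u, s): return tuple(a * s for a in u)

def project(point, anchor, direction):
    """Orthogonal projection of `point` onto the line anchor + t*direction."""
    t = dot(sub(point, anchor), direction) / dot(direction, direction)
    return add(anchor, scale(direction, t))

def heuristic_tour_length(lines, start):
    """Upper bound on the optimal tour from `start` that visits every
    line: for each visiting order, touch each line at the projection of
    the previous stop, return to `start`, and keep the best order."""
    best = float("inf")
    for order in permutations(lines):
        pos, length = start, 0.0
        for anchor, direction in order:
            nxt = project(pos, anchor, direction)
            length += dist(pos, nxt)
            pos = nxt
        best = min(best, length + dist(pos, start))
    return best
```

For two vertical lines in $\mathbb{R}^3$ through $(\pm 1,0,0)$ and a tour starting at the origin, the heuristic returns the optimal length 4.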
Pub Date: 2020-06-18, DOI: 10.4230/LIPIcs.SWAT.2018.20
O. Filtser, M. J. Katz
The (discrete) Fréchet distance (DFD) is a popular similarity measure for curves. Often the input curves are not aligned, so one of them must undergo some transformation for the distance computation to be meaningful. Ben Avraham et al. [Rinat Ben Avraham et al., 2015] presented an O(m^3n^2(1+log(n/m))log(m+n))-time algorithm for DFD between two sequences of points of sizes m and n in the plane under translation. In this paper we consider two variants of DFD, both under translation. For DFD with shortcuts in the plane, we present an O(m^2n^2 log^2(m+n))-time algorithm, by presenting a dynamic data structure for reachability queries in the underlying directed graph. In 1D, we show how to avoid the use of parametric search and remove a logarithmic factor from the running time of (the 1D versions of) these algorithms and of an algorithm for the weak discrete Fréchet distance; the resulting running times are thus O(m^2n(1+log(n/m))), for the discrete Fréchet distance, and O(mn log(m+n)), for its two variants. Our 1D algorithms follow a general scheme introduced by Martello et al. [Martello et al., 1984] for the Balanced Optimization Problem (BOP), which is especially useful when an efficient dynamic version of the feasibility decider is available. We present an alternative scheme for BOP, whose advantage is that it yields efficient algorithms quite easily, without having to devise a specially tailored dynamic version of the feasibility decider. We demonstrate our scheme on the most uniform path problem (significantly improving the known bound), and observe that the weak DFD under translation in 1D is a special case of it.
Title: Algorithms for the Discrete Fréchet Distance Under Translation (Scandinavian Workshop on Algorithm Theory)
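As a baseline for the quantity being optimized, the plain discrete Fréchet distance between fixed (untranslated) point sequences is a small O(mn) dynamic program; a textbook sketch, not the paper's translation algorithms:

```python
from math import dist

def discrete_frechet(P, Q):
    """O(mn) dynamic program for the discrete Frechet distance between
    point sequences P and Q. dp[i][j] is the minimal 'leash length' needed
    to jointly traverse the prefixes P[:i+1] and Q[:j+1]."""
    m, n = len(P), len(Q)
    dp = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                dp[i][j] = d
            elif i == 0:
                dp[i][j] = max(dp[i][j - 1], d)
            elif j == 0:
                dp[i][j] = max(dp[i - 1][j], d)
            else:
                dp[i][j] = max(min(dp[i - 1][j], dp[i][j - 1],
                                   dp[i - 1][j - 1]), d)
    return dp[-1][-1]
```

Under translation, this inner computation becomes the feasibility decider that the paper's algorithms invoke over a search space of candidate translations.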
Pub Date: 2020-04-15, DOI: 10.4230/LIPIcs.SWAT.2020.28
Haim Kaplan, J. Tenenbaum
Locality Sensitive Hashing (LSH) is an effective method to index a set of points such that we can efficiently find the nearest neighbors of a query point. We extend this method to our novel Set-query LSH (SLSH), such that it can find the nearest neighbors of a set of points, given as a query. Let $s(x,y)$ be the similarity between two points $x$ and $y$. We define a similarity between a set $Q$ and a point $x$ by aggregating the similarities $s(p,x)$ for all $p \in Q$. For example, we can take $s(p,x)$ to be the angular similarity between $p$ and $x$ (i.e., $1-\angle(x,p)/\pi$), and aggregate by arithmetic or geometric averaging, or taking the lowest similarity. We develop locality sensitive hash families and data structures for a large set of such arithmetic and geometric averaging similarities, and analyze their collision probabilities. We also establish an analogous framework and hash families for distance functions. Specifically, we give a structure for the Euclidean distance aggregated by either averaging or taking the maximum. We leverage SLSH to solve a geometric extension of the approximate near neighbors problem. In this version, we consider a metric for which the unit ball is an ellipsoid and its orientation is specified with the query. An important application that motivates our work is group recommendation systems. Such a system embeds movies and users in the same feature space, and the task of recommending a movie for a group to watch together translates to a set-query $Q$ using an appropriate similarity.
Title: Locality Sensitive Hashing for Set-Queries, Motivated by Group Recommendations (Scandinavian Workshop on Algorithm Theory)
Pub Date: 2020-04-11, DOI: 10.4230/LIPIcs.SWAT.2020.8
A. Backurs, Sariel Har-Peled
We study a clustering problem where the goal is to maximize the coverage of the input points by $k$ chosen centers. Specifically, given a set of $n$ points $P \subseteq \mathbb{R}^d$, the goal is to pick $k$ centers $C \subseteq \mathbb{R}^d$ that maximize the service $\sum_{p \in P} \varphi\bigl(\mathsf{d}(p,C)\bigr)$ to the points $P$, where $\mathsf{d}(p,C)$ is the distance of $p$ to its nearest center in $C$, and $\varphi$ is a non-increasing service function $\varphi : \mathbb{R}^+ \to \mathbb{R}^+$. This includes the problem of placing $k$ base stations so as to maximize the total bandwidth to the clients -- indeed, the closer a client is to its nearest base station, the more data it can send/receive, and the target is to place $k$ base stations so that the total bandwidth is maximized. We provide an $n^{\varepsilon^{-O(d)}}$-time algorithm for this problem that achieves a $(1-\varepsilon)$-approximation. Notably, the runtime does not depend on the parameter $k$ and it works for an arbitrary non-increasing service function $\varphi : \mathbb{R}^+ \to \mathbb{R}^+$.
Title: Submodular Clustering in Low Dimensions (Scandinavian Workshop on Algorithm Theory)
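The service objective is easy to state in code. A tiny exact baseline for small inputs, under the simplifying assumption that candidate centers are restricted to the input points (the paper allows arbitrary centers in $\mathbb{R}^d$ and is far faster than brute force):

```python
from itertools import combinations
from math import dist

def service(points, centers, phi):
    """Total service: sum over p of phi(distance from p to its nearest center),
    for a non-increasing service function phi."""
    return sum(phi(min(dist(p, c) for c in centers)) for p in points)

def best_centers_bruteforce(points, k, phi):
    """Exact baseline: try every k-subset of the input points as centers
    and keep the subset with maximum service."""
    return max(combinations(points, k), key=lambda C: service(points, C, phi))
```

With a bandwidth-style function such as `phi = lambda d: 1/(1+d)`, two well-separated clusters of two points each get one center apiece.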
Pub Date: 2020-02-05, DOI: 10.4230/LIPIcs.SWAT.2020.24
D. Eppstein, Daniel Frishberg, Elham Havvaei
We formalize the simplification of activity-on-edge graphs used for visualizing project schedules, where the vertices of the graphs represent project milestones, and the edges represent either tasks of the project or timing constraints between milestones. In this framework, a timeline of the project can be constructed as a leveled drawing of the graph, where the levels of the vertices represent the time at which each milestone is scheduled to happen. We focus on the following problem: given an activity-on-edge graph representing a project, find an equivalent activity-on-edge graph (one with the same critical paths) that has the minimum possible number of milestone vertices among all equivalent activity-on-edge graphs. We provide a polynomial-time algorithm for solving this graph minimization problem.
Title: Simplifying Activity-on-Edge Graphs (Scandinavian Workshop on Algorithm Theory)
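The critical paths that the equivalence notion preserves come from the classic critical-path method on an activity-on-edge DAG. A sketch of standard CPM (earliest/latest milestone times and zero-slack edges; this is background, not the paper's minimization algorithm):

```python
from collections import defaultdict
from graphlib import TopologicalSorter  # Python 3.9+

def critical_edges(edges):
    """edges: (u, v, duration). Returns (earliest, critical): earliest[v]
    is the earliest time milestone v can happen, and critical lists the
    zero-slack edges, i.e. those lying on some longest (critical) path."""
    preds, inc, out = defaultdict(set), defaultdict(list), defaultdict(list)
    for u, v, d in edges:
        preds[v].add(u)
        inc[v].append((u, d))
        out[u].append((v, d))
    order = list(TopologicalSorter(preds).static_order())
    earliest = {v: 0 for v in order}
    for v in order:                      # forward pass
        for u, d in inc[v]:
            earliest[v] = max(earliest[v], earliest[u] + d)
    end = max(earliest.values())
    latest = {v: end for v in order}
    for v in reversed(order):            # backward pass
        for w, d in out[v]:
            latest[v] = min(latest[v], latest[w] - d)
    critical = [(u, v) for u, v, d in edges if latest[v] - earliest[u] == d]
    return earliest, critical
```

A leveled timeline drawing as described in the abstract places each milestone $v$ at level `earliest[v]`.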
Pub Date: 2020-02-01, DOI: 10.4230/LIPIcs.SWAT.2020.16
P. Bose, S. Mehrabi, Debajyoti Mondal
A \emph{2-interval} is the union of two disjoint intervals on the real line. Two 2-intervals $D_1$ and $D_2$ are \emph{disjoint} if their intersection is empty (i.e., no interval of $D_1$ intersects any interval of $D_2$). There can be three different relations between two disjoint 2-intervals; namely, preceding ($<$), nested ($\sqsubset$) and crossing ($\between$). Two 2-intervals $D_1$ and $D_2$ are called \emph{$R$-comparable} for some $R \in \{<,\sqsubset,\between\}$, if either $D_1 R D_2$ or $D_2 R D_1$. A set $\mathcal{D}$ of disjoint 2-intervals is $\mathcal{R}$-comparable, for some $\mathcal{R} \subseteq \{<,\sqsubset,\between\}$ and $\mathcal{R} \neq \emptyset$, if every pair of 2-intervals in $\mathcal{D}$ is $R$-comparable for some $R \in \mathcal{R}$. Given a set of 2-intervals and some $\mathcal{R} \subseteq \{<,\sqsubset,\between\}$, the objective of the \emph{2-interval pattern problem} is to find a largest subset of 2-intervals that is $\mathcal{R}$-comparable. The 2-interval pattern problem is known to be $W[1]$-hard when $|\mathcal{R}|=3$ and NP-hard when $|\mathcal{R}|=2$ (except for $\mathcal{R}=\{<,\sqsubset\}$, which is solvable in quadratic time). In this paper, we fully settle the parameterized complexity of the problem by showing it to be $W[1]$-hard for both $\mathcal{R}=\{\sqsubset,\between\}$ and $\mathcal{R}=\{<,\between\}$ (when parameterized by the size of an optimal solution); this answers an open question posed by Vialette [Encyclopedia of Algorithms, 2008].
Title: Parameterized Complexity of Two-Interval Pattern Problem (Scandinavian Workshop on Algorithm Theory)
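The three relations can be stated directly. A small sketch using the standard definitions, representing each 2-interval as a pair of `(lo, hi)` intervals with the first interval ending before the second starts:

```python
def before(I, J):
    """Interval I ends strictly before interval J starts."""
    return I[1] < J[0]

def relation(D1, D2):
    """Relation of two disjoint 2-intervals: '<' if D1 wholly precedes D2,
    'nested' if D1 sits between the two intervals of D2, 'crossing' if the
    four intervals interleave; None otherwise (e.g. when the roles of D1
    and D2 should be swapped)."""
    if before(D1[1], D2[0]):
        return '<'
    if before(D2[0], D1[0]) and before(D1[1], D2[1]):
        return 'nested'
    if before(D1[0], D2[0]) and before(D2[0], D1[1]) and before(D1[1], D2[1]):
        return 'crossing'
    return None
```

For disjoint 2-intervals exactly one of the three relations holds in one of the two argument orders, which is why the set $\{<,\sqsubset,\between\}$ is exhaustive.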
Pub Date: 2019-12-30, DOI: 10.4230/LIPIcs.SWAT.2020.30
L. Kozma
Partially ordered sets (posets) are fundamental combinatorial objects with important applications in computer science. Perhaps the most natural algorithmic task, given a size-$n$ poset, is to compute its number of linear extensions. In 1991 Brightwell and Winkler showed this problem to be $\#P$-hard. In spite of extensive research, the fastest known algorithm is still the straightforward $O(n 2^n)$-time dynamic programming (an adaptation of the Bellman-Held-Karp algorithm for the TSP). Very recently, Dittmer and Pak showed that the problem remains $\#P$-hard for two-dimensional posets, and no algorithm was known to break the $2^n$-barrier even in this special case. The question of whether the two-dimensional problem is easier than the general case was raised decades ago by Möhring, Felsner and Wernisch, and others. In this paper we show that the number of linear extensions of a two-dimensional poset can be computed in time $O(1.8172^n)$. The related jump number problem asks for a linear extension of a poset, minimizing the number of neighboring incomparable pairs. The problem has applications in scheduling, and has been widely studied. In 1981 Pulleyblank showed it to be NP-complete. We show that the jump number problem can be solved (in arbitrary posets) in time $O(1.824^n)$. This improves (slightly) the previous best bound of Kratsch and Kratsch.
{"title":"Exact exponential algorithms for two poset problems","authors":"L. Kozma","doi":"10.4230/LIPIcs.SWAT.2020.30","DOIUrl":"https://doi.org/10.4230/LIPIcs.SWAT.2020.30","url":null,"abstract":"Partially ordered sets (posets) are fundamental combinatorial objects with important applications in computer science. Perhaps the most natural algorithmic task, given a size-$n$ poset, is to compute its number of linear extensions. In 1991 Brightwell and Winkler showed this problem to be $\#P$-hard. In spite of extensive research, the fastest known algorithm is still the straightforward $O(n 2^n)$-time dynamic programming (an adaptation of the Bellman-Held-Karp algorithm for the TSP). Very recently, Dittmer and Pak showed that the problem remains $\#P$-hard for two-dimensional posets, and no algorithm was known to break the $2^n$-barrier even in this special case. The question of whether the two-dimensional problem is easier than the general case was raised decades ago by Möhring, Felsner and Wernisch, and others. In this paper we show that the number of linear extensions of a two-dimensional poset can be computed in time $O(1.8172^n)$. The related jump number problem asks for a linear extension of a poset, minimizing the number of neighboring incomparable pairs. The problem has applications in scheduling, and has been widely studied. In 1981 Pulleyblank showed it to be NP-complete. We show that the jump number problem can be solved (in arbitrary posets) in time $O(1.824^n)$. This improves (slightly) the previous best bound of Kratsch and Kratsch.","PeriodicalId":447445,"journal":{"name":"Scandinavian Workshop on Algorithm Theory","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125855678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-08-30DOI: 10.4230/LIPIcs.SWAT.2020.12
Antonio Molina Lovett, Bryce Sandlund
We consider the minimum cut problem in undirected, weighted graphs. We give a simple algorithm to find a minimum cut that $2$-respects (cuts two edges of) a spanning tree $T$ of a graph $G$. This procedure can be used in place of the complicated subroutine given in Karger's near-linear time minimum cut algorithm (J. ACM, 2000). We give a self-contained version of Karger's algorithm with the new procedure, which is easy to state and relatively simple to implement. It produces a minimum cut on an $m$-edge, $n$-vertex graph in $O(m \log^3 n)$ time with high probability, matching the complexity of Karger's approach.
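To make the notion of a cut that $1$- or $2$-respects a spanning tree concrete, here is a minimal Python sketch that brute-forces, in quadratic time, the lightest cut crossing exactly one or two edges of a given tree; it is only an illustrative stand-in for the fast subroutine the paper develops, and the function name and input format are assumptions. Removing one tree edge $(p, v)$ yields the cut $(\text{subtree}(v), \text{rest})$; for two tree edges the cut side is either a difference or a union of subtrees, depending on whether one subtree contains the other.

```python
from itertools import combinations

def min_cut_respecting_tree(n, edges, tree_edges, root=0):
    """Naive search over all cuts of the weighted graph (n vertices,
    edges = [(u, v, w), ...]) that cut exactly one or two edges of the
    given spanning tree. Runs in O(n^2 * m) time."""
    # Orient the tree away from the root with a BFS.
    adj = {v: [] for v in range(n)}
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, order = {root: None}, [root]
    for u in order:
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                order.append(w)
    # subtree[v] = set of vertices in the subtree rooted at v.
    subtree = {v: {v} for v in range(n)}
    for v in reversed(order[1:]):  # children before parents
        subtree[parent[v]] |= subtree[v]

    def weight(S):  # total weight of graph edges crossing (S, V - S)
        return sum(w for u, v, w in edges if (u in S) != (v in S))

    best = float("inf")
    nonroot = order[1:]  # each nonroot v stands for tree edge (parent[v], v)
    for v in nonroot:  # cuts crossing one tree edge
        best = min(best, weight(subtree[v]))
    for u, v in combinations(nonroot, 2):  # cuts crossing two tree edges
        if subtree[v] <= subtree[u]:
            S = subtree[u] - subtree[v]
        elif subtree[u] <= subtree[v]:
            S = subtree[v] - subtree[u]
        else:
            S = subtree[u] | subtree[v]
        best = min(best, weight(S))
    return best
```

Note that the overall algorithm still needs Karger's tree-packing step: the minimum cut of $G$ is only guaranteed (with high probability) to $1$- or $2$-respect one of $O(\log n)$ sampled spanning trees, so this search is run once per sampled tree.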
{"title":"A Simple Algorithm for Minimum Cuts in Near-Linear Time","authors":"Antonio Molina Lovett, Bryce Sandlund","doi":"10.4230/LIPIcs.SWAT.2020.12","DOIUrl":"https://doi.org/10.4230/LIPIcs.SWAT.2020.12","url":null,"abstract":"We consider the minimum cut problem in undirected, weighted graphs. We give a simple algorithm to find a minimum cut that $2$-respects (cuts two edges of) a spanning tree $T$ of a graph $G$. This procedure can be used in place of the complicated subroutine given in Karger's near-linear time minimum cut algorithm (J. ACM, 2000). We give a self-contained version of Karger's algorithm with the new procedure, which is easy to state and relatively simple to implement. It produces a minimum cut on an $m$-edge, $n$-vertex graph in $O(m \log^3 n)$ time with high probability, matching the complexity of Karger's approach.","PeriodicalId":447445,"journal":{"name":"Scandinavian Workshop on Algorithm Theory","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132637513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}