Graph embedding approaches attempt to project graphs into geometric entities, i.e., manifolds. The idea is that the geometric properties of the projected manifolds are helpful for inferring graph properties. However, if the embedding manifold is chosen incorrectly, it can lead to incorrect geometric inference. In this paper, we argue that classical embedding techniques cannot lead to correct geometric interpretation, as they ignore the curvature at each point of the manifold. We advocate that, for correct geometric interpretation, graphs should be embedded into regular constant-curvature manifolds. To this end, we present an embedding approach, the discrete Ricci flow graph embedding (dRfge), based on the discrete Ricci flow, which adapts the distances between nodes in a graph so that the graph can be embedded onto a constant-curvature manifold that is homogeneous and isotropic, i.e., all directions are equivalent and distances comparable, resulting in correct geometric interpretations. A major contribution of this paper is that, for the first time, we prove the convergence of the discrete Ricci flow to a constant curvature and a stable distance metric over the edges. A drawback of the discrete Ricci flow is its high computational complexity, which has prevented its use in large-scale graph analysis. Another contribution of this paper is a new algorithmic solution that makes it feasible to compute the Ricci flow for graphs of up to 50k nodes and beyond. The intuitions behind the discrete Ricci flow make it possible to obtain new insights into the structure of large-scale graphs. We demonstrate this through a case study analysing the internet connectivity structure between countries at the BGP level.
{"title":"Ironing the Graphs: Toward a Correct Geometric Analysis of Large-Scale Graphs","authors":"Saloua Naama, Kavé Salamatian, Francesco Bronzino","doi":"arxiv-2407.21609","DOIUrl":"https://doi.org/arxiv-2407.21609","url":null,"abstract":"Graph embedding approaches attempt to project graphs into geometric entities,\u0000i.e, manifolds. The idea is that the geometric properties of the projected\u0000manifolds are helpful in the inference of graph properties. However, if the\u0000choice of the embedding manifold is incorrectly performed, it can lead to\u0000incorrect geometric inference. In this paper, we argue that the classical\u0000embedding techniques cannot lead to correct geometric interpretation as they\u0000miss the curvature at each point, of manifold. We advocate that for doing\u0000correct geometric interpretation the embedding of graph should be done over\u0000regular constant curvature manifolds. To this end, we present an embedding\u0000approach, the discrete Ricci flow graph embedding (dRfge) based on the discrete\u0000Ricci flow that adapts the distance between nodes in a graph so that the graph\u0000can be embedded onto a constant curvature manifold that is homogeneous and\u0000isotropic, i.e., all directions are equivalent and distances comparable,\u0000resulting in correct geometric interpretations. A major contribution of this\u0000paper is that for the first time, we prove the convergence of discrete Ricci\u0000flow to a constant curvature and stable distance metrics over the edges. A\u0000drawback of using the discrete Ricci flow is the high computational complexity\u0000that prevented its usage in large-scale graph analysis. Another contribution of\u0000this paper is a new algorithmic solution that makes it feasible to calculate\u0000the Ricci flow for graphs of up to 50k nodes, and beyond. The intuitions behind\u0000the discrete Ricci flow make it possible to obtain new insights into the\u0000structure of large-scale graphs. We demonstrate this through a case study on\u0000analyzing the internet connectivity structure between countries at the BGP\u0000level.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"56 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141870280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kevin Buchin, Maike Buchin, Joachim Gudmundsson, Aleksandr Popov, Sampson Wong
Map matching is a common task when analysing GPS tracks, such as vehicle trajectories. The goal is to match a recorded noisy polygonal curve to a path on the map, usually represented as a geometric graph. The Fréchet distance is a commonly used metric for curves, making it a natural fit. The map-matching problem is well studied, yet until recently no one tackled the data structure question: preprocess a given graph so that one can query the minimum Fréchet distance between all graph paths and a polygonal curve. Recently, Gudmundsson, Seybold, and Wong [SODA 2023, arXiv:2211.02951] studied this problem for arbitrary query polygonal curves and $c$-packed graphs. In this paper, we instead require the graphs to be $\lambda$-low-density $t$-spanners, which is significantly more representative of real-world networks. We also show how to efficiently report a path that minimises the distance, rather than only returning the minimal distance, which was stated as an open problem in their paper.
{"title":"Map-Matching Queries under Fréchet Distance on Low-Density Spanners","authors":"Kevin Buchin, Maike Buchin, Joachim Gudmundsson, Aleksandr Popov, Sampson Wong","doi":"arxiv-2407.19304","DOIUrl":"https://doi.org/arxiv-2407.19304","url":null,"abstract":"Map matching is a common task when analysing GPS tracks, such as vehicle\u0000trajectories. The goal is to match a recorded noisy polygonal curve to a path\u0000on the map, usually represented as a geometric graph. The Fr'echet distance is\u0000a commonly used metric for curves, making it a natural fit. The map-matching\u0000problem is well-studied, yet until recently no-one tackled the data structure\u0000question: preprocess a given graph so that one can query the minimum Fr'echet\u0000distance between all graph paths and a polygonal curve. Recently, Gudmundsson,\u0000Seybold, and Wong [SODA 2023, arXiv:2211.02951] studied this problem for\u0000arbitrary query polygonal curves and $c$-packed graphs. In this paper, we\u0000instead require the graphs to be $lambda$-low-density $t$-spanners, which is\u0000significantly more representative of real-world networks. We also show how to\u0000report a path that minimises the distance efficiently rather than only\u0000returning the minimal distance, which was stated as an open problem in their\u0000paper.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141870281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eliot W. Robson, Jack Spalding-Jamieson, Da Wei Zheng
We investigate the problem of carving an $n$-face triangulated three-dimensional polytope using a tool that makes cuts modelled by either a half-plane or sweeps of an infinite ray. In the case of half-plane cuts, we present a deterministic algorithm running in $O(n^2)$ time and a randomized algorithm running in $O(n^{3/2+\varepsilon})$ expected time for any $\varepsilon>0$. In the case of cuts defined by sweeps of infinite rays, we present an algorithm running in $O(n^5)$ time.
{"title":"Carving Polytopes with Saws in 3D","authors":"Eliot W. Robson, Jack Spalding-Jamieson, Da Wei Zheng","doi":"arxiv-2407.15981","DOIUrl":"https://doi.org/arxiv-2407.15981","url":null,"abstract":"We investigate the problem of carving an $n$-face triangulated\u0000three-dimensional polytope using a tool to make cuts modelled by either a\u0000half-plane or sweeps from an infinite ray. In the case of half-planes cuts, we\u0000present a deterministic algorithm running in $O(n^2)$ time and a randomized\u0000algorithm running in $O(n^{3/2+varepsilon})$ expected time for any\u0000$varepsilon>0$. In the case of cuts defined by sweeps of infinite rays, we\u0000present an algorithm running in $O(n^5)$ time.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Steven van den Broek, Wouter Meulemans, Bettina Speckmann
Points of interest on a map, such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper we introduce SimpleSets, which uses simple shapes to enclose categorical point patterns, thereby providing a clean overview of the data distribution. SimpleSets is designed to visualize sets of points with a single categorical attribute; as a result, the point patterns enclosed by SimpleSets form a partition of the data. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes, resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner. We compare SimpleSets to state-of-the-art set visualizations using standard datasets from the literature.
{"title":"SimpleSets: Capturing Categorical Point Patterns with Simple Shapes","authors":"Steven van den Broek, Wouter Meulemans, Bettina Speckmann","doi":"arxiv-2407.14433","DOIUrl":"https://doi.org/arxiv-2407.14433","url":null,"abstract":"Points of interest on a map such as restaurants, hotels, or subway stations,\u0000give rise to categorical point data: data that have a fixed location and one or\u0000more categorical attributes. Consequently, recent years have seen various set\u0000visualization approaches that visually connect points of the same category to\u0000support users in understanding the spatial distribution of categories. Existing\u0000methods use complex and often highly irregular shapes to connect points of the\u0000same category, leading to high cognitive load for the user. In this paper we\u0000introduce SimpleSets, which uses simple shapes to enclose categorical point\u0000patterns, thereby providing a clean overview of the data distribution.\u0000SimpleSets is designed to visualize sets of points with a single categorical\u0000attribute; as a result, the point patterns enclosed by SimpleSets form a\u0000partition of the data. We give formal definitions of point patterns that\u0000correspond to simple shapes and describe an algorithm that partitions\u0000categorical points into few such patterns. Our second contribution is a\u0000rendering algorithm that transforms a given partition into a clean set of\u0000shapes resulting in an aesthetically pleasing set visualization. Our algorithm\u0000pays particular attention to resolving intersections between nearby shapes in a\u0000consistent manner. We compare SimpleSets to the state-of-the-art set\u0000visualizations using standard datasets from the literature.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"56 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141743995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MIT Hardness Group, Nithid Anchaleenukoon, Alex Dang, Erik D. Demaine, Kaylee Ji, Pitchayut Saengrungkongka
Given a chain of $HW$ cubes where each cube is marked "turn $90^\circ$" or "go straight", when can it fold into a $1 \times H \times W$ rectangular box? We prove several variants of this (still) open problem NP-hard: (1) allowing some cubes to be wildcards (can turn or go straight); (2) allowing a larger box with empty spaces (simplifying a proof from CCCG 2022); (3) growing the box (and the number of cubes) to $2 \times H \times W$ (improving a prior 3D result from height $8$ to $2$); (4) with hexagonal prisms rather than cubes, each specified as going straight, turning $60^\circ$, or turning $120^\circ$; and (5) allowing the cubes to be encoded implicitly to compress exponentially large repetitions.
{"title":"Complexity of 2D Snake Cube Puzzles","authors":"MIT Hardness Group, Nithid Anchaleenukoon, Alex Dang, Erik D. Demaine, Kaylee Ji, Pitchayut Saengrungkongka","doi":"arxiv-2407.10323","DOIUrl":"https://doi.org/arxiv-2407.10323","url":null,"abstract":"Given a chain of $HW$ cubes where each cube is marked \"turn $90^circ$\" or\u0000\"go straight\", when can it fold into a $1 times H times W$ rectangular box?\u0000We prove several variants of this (still) open problem NP-hard: (1) allowing\u0000some cubes to be wildcard (can turn or go straight); (2) allowing a larger box\u0000with empty spaces (simplifying a proof from CCCG 2022); (3) growing the box\u0000(and the number of cubes) to $2 times H times W$ (improving a prior 3D result\u0000from height $8$ to $2$); (4) with hexagonal prisms rather than cubes, each\u0000specified as going straight, turning $60^circ$, or turning $120^circ$; and\u0000(5) allowing the cubes to be encoded implicitly to compress exponentially large\u0000repetitions.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"53 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141722448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Most recent unsupervised non-rigid 3D shape matching methods are based on the functional map framework due to its efficiency and superior performance. Nevertheless, these methods struggle to obtain spatially smooth pointwise correspondences due to the lack of proper regularisation. In this work, inspired by the success of message passing on graphs, we propose a synchronous diffusion process that we use as a regulariser to achieve smoothness in non-rigid 3D shape matching problems. The intuition behind synchronous diffusion is that diffusing the same input function on two different shapes results in consistent outputs. Using different challenging datasets, we demonstrate that our novel regularisation can substantially improve the state of the art in shape matching, especially in the presence of topological noise.
{"title":"Synchronous Diffusion for Unsupervised Smooth Non-Rigid 3D Shape Matching","authors":"Dongliang Cao, Zorah Laehner, Florian Bernard","doi":"arxiv-2407.08244","DOIUrl":"https://doi.org/arxiv-2407.08244","url":null,"abstract":"Most recent unsupervised non-rigid 3D shape matching methods are based on the\u0000functional map framework due to its efficiency and superior performance.\u0000Nevertheless, respective methods struggle to obtain spatially smooth pointwise\u0000correspondences due to the lack of proper regularisation. In this work,\u0000inspired by the success of message passing on graphs, we propose a synchronous\u0000diffusion process which we use as regularisation to achieve smoothness in\u0000non-rigid 3D shape matching problems. The intuition of synchronous diffusion is\u0000that diffusing the same input function on two different shapes results in\u0000consistent outputs. Using different challenging datasets, we demonstrate that\u0000our novel regularisation can substantially improve the state-of-the-art in\u0000shape matching, especially in the presence of topological noise.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141613783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One approach to studying the Fréchet distance is to consider curves that satisfy realistic assumptions. By now, the most popular realistic assumption for curves is $c$-packedness. Existing algorithms for computing the Fréchet distance between $c$-packed curves require both curves to be $c$-packed. In this paper, we only require one of the two curves to be $c$-packed. Our result is a nearly-linear time algorithm that $(1+\varepsilon)$-approximates the Fréchet distance between a $c$-packed curve and a general curve in $\mathbb{R}^d$, for constant values of $\varepsilon$, $d$ and $c$.
{"title":"Approximating the Fréchet distance when only one curve is $c$-packed","authors":"Joachim Gudmundsson, Michael Mai, Sampson Wong","doi":"arxiv-2407.05114","DOIUrl":"https://doi.org/arxiv-2407.05114","url":null,"abstract":"One approach to studying the Fr'echet distance is to consider curves that\u0000satisfy realistic assumptions. By now, the most popular realistic assumption\u0000for curves is $c$-packedness. Existing algorithms for computing the Fr'echet\u0000distance between $c$-packed curves require both curves to be $c$-packed. In\u0000this paper, we only require one of the two curves to be $c$-packed. Our result\u0000is a nearly-linear time algorithm that $(1+varepsilon)$-approximates the\u0000Fr'echet distance between a $c$-packed curve and a general curve in $mathbb\u0000R^d$, for constant values of $varepsilon$, $d$ and $c$.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141569145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kevin Buchin, Maike Buchin, Joachim Gudmundsson, Sampson Wong
Spanner constructions focus on the initial design of the network. However, networks tend to improve over time. In this paper, we focus on the improvement step. Given a graph and a budget $k$, which $k$ edges do we add to the graph to minimise its dilation? Gudmundsson and Wong [TALG'22] provided the first positive result for this problem, but their approximation factor is linear in $k$. Our main result is a $(2\sqrt[r]{2}\, k^{1/r}, 2r)$-bicriteria approximation that runs in $O(n^3 \log n)$ time, for all $r \geq 1$. In other words, if $t^*$ is the minimum dilation after adding any $k$ edges to a graph, then our algorithm adds $O(k^{1+1/r})$ edges to the graph to obtain a dilation of $2rt^*$. Moreover, our analysis of the algorithm is tight under the Erdős girth conjecture.
{"title":"Bicriteria approximation for minimum dilation graph augmentation","authors":"Kevin Buchin, Maike Buchin, Joachim Gudmundsson, Sampson Wong","doi":"arxiv-2407.04614","DOIUrl":"https://doi.org/arxiv-2407.04614","url":null,"abstract":"Spanner constructions focus on the initial design of the network. However,\u0000networks tend to improve over time. In this paper, we focus on the improvement\u0000step. Given a graph and a budget $k$, which $k$ edges do we add to the graph to\u0000minimise its dilation? Gudmundsson and Wong [TALG'22] provided the first\u0000positive result for this problem, but their approximation factor is linear in\u0000$k$. Our main result is a $(2 sqrt[r]{2} k^{1/r},2r)$-bicriteria approximation\u0000that runs in $O(n^3 log n)$ time, for all $r geq 1$. In other words, if $t^*$\u0000is the minimum dilation after adding any $k$ edges to a graph, then our\u0000algorithm adds $O(k^{1+1/r})$ edges to the graph to obtain a dilation of\u0000$2rt^*$. Moreover, our analysis of the algorithm is tight under the ErdH{o}s\u0000girth conjecture.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141569146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sariel Har-Peled, Benjamin Raichel, Eliot W. Robson
We show that a minor variant of the continuous Fréchet distance between polygonal curves can be computed using essentially the same algorithm used to solve the discrete version, thus dramatically simplifying the algorithm for computing it. The new variant is not necessarily monotone, but this shortcoming can be easily handled via refinement. Combined with a Dijkstra/Prim type algorithm, this leads to a realization of the Fréchet distance (i.e., a morphing) that is locally optimal (aka locally correct), that is both easy to compute and, in practice, takes near-linear time on many inputs. The new morphing has the property that the leash is always as short as possible. We implemented the new algorithm and developed various strategies to obtain fast execution in practice. Among our new contributions is a simplification strategy that is distance-sensitive and enables us to compute the exact continuous Fréchet distance in near-linear time in practice. We performed extensive experiments on our new algorithm, and released Julia and Python packages with these new implementations.
{"title":"The Fréchet Distance Unleashed: Approximating a Dog with a Frog","authors":"Sariel Har-Peled, Benjamin Raichel, Eliot W. Robson","doi":"arxiv-2407.03101","DOIUrl":"https://doi.org/arxiv-2407.03101","url":null,"abstract":"We show that a minor variant of the continuous Fr'echet distance between\u0000polygonal curves can be computed using essentially the same algorithm used to\u0000solve the discrete version, thus dramatically simplifying the algorithm for\u0000computing it. The new variant is not necessarily monotone, but this shortcoming\u0000can be easily handled via refinement. Combined with a Dijkstra/Prim type algorithm, this leads to a realization of\u0000the Fr'echet distance (i.e., a morphing) that is locally optimal (aka locally\u0000correct), that is both easy to compute, and in practice, takes near linear time\u0000on many inputs. The new morphing has the property that the leash is always as\u0000short-as-possible. We implemented the new algorithm, and developed various strategies to get a\u0000fast execution in practice. Among our new contributions is a new simplification\u0000strategy that is distance-sensitive, and enables us to compute the exact\u0000continuous Fr'echet distance in near linear time in practice. We preformed\u0000extensive experiments on our new algorithm, and released texttt{Julia} and\u0000texttt{Python} packages with these new implementations.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Orthogonal Polygon Covering with Squares (OPCS) problem takes as input an orthogonal polygon $P$ without holes with $n$ vertices, where vertices have integral coordinates. The aim is to find a minimum number of axis-parallel, possibly overlapping squares which lie completely inside $P$, such that their union covers the entire region inside $P$. Aupperle et al. [1988] provide an $\mathcal{O}(N^{1.5})$-time algorithm to solve OPCS for orthogonal polygons without holes, where $N$ is the number of integral lattice points lying in the interior or on the boundary of $P$. Designing algorithms for OPCS with a running time polynomial in $n$ (the number of vertices of $P$) was posed as an open question by Aupperle et al. [1988], since $N$ can be exponentially larger than $n$. In this paper we design a polynomial-time exact algorithm for OPCS with a running time of $\mathcal{O}(n^{14})$. We also consider the following structural parameterized version of the problem. A knob in an orthogonal polygon is a polygon edge both of whose endpoints are convex polygon vertices. Given an input orthogonal polygon with $n$ vertices and $k$ knobs, we design an algorithm for OPCS with running time $\mathcal{O}(n^2 + k^{14} \cdot n)$. Aupperle et al. [1988] also study the Orthogonal Polygon with Holes Covering with Squares (OPCSH) problem, where the orthogonal polygon may have holes and the objective is again to find a minimum square covering of the input polygon; this is shown to be NP-complete. We believe there is an error in the existing proof, which gives a reduction from Planar 3-CNF. We fix this error with an alternate construction of one of the gadgets used in the reduction, thereby completing the proof of NP-completeness of OPCSH.
{"title":"Efficient Exact Algorithms for Minimum Covering of Orthogonal Polygons with Squares","authors":"Anubhav Dhar, Subham Ghosh, Sudeshna Kolay","doi":"arxiv-2407.02658","DOIUrl":"https://doi.org/arxiv-2407.02658","url":null,"abstract":"The Orthogonal Polygon Covering with Squares (OPCS) problem takes as input an\u0000orthogonal polygon $P$ without holes with $n$ vertices, where vertices have\u0000integral coordinates. The aim is to find a minimum number of axis-parallel,\u0000possibly overlapping squares which lie completely inside $P$, such that their\u0000union covers the entire region inside $P$. Aupperle et.\u0000al~cite{aupperle1988covering} provide an $mathcal O(N^{1.5})$-time algorithm\u0000to solve OPCS for orthogonal polygons without holes, where $N$ is the number of\u0000integral lattice points lying in the interior or on the boundary of $P$.\u0000Designing algorithms for OPCS with a running time polynomial in $n$ (the number\u0000of vertices of $P$) was discussed as an open question in\u0000cite{aupperle1988covering}, since $N$ can be exponentially larger than $n$. In\u0000this paper we design a polynomial-time exact algorithm for OPCS with a running\u0000time of $mathcal O(n^{14})$. We also consider the following structural parameterized version of the\u0000problem. A knob in an orthogonal polygon is a polygon edge whose both endpoints\u0000are convex polygon vertices. Given an input orthogonal polygon with $n$\u0000vertices and $k$ knobs, we design an algorithm for OPCS with running time\u0000$mathcal O(n^2 + k^{14} cdot n)$. In cite{aupperle1988covering}, the Orthogonal Polygon with Holes Covering\u0000with Squares (OPCSH) problem is also studied where orthogonal polygon could\u0000have holes, and the objective is to find a minimum square covering of the input\u0000polygon. This is shown to be NP-complete. We think there is an error in the\u0000existing proof in cite{aupperle1988covering}, where a reduction from Planar\u00003-CNF is shown. We fix this error in the proof with an alternate construction\u0000of one of the gadgets used in the reduction, hence completing the proof of\u0000NP-completeness of OPCSH.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"364 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}