NeCGS: Neural Compression for 3D Geometry Sets
Siyu Ren, Junhui Hou, Wenping Wang (arXiv:2405.15034, 2024-05-23)

This paper explores the problem of effectively compressing 3D geometry sets containing diverse categories. We make the first attempt to tackle this fundamental and challenging problem and propose NeCGS, a neural compression paradigm that can compress hundreds of detailed and diverse 3D mesh models (~684 MB) by about 900 times (to 0.76 MB) while preserving fine geometric detail. Specifically, we first represent each irregular mesh model/shape in a regular representation that implicitly describes its geometric structure: a 4D regular volume called the TSDF-Def volume. Such a regular representation not only captures local surfaces more effectively but also facilitates the subsequent processing. We then construct a quantization-aware auto-decoder network architecture to regress these 4D volumes; it exploits the similarity of local geometric structures within a model and across different models to eliminate redundancy, yielding a compact representation that consists of a small embedded feature per model and a network parameter set shared by all models. Finally, we encode the resulting features and network parameters into bitstreams through entropy coding. After decompressing the features and network parameters, we reconstruct the TSDF-Def volumes, from which the 3D surfaces are extracted through deformable marching cubes. Extensive experiments and ablation studies demonstrate the significant advantages of our NeCGS over state-of-the-art methods, both quantitatively and qualitatively.
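A minimal sketch of the auto-decoder idea described above: one learnable latent code per shape plus a decoder shared across the whole set, trained to regress each shape's 4D volume. All sizes, layer widths, and names below are illustrative assumptions (not the authors' architecture), and the quantization and entropy-coding stages are omitted.

```python
# Sketch of a shared auto-decoder over a set of shapes (assumed sizes).
import torch
import torch.nn as nn

NUM_SHAPES, LATENT_DIM, RES = 100, 64, 32  # illustrative, not the paper's values

codes = nn.Embedding(NUM_SHAPES, LATENT_DIM)  # one compact code per model
decoder = nn.Sequential(                      # parameters shared by all models
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, RES ** 3 * 4),             # 4 channels: TSDF + 3D deformation
)

def reconstruct(shape_ids):
    """Regress a batch of 4D volumes (TSDF + deformation) from latent codes."""
    out = decoder(codes(shape_ids))
    return out.view(-1, 4, RES, RES, RES)

# Training would minimize reconstruction error against the TSDF-Def volumes;
# the bitstream then stores only the entropy-coded codes and decoder weights.
ids = torch.arange(8)
volumes = reconstruct(ids)  # shape (8, 4, 32, 32, 32)
```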
{"title":"NeCGS: Neural Compression for 3D Geometry Sets","authors":"Siyu Ren, Junhui Hou, Wenping Wang","doi":"arxiv-2405.15034","DOIUrl":"https://doi.org/arxiv-2405.15034","url":null,"abstract":"This paper explores the problem of effectively compressing 3D geometry sets\u0000containing diverse categories. We make textit{the first} attempt to tackle\u0000this fundamental and challenging problem and propose NeCGS, a neural\u0000compression paradigm, which can compress hundreds of detailed and diverse 3D\u0000mesh models (~684 MB) by about 900 times (0.76 MB) with high accuracy and\u0000preservation of detailed geometric details. Specifically, we first represent\u0000each irregular mesh model/shape in a regular representation that implicitly\u0000describes the geometry structure of the model using a 4D regular volume, called\u0000TSDF-Def volume. Such a regular representation can not only capture local\u0000surfaces more effectively but also facilitate the subsequent process. Then we\u0000construct a quantization-aware auto-decoder network architecture to regress\u0000these 4D volumes, which can summarize the similarity of local geometric\u0000structures within a model and across different models for redundancy\u0000limination, resulting in more compact representations, including an embedded\u0000feature of a smaller size associated with each model and a network parameter\u0000set shared by all models. We finally encode the resulting features and network\u0000parameters into bitstreams through entropy coding. After decompressing the\u0000features and network parameters, we can reconstruct the TSDF-Def volumes, where\u0000the 3D surfaces can be extracted through the deformable marching\u0000cubes.Extensive experiments and ablation studies demonstrate the significant\u0000advantages of our NeCGS over state-of-the-art methods both quantitatively and\u0000qualitatively.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141165844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exact predicates, exact constructions and combinatorics for mesh CSG
Bruno Lévy (arXiv:2405.12949, 2024-05-21)

This article introduces a general mesh intersection algorithm that exactly computes the so-called Weiler model and uses it to implement Boolean operations with arbitrary multi-operand expressions, CSG (constructive solid geometry), and some mesh repair operations. From an input polygon soup, the algorithm first computes the co-refinement, with an exact representation of the intersection points. Then the decomposition of 3D space into volumetric regions (the Weiler model) is constructed by sorting the facets around the non-manifold intersection edges (radial sort), using specialized exact predicates. Finally, based on the input Boolean expression, the triangular facets that belong to the boundary of the result are classified. This is, to our knowledge, the first algorithm that computes an exact Weiler model. To implement all the involved predicates and constructions, two geometric kernels are proposed, tested, and discussed (arithmetic expansions and multi-precision floating point). As a guiding principle, the combinatorial information shared between the steps is kept as simple as possible; this is made possible by treating all the particular cases in the kernel. In particular, triangles with intersections are remeshed using the (uniquely defined) constrained Delaunay triangulation, with symbolic perturbations to disambiguate configurations with co-cyclic points. This makes it easy to discard the duplicated triangles that appear when remeshing overlapping facets. The method is tested and compared with previous work on the existing "thingi10K" dataset (to test co-refinement and mesh repair) and on a new, publicly available "thingiCSG" dataset (to test the full CSG pipeline), on a variety of examples featuring different types of "pathologies".
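For intuition about the radial sort around a non-manifold intersection edge: order the incident facets by angle in the plane orthogonal to the edge. The paper does this with specialized exact predicates; the numpy sketch below uses floating-point `atan2` and is therefore only a non-robust illustration of the same geometric idea.

```python
# Non-robust illustration of the "radial sort": order facets sharing an edge
# (a, b) by angle around it, each facet represented by its third vertex.
import numpy as np

def radial_sort(edge_a, edge_b, opposite_vertices):
    d = edge_b - edge_a
    d = d / np.linalg.norm(d)                    # edge direction
    # Build an orthonormal basis (u, v) of the plane orthogonal to d.
    seed = np.eye(3)[np.argmin(np.abs(d))]       # axis least aligned with d
    u = np.cross(d, seed); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    angles = []
    for p in opposite_vertices:
        w = p - edge_a
        w = w - np.dot(w, d) * d                 # project into the orthogonal plane
        angles.append(np.arctan2(np.dot(w, v), np.dot(w, u)))
    return np.argsort(angles)

a, b = np.array([0., 0., 0.]), np.array([0., 0., 1.])
tris = [np.array([1., 0., .5]), np.array([0., 1., .5]), np.array([-1., 0., .5])]
print(radial_sort(a, b, tris))  # indices of the facets in radial order
```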
{"title":"Exact predicates, exact constructions and combinatorics for mesh CSG","authors":"Bruno Lévy","doi":"arxiv-2405.12949","DOIUrl":"https://doi.org/arxiv-2405.12949","url":null,"abstract":"This article introduces a general mesh intersection algorithm that exactly\u0000computes the so-called Weiler model and that uses it to implement boolean\u0000operations with arbitrary multi-operand expressions, CSG (constructive solid\u0000geometry) and some mesh repair operations. From an input polygon soup, the\u0000algorithm first computes the co-refinement, with an exact representation of the\u0000intersection points. Then, the decomposition of 3D space into volumetric\u0000regions (Weiler model) is constructed, by sorting the facets around the\u0000non-manifold intersection edges (radial sort), using specialized exact\u0000predicates. Finally, based on the input boolean expression, the triangular\u0000facets that belong to the boundary of the result are classified. This is, to\u0000our knowledge, the first algorithm that computes an exact Weiler model. To\u0000implement all the involved predicates and constructions, two geometric kernels\u0000are proposed, tested and discussed (arithmetic expansions and multi-precision\u0000floating-point). As a guiding principle,the combinatorial information shared\u0000between each step is kept as simple as possible. It is made possible by\u0000treating all the particular cases in the kernel. In particular, triangles with\u0000intersections are remeshed using the (uniquely defined) Constrained Delaunay\u0000Triangulation, with symbolic perturbations to disambiguate configurations with\u0000co-cyclic points. It makes it easy to discard the duplicated triangles that\u0000appear when remeshing overlapping facets. The method is tested and compared\u0000with previous work, on the existing \"thingi10K\" dataset (to test co-refinement\u0000and mesh repair) and on a new \"thingiCSG\" dataset made publicly available (to\u0000test the full CSG pipeline) on a variety of interesting examples featuring\u0000different types of \"pathologies\"","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141149038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Voronoi Graph -- Improved raycasting and integration schemes for high dimensional Voronoi diagrams
Alexander Sikorski, Martin Heida (arXiv:2405.10050, 2024-05-16)

The computation of Voronoi diagrams, or their dual Delaunay triangulations, is difficult in high dimensions. In a recent publication, Polianskii and Pokorny propose an iterative randomized algorithm facilitating the approximation of Voronoi tessellations in high dimensions. In this paper, we provide an improved vertex search method that is not only exact but even faster than the bisection method that was previously recommended. Building on this, we also provide a depth-first graph-traversal algorithm that allows us to compute the entire Voronoi diagram. This enables us to compare the outcomes with those of classical algorithms like qHull, which we either match or marginally beat in terms of computation time. We furthermore show how the raycasting algorithm naturally lends itself to a Monte Carlo approximation of the volume and boundary integrals of the Voronoi cells, both of which are of importance for finite volume methods. We compare the Monte Carlo methods to exact polygonal integration, as well as to a hybrid approximation scheme.
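For intuition about the Monte Carlo estimation of Voronoi cell volumes: sample points in a bounding box and count the fraction whose nearest site is the cell's site. The sketch below (plain numpy/scipy, not the authors' code) shows the simplest rejection-free variant; the paper's raycasting scheme samples along rays instead.

```python
# Monte Carlo estimate of the volume of one Voronoi cell inside a box.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
d, n = 5, 200                              # dimension and site count (illustrative)
sites = rng.uniform(0.0, 1.0, (n, d))
tree = cKDTree(sites)                      # nearest-site queries

def cell_volume(i, num_samples=100_000, box=(0.0, 1.0)):
    """Fraction of the box closer to site i than to any other site, times box volume."""
    q = rng.uniform(box[0], box[1], (num_samples, d))
    _, nearest = tree.query(q)
    return (nearest == i).mean() * (box[1] - box[0]) ** d

print(cell_volume(0))
```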
{"title":"Voronoi Graph -- Improved raycasting and integration schemes for high dimensional Voronoi diagrams","authors":"Alexander Sikorski, Martin Heida","doi":"arxiv-2405.10050","DOIUrl":"https://doi.org/arxiv-2405.10050","url":null,"abstract":"The computation of Voronoi Diagrams, or their dual Delauney triangulations is\u0000difficult in high dimensions. In a recent publication Polianskii and Pokorny\u0000propose an iterative randomized algorithm facilitating the approximation of\u0000Voronoi tesselations in high dimensions. In this paper, we provide an improved\u0000vertex search method that is not only exact but even faster than the bisection\u0000method that was previously recommended. Building on this we also provide a\u0000depth-first graph-traversal algorithm which allows us to compute the entire\u0000Voronoi diagram. This enables us to compare the outcomes with those of\u0000classical algorithms like qHull, which we either match or marginally beat in\u0000terms of computation time. We furthermore show how the raycasting algorithm\u0000naturally lends to a Monte Carlo approximation for the volume and boundary\u0000integrals of the Voronoi cells, both of which are of importance for finite\u0000Volume methods. We compare the Monte-Carlo methods to the exact polygonal\u0000integration, as well as a hybrid approximation scheme.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141059363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TPMS2STEP: error-controlled and C2 continuity-preserving translation of TPMS models to STEP files based on constrained-PIA
Yaonaiming Zhao, Qiang Zou, Guoyue Luo, Jiayu Wu, Sifan Chen (arXiv:2405.07946, 2024-05-13)

Triply periodic minimal surfaces (TPMS) are emerging as an important way of designing microstructures. However, there has been limited use of commercial CAD/CAM/CAE software packages for TPMS design and manufacturing, mainly because TPMS models are consistently described in the functional representation (F-rep) format, while modern CAD/CAM/CAE tools are built upon the boundary representation (B-rep) format. One possible solution to this gap is translating TPMS to STEP, the standard data exchange format of CAD/CAM/CAE. Following this direction, this paper proposes a new translation method with error-controlling and $C^2$ continuity-preserving features. It is based on an approximation-error-driven TPMS sampling algorithm and a constrained-PIA algorithm. The sampling algorithm controls the deviation between the original and translated models; with it, an error bound of $2\epsilon$ on the deviation can be ensured if two conditions called $\epsilon$-density and $\epsilon$-approximation are satisfied. The constrained-PIA algorithm enforces $C^2$ continuity constraints during TPMS approximation while attaining high efficiency. A theoretical convergence proof of this algorithm is also given. The effectiveness of the translation method is demonstrated by a series of examples and comparisons.
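To make the F-rep starting point concrete: a TPMS such as the gyroid is the zero level set of an implicit trigonometric function, which can be sampled on a grid. The sketch below (numpy plus scikit-image, assumed dependencies) only extracts a triangle mesh via marching cubes; the paper's pipeline instead fits $C^2$-continuous surfaces suitable for STEP export.

```python
# Toy F-rep sampling of a gyroid TPMS and mesh extraction via marching cubes.
# Illustrates the implicit representation only, not the paper's B-spline fitting.
import numpy as np
from skimage.measure import marching_cubes

N = 64
t = np.linspace(0, 2 * np.pi, N)
x, y, z = np.meshgrid(t, t, t, indexing="ij")
# Gyroid implicit function: the zero level set is the minimal surface.
f = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
verts, faces, normals, values = marching_cubes(f, level=0.0)
print(verts.shape, faces.shape)
```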
{"title":"TPMS2STEP: error-controlled and C2 continuity-preserving translation of TPMS models to STEP files based on constrained-PIA","authors":"Yaonaiming Zhao, Qiang Zou, Guoyue Luo, Jiayu Wu, Sifan Chen","doi":"arxiv-2405.07946","DOIUrl":"https://doi.org/arxiv-2405.07946","url":null,"abstract":"Triply periodic minimal surface (TPMS) is emerging as an important way of\u0000designing microstructures. However, there has been limited use of commercial\u0000CAD/CAM/CAE software packages for TPMS design and manufacturing. This is mainly\u0000because TPMS is consistently described in the functional representation (F-rep)\u0000format, while modern CAD/CAM/CAE tools are built upon the boundary\u0000representation (B-rep) format. One possible solution to this gap is translating\u0000TPMS to STEP, which is the standard data exchange format of CAD/CAM/CAE.\u0000Following this direction, this paper proposes a new translation method with\u0000error-controlling and $C^2$ continuity-preserving features. It is based on an\u0000approximation error-driven TPMS sampling algorithm and a constrained-PIA\u0000algorithm. The sampling algorithm controls the deviation between the original\u0000and translated models. With it, an error bound of $2epsilon$ on the deviation\u0000can be ensured if two conditions called $epsilon$-density and\u0000$epsilon$-approximation are satisfied. The constrained-PIA algorithm enforces\u0000$C^2$ continuity constraints during TPMS approximation, and meanwhile attaining\u0000high efficiency. A theoretical convergence proof of this algorithm is also\u0000given. The effectiveness of the translation method has been demonstrated by a\u0000series of examples and comparisons.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140929387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient computation of topological integral transforms
Vadim Lebovici, Steve Oudot, Hugo Passe (arXiv:2405.02256, 2024-05-03)

Topological integral transforms have found many applications in shape analysis, from the prediction of clinical outcomes in brain cancer to the analysis of barley seeds. Using the Euler characteristic as a measure, these objects record rich geometric information on weighted polytopal complexes. While some implementations exist, they only enable discretized representations of the transforms, and they do not handle weighted complexes (such as images). Moreover, recent hybrid transforms lack an implementation. In this paper, we introduce Eucalc, a novel implementation of three topological integral transforms -- the Euler characteristic transform, the Radon transform, and hybrid transforms -- for weighted cubical complexes. Leveraging piecewise-linear Morse theory and Euler calculus, the algorithms significantly reduce computational complexity by focusing on critical points. Our software provides exact representations of the transforms, handles both binary and grayscale images, and supports multi-core processing. It is publicly available as a C++ library with a Python wrapper. We present mathematical foundations, implementation details, and experimental evaluations demonstrating Eucalc's efficiency.
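For intuition about the measure underlying these transforms: the Euler characteristic of a binary image's cubical complex is #vertices - #edges + #squares, where the cells are the pixels' corners, sides, and interiors. The numpy sketch below computes it with the standard construction; it is unrelated to Eucalc's actual API.

```python
# Euler characteristic of the cubical complex of a 2D binary image.
import numpy as np

def euler_characteristic(img):
    """chi = #vertices - #edges + #squares of the image's cubical complex."""
    b = np.asarray(img, dtype=bool)
    p = np.pad(b, 1)  # border of empty pixels simplifies the boundary cases
    # A grid vertex or edge is present iff at least one incident pixel is set.
    verts = p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]
    h_edges = p[:-1, 1:-1] | p[1:, 1:-1]   # edges between side-by-side vertices
    v_edges = p[1:-1, :-1] | p[1:-1, 1:]   # edges between stacked vertices
    return int(verts.sum()) - int(h_edges.sum() + v_edges.sum()) + int(b.sum())

disk = np.ones((3, 3), dtype=bool)
ring = disk.copy(); ring[1, 1] = False
print(euler_characteristic(disk), euler_characteristic(ring))  # 1 0
```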
{"title":"Efficient computation of topological integral transforms","authors":"Vadim Lebovici, Steve Oudot, Hugo Passe","doi":"arxiv-2405.02256","DOIUrl":"https://doi.org/arxiv-2405.02256","url":null,"abstract":"Topological integral transforms have found many applications in shape\u0000analysis, from prediction of clinical outcomes in brain cancer to analysis of\u0000barley seeds. Using Euler characteristic as a measure, these objects record\u0000rich geometric information on weighted polytopal complexes. While some\u0000implementations exist, they only enable discretized representations of the\u0000transforms, and they do not handle weighted complexes (such as for instance\u0000images). Moreover, recent hybrid transforms lack an implementation. In this paper, we introduce Eucalc, a novel implementation of three\u0000topological integral transforms -- the Euler characteristic transform, the\u0000Radon transform, and hybrid transforms -- for weighted cubical complexes.\u0000Leveraging piecewise linear Morse theory and Euler calculus, the algorithms\u0000significantly reduce computational complexity by focusing on critical points.\u0000Our software provides exact representations of transforms, handles both binary\u0000and grayscale images, and supports multi-core processing. It is publicly\u0000available as a C++ library with a Python wrapper. We present mathematical\u0000foundations, implementation details, and experimental evaluations,\u0000demonstrating Eucalc's efficiency.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140881761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Algorithms for Finding Triangles and Computing the Girth in Unit Disk and Transmission Graphs
Katharina Klost, Wolfgang Mulzer (arXiv:2405.01180, 2024-05-02)

We describe optimal robust algorithms for finding a triangle and the unweighted girth in a unit disk graph, as well as for finding a triangle in a transmission graph. In the robust setting, the input is not given as a set of sites in the plane, but rather as an abstract graph, which may or may not be realizable as a unit disk graph or a transmission graph. If the graph is realizable, the algorithm is guaranteed to give the correct answer. If not, the algorithm will either give a correct answer or correctly state that the input is not of the required type.
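The paper's algorithms exploit the geometric structure that a realizable input must have; as a point of reference, a generic triangle search on an abstract graph (no geometry, and asymptotically slower on dense inputs) is only a few lines:

```python
# Baseline triangle search in an abstract graph, for contrast with the
# geometry-aware robust algorithms described above.
def find_triangle(adj):
    """adj maps each vertex to its set of neighbors. Returns a triangle or None."""
    for u, nbrs in adj.items():
        for v in nbrs:
            if u < v:                      # check each edge once
                common = nbrs & adj[v]     # vertices adjacent to both u and v
                for w in common:
                    return (u, v, w)
    return None

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(find_triangle(adj))  # (0, 1, 2)
```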
{"title":"Robust Algorithms for Finding Triangles and Computing the Girth in Unit Disk and Transmission Graphs","authors":"Katharina Klost, Wolfgang Mulzer","doi":"arxiv-2405.01180","DOIUrl":"https://doi.org/arxiv-2405.01180","url":null,"abstract":"We describe optimal robust algorithms for finding a triangle and the\u0000unweighted girth in a unit disk graph, as well as finding a triangle in a\u0000transmission graph.In the robust setting, the input is not given as a set of\u0000sites in the plane, but rather as an abstract graph. The input may or may not\u0000be realizable as a unit disk graph or a transmission graph. If the graph is\u0000realizable, the algorithm is guaranteed to give the correct answer. If not, the\u0000algorithm will either give a correct answer or correctly state that the input\u0000is not of the required type.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140828286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework for Approximation Schemes on Knapsack and Packing Problems of Hyperspheres and Fat Objects
Vítor Gomes Chagas, Elisa Dell'Arriva, Flávio Keidi Miyazawa (arXiv:2405.00246, 2024-04-30)

Geometric packing problems have been investigated for centuries in mathematics. In contrast, works on sphere packing in the field of approximation algorithms are scarce; most results concern squares and rectangles and their d-dimensional counterparts. To help fill this gap, we present a framework that yields approximation schemes for the geometric knapsack problem as well as other packing problems and some generalizations, and that supports not only hyperspheres but also a wide range of shapes for the items and the bins. Our first result is a PTAS for the hypersphere multiple knapsack problem. In fact, we can deal with a more general version of the problem that imposes additional constraints on the items; under some conditions, these encompass very common and pertinent constraints such as conflict constraints, multiple-choice constraints, and capacity constraints. Our second result is a resource augmentation scheme for the multiple knapsack problem for a wide range of convex fat objects, not restricted to polygons and polytopes; examples are ellipsoids, rhombi, hypercubes, and hyperspheres under the Lp-norm. For the generalized version of the multiple knapsack problem, our technique still yields a PTAS under resource augmentation for these objects. Thirdly, we improve the resource augmentation schemes for fat objects to allow rotation of the objects by any angle; this result, in particular, adds something extra to our framework, since most results for such general objects are limited to translations. Finally, our framework also handles other problems such as the cutting stock problem, the minimum-size bin packing problem, and the multiple strip packing problem.
{"title":"A Framework for Approximation Schemes on Knapsack and Packing Problems of Hyperspheres and Fat Objects","authors":"Vítor Gomes Chagas, Elisa Dell'Arriva, Flávio Keidi Miyazawa","doi":"arxiv-2405.00246","DOIUrl":"https://doi.org/arxiv-2405.00246","url":null,"abstract":"Geometric packing problems have been investigated for centuries in\u0000mathematics. In contrast, works on sphere packing in the field of approximation\u0000algorithms are scarce. Most results are for squares and rectangles, and their\u0000d-dimensional counterparts. To help fill this gap, we present a framework that\u0000yields approximation schemes for the geometric knapsack problem as well as\u0000other packing problems and some generalizations, and that supports not only\u0000hyperspheres but also a wide range of shapes for the items and the bins. Our\u0000first result is a PTAS for the hypersphere multiple knapsack problem. In fact,\u0000we can deal with a more generalized version of the problem that contains\u0000additional constraints on the items. These constraints, under some conditions,\u0000can encompass very common and pertinent constraints such as conflict\u0000constraints, multiple-choice constraints, and capacity constraints. Our second\u0000result is a resource augmentation scheme for the multiple knapsack problem for\u0000a wide range of convex fat objects, which are not restricted to polygons and\u0000polytopes. Examples are ellipsoids, rhombi, hypercubes, hyperspheres under the\u0000Lp-norm, etc. Also, for the generalized version of the multiple knapsack\u0000problem, our technique still yields a PTAS under resource augmentation for\u0000these objects. Thirdly, we improve the resource augmentation schemes of fat\u0000objects to allow rotation on the objects by any angle. This result, in\u0000particular, brings something extra to our framework, since most results\u0000comprising such general objects are limited to translations. At last, our\u0000framework is able to contemplate other problems such as the cutting stock\u0000problem, the minimum-size bin packing problem and the multiple strip packing\u0000problem.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140828227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A faster algorithm for the Fréchet distance in 1D for the imbalanced case
Lotte Blank, Anne Driemel (arXiv:2404.18738, 2024-04-29)

The fine-grained complexity of computing the Fréchet distance has been a topic of much recent work, starting with the quadratic SETH-based conditional lower bound by Bringmann from 2014. Subsequent work established largely the same complexity lower bounds for the Fréchet distance in 1D. However, the imbalanced case, which was shown by Bringmann to be tight in dimensions $d \geq 2$, was still left open. Filling in this gap, we show that a faster algorithm for the Fréchet distance in the imbalanced case is possible: given two 1-dimensional curves of complexity $n$ and $n^{\alpha}$ for some $\alpha \in (0,1)$, we can compute their Fréchet distance in $O(n^{2\alpha} \log^2 n + n \log n)$ time. This rules out a conditional lower bound of the form $O((nm)^{1-\varepsilon})$, which Bringmann showed for $d \geq 2$ and any $\varepsilon > 0$, in turn showing a strict separation from the setting $d=1$. At the heart of our approach lies a data structure that stores a 1-dimensional curve $P$ of complexity $n$ and supports queries with a curve $Q$ of complexity $m$ for the continuous Fréchet distance between $P$ and $Q$. The data structure has size $\mathcal{O}(n \log n)$ and query time $\mathcal{O}(m^2 \log^2 n)$. Our proof uses a key lemma that is based on the concept of visiting orders and may be of independent interest. We demonstrate this by substantially simplifying the correctness proof of a clustering algorithm by Driemel, Krivošija and Sohler from 2015.
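For intuition about the quadratic baseline being beaten here: the discrete Fréchet distance between two curves has a classic $O(nm)$ dynamic program, sketched below for 1D curves. The paper concerns the continuous Fréchet distance and shows how to go below this bound when the curve complexities are imbalanced.

```python
# Classic O(n*m) dynamic program for the *discrete* Fréchet distance in 1D,
# shown only as a baseline; the paper studies the continuous variant.
from functools import lru_cache

def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def c(i, j):
        d = abs(P[i] - Q[j])            # 1D curves: the distance is |p - q|
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)  # only Q's frog can have moved
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

print(discrete_frechet((0.0, 1.0, 3.0), (0.0, 2.0, 3.0)))  # 1.0
```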
{"title":"A faster algorithm for the Fréchet distance in 1D for the imbalanced case","authors":"Lotte Blank, Anne Driemel","doi":"arxiv-2404.18738","DOIUrl":"https://doi.org/arxiv-2404.18738","url":null,"abstract":"The fine-grained complexity of computing the Fr'echet distance has been a\u0000topic of much recent work, starting with the quadratic SETH-based conditional\u0000lower bound by Bringmann from 2014. Subsequent work established largely the\u0000same complexity lower bounds for the Fr'echet distance in 1D. However, the\u0000imbalanced case, which was shown by Bringmann to be tight in dimensions $dgeq\u00002$, was still left open. Filling in this gap, we show that a faster algorithm\u0000for the Fr'echet distance in the imbalanced case is possible: Given two\u00001-dimensional curves of complexity $n$ and $n^{alpha}$ for some $alpha in\u0000(0,1)$, we can compute their Fr'echet distance in $O(n^{2alpha} log^2 n + n\u0000log n)$ time. This rules out a conditional lower bound of the form\u0000$O((nm)^{1-epsilon})$ that Bringmann showed for $d geq 2$ and any\u0000$varepsilon>0$ in turn showing a strict separation with the setting $d=1$. At\u0000the heart of our approach lies a data structure that stores a 1-dimensional\u0000curve $P$ of complexity $n$, and supports queries with a curve $Q$ of\u0000complexity~$m$ for the continuous Fr'echet distance between $P$ and $Q$. The\u0000data structure has size in $mathcal{O}(nlog n)$ and uses query time in\u0000$mathcal{O}(m^2 log^2 n)$. Our proof uses a key lemma that is based on the\u0000concept of visiting orders and may be of independent interest. We demonstrate\u0000this by substantially simplifying the correctness proof of a clustering\u0000algorithm by Driemel, Krivov{s}ija and Sohler from 2015.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140827978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Clustering Induced Voronoi Diagrams
Danny Z. Chen, Ziyun Huang, Yangwei Liu, Jinhui Xu (arXiv:2404.18906, 2024-04-29)

In this paper, we study a generalization of the classical Voronoi diagram called the clustering induced Voronoi diagram (CIVD). Different from the traditional model, CIVD takes as its sites the power set $U$ of an input set $P$ of objects. For each subset $C$ of $P$, CIVD uses an influence function $F(C,q)$ to measure the total (or joint) influence of all objects in $C$ on an arbitrary point $q$ in the space $\mathbb{R}^d$, and determines the influence-based Voronoi cell in $\mathbb{R}^d$ for $C$. This generalized model offers a number of new features (e.g., simultaneous clustering and space partition) over the Voronoi diagram that are useful in various new applications. We investigate the general conditions on the influence function that ensure the existence of a small-size (e.g., nearly linear) approximate CIVD for a set $P$ of $n$ points in $\mathbb{R}^d$ for some fixed $d$. To construct the CIVD, we first present a standalone new technique, called approximate influence (AI) decomposition, for the general CIVD problem. With only $O(n \log n)$ time, the AI decomposition partitions the space $\mathbb{R}^{d}$ into a nearly linear number of cells so that all points in each cell receive their approximate maximum influence from the same (possibly unknown) site (i.e., a subset of $P$). Based on this technique, we develop assignment algorithms to determine a proper site for each cell in the decomposition and form various $(1-\epsilon)$-approximate CIVDs for some small fixed $\epsilon > 0$. In particular, we consider two representative CIVD problems, vector CIVD and density-based CIVD, and show that both admit fast assignment algorithms; consequently, their $(1-\epsilon)$-approximate CIVDs can be built in $O(n \log^{\max\{3,d+1\}} n)$ and $O(n \log^{2} n)$ time, respectively.
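A toy version of the influence idea: given a fixed family of candidate clusters (rather than the full power set, which is exponential), assign each query point to the cluster with the largest joint influence under a density-style kernel. Everything below is an illustrative assumption, far simpler than the paper's AI decomposition and assignment algorithms.

```python
# Toy influence-based assignment: each query point goes to the candidate
# cluster whose total Gaussian influence on it is largest. This brute force
# stands in for the paper's near-linear approximate decomposition.
import numpy as np

def influence(C, q):
    """Density-style joint influence F(C, q) of a point set C on query q."""
    return np.exp(-((C - q) ** 2).sum(axis=1)).sum()

rng = np.random.default_rng(1)
P = rng.normal(size=(30, 2))
clusters = [P[:10], P[10:20], P[20:]]   # a fixed candidate family, not 2^P

def civd_cell(q):
    """Index of the cluster whose influence-based cell contains q."""
    return max(range(len(clusters)), key=lambda k: influence(clusters[k], q))

print(civd_cell(np.zeros(2)))
```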
{"title":"On Clustering Induced Voronoi Diagrams","authors":"Danny Z. Chen, Ziyun Huang, Yangwei Liu, Jinhui Xu","doi":"arxiv-2404.18906","DOIUrl":"https://doi.org/arxiv-2404.18906","url":null,"abstract":"In this paper, we study a generalization of the classical Voronoi diagram,\u0000called clustering induced Voronoi diagram (CIVD). Different from the\u0000traditional model, CIVD takes as its sites the power set $U$ of an input set\u0000$P$ of objects. For each subset $C$ of $P$, CIVD uses an influence function\u0000$F(C,q)$ to measure the total (or joint) influence of all objects in $C$ on an\u0000arbitrary point $q$ in the space $mathbb{R}^d$, and determines the\u0000influence-based Voronoi cell in $mathbb{R}^d$ for $C$. This generalized model\u0000offers a number of new features (e.g., simultaneous clustering and space\u0000partition) to Voronoi diagram which are useful in various new applications. We\u0000investigate the general conditions for the influence function which ensure the\u0000existence of a small-size (e.g., nearly linear) approximate CIVD for a set $P$\u0000of $n$ points in $mathbb{R}^d$ for some fixed $d$. To construct CIVD, we first\u0000present a standalone new technique, called approximate influence (AI)\u0000decomposition, for the general CIVD problem. With only $O(nlog n)$ time, the\u0000AI decomposition partitions the space $mathbb{R}^{d}$ into a nearly linear\u0000number of cells so that all points in each cell receive their approximate\u0000maximum influence from the same (possibly unknown) site (i.e., a subset of\u0000$P$). Based on this technique, we develop assignment algorithms to determine a\u0000proper site for each cell in the decomposition and form various\u0000$(1-epsilon)$-approximate CIVDs for some small fixed $epsilon>0$.\u0000Particularly, we consider two representative CIVD problems, vector CIVD and\u0000density-based CIVD, and show that both of them admit fast assignment\u0000algorithms; consequently, their $(1-epsilon)$-approximate CIVDs can be built\u0000in $O(n log^{max{3,d+1}}n)$ and $O(n log^{2} n)$ time, respectively.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140828226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Bridge, Twin Bridges and Beyond: Inserting Edges into a Road Network to Minimize the Constrained Diameters
Zhidan Feng, Henning Fernau, Binhai Zhu (arXiv:2404.19164, 2024-04-29)

Given a road network modelled as a planar straight-line graph $G=(V,E)$ with $|V|=n$, for $(u,v) \in V \times V$ the shortest-path distance between $u$ and $v$ is denoted $\delta_G(u,v)$, and $\delta(G) = \max_{(u,v) \in V \times V} \delta_G(u,v)$ is called the diameter of $G$. Given a disconnected road network modelled as two disjoint trees $T_1$ and $T_2$, this paper first aims at inserting one and two edges (bridges) between them to minimize the (constrained) diameter $\delta(T_1 \cup T_2 \cup I_j)$ going through the inserted edges, where $I_j$, $j=1,2$, is the set of inserted edges with $|I_1|=1$ and $|I_2|=2$; the corresponding problems are called the optimal bridge and twin bridges problems. Since the resulting graph becomes more complex when more than one edge is inserted between the two trees, for a general network $G$ we consider the problem of inserting a minimum number $k$ of edges such that the shortest distances $\delta_G(u_i,v_i)$ between a set of $m$ pairs $P = \{(u_i,v_i) \mid u_i,v_i \in V, i \in [m]\}$ are all decreased.

The main results of this paper are summarized as follows:
(1) We show that the optimal bridge problem can be solved in $O(n^2)$ time and that a variation of it has a near-quadratic lower bound unless SETH fails. The proof also implies that the famous 3-SUM problem does have a near-quadratic lower bound for large integers, e.g., when each of the $n$ input integers has $\Omega(\log n)$ decimal digits. We then give a simple factor-2 $O(n \log n)$-time approximation algorithm for the optimal bridge problem.
(2) We present an $O(n^4)$-time algorithm to solve the twin bridges problem, exploiting new properties not present in the optimal bridge problem.
(3) For the general problem of inserting $k$ edges to reduce the (graph) distances between $m$ given pairs, we show that the problem is NP-complete.
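To make the optimal bridge objective concrete: the diameter through a single inserted edge $(u,v)$ decomposes as $\mathrm{ecc}_{T_1}(u) + w(u,v) + \mathrm{ecc}_{T_2}(v)$, where $\mathrm{ecc}$ is the eccentricity within each tree, so a naive $O(n^2)$ baseline just minimizes this sum over all pairs. The sketch below assumes networkx and unit edge weights for simplicity; the paper's setting has geometric edge lengths and also a near-quadratic conditional lower bound for a variant.

```python
# Brute-force optimal bridge between two trees under unit edge weights:
# minimize ecc_T1(u) + w + ecc_T2(v) over all candidate bridges (u, v).
import networkx as nx

def eccentricities(T):
    # O(n^2) overall: a shortest-path sweep from every node of the tree.
    dist = dict(nx.all_pairs_shortest_path_length(T))
    return {u: max(dist[u].values()) for u in T}

def optimal_bridge(T1, T2, w=1):
    e1, e2 = eccentricities(T1), eccentricities(T2)
    return min((e1[u] + w + e2[v], u, v) for u in T1 for v in T2)

T1 = nx.path_graph(5)                                       # vertices 0..4
T2 = nx.relabel_nodes(nx.star_graph(4), lambda x: x + 10)   # hub is vertex 10
print(optimal_bridge(T1, T2))  # (4, 2, 10): bridge the path's center to the hub
```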
{"title":"Optimal Bridge, Twin Bridges and Beyond: Inserting Edges into a Road Network to Minimize the Constrained Diameters","authors":"Zhidan Feng, Henning Fernau, Binhai Zhu","doi":"arxiv-2404.19164","DOIUrl":"https://doi.org/arxiv-2404.19164","url":null,"abstract":"Given a road network modelled as a planar straight-line graph $G=(V,E)$ with\u0000$|V|=n$, let $(u,v)in Vtimes V$, the shortest path (distance) between $u,v$\u0000is denoted as $delta_G(u,v)$. Let $delta(G)=max_{(u,v)}delta_G(u,v)$, for\u0000$(u,v)in Vtimes V$, which is called the diameter of $G$. Given a disconnected\u0000road network modelled as two disjoint trees $T_1$ and $T_2$, this paper first\u0000aims at inserting one and two edges (bridges) between them to minimize the\u0000(constrained) diameter $delta(T_1cup T_2cup I_j)$ going through the inserted\u0000edges, where $I_j, j=1,2$, is the set of inserted edges with $|I_1|=1$ and\u0000$|I_2|=2$. The corresponding problems are called the {em optimal bridge} and\u0000{em twin bridges} problems. Since when more than one edge are inserted between\u0000two trees the resulting graph is becoming more complex, for the general network\u0000$G$ we consider the problem of inserting a minimum of $k$ edges such that the\u0000shortest distances between a set of $m$ pairs $P={(u_i,v_i)mid u_i,v_iin V,\u0000iin [m]}$, $delta_G(u_i,v_i)$'s, are all decreased. The main results of this paper are summarized as follows: (1) We show that the optimal bridge problem can be solved in $O(n^2)$ time\u0000and that a variation of it has a near-quadratic lower bound unless SETH fails.\u0000The proof also implies that the famous 3-SUM problem does have a near-quadratic\u0000lower bound for large integers, e.g., each of the $n$ input integers has\u0000$Omega(log n)$ decimal digits. We then give a simple factor-2 $O(nlog n)$\u0000time approximation algorithm for the optimal bridge problem. (2) We present an $O(n^4)$ time algorithm to solve the twin bridges problem,\u0000exploiting some new property not in the optimal bridge problem. (3) For the general problem of inserting $k$ edges to reduce the (graph)\u0000distances between $m$ given pairs, we show that the problem is NP-complete.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140828285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}