Efficient and robust anisotropic mesh adaptation is crucial for Computational Fluid Dynamics (CFD) simulations. The CFD Vision 2030 Study highlights the pressing need for this technology, particularly for simulations targeting supercomputers. This work applies a fine-grained speculative approach to anisotropic mesh operations. Our implementation exhibits more than 90% parallel efficiency on a multi-core node. Additionally, we evaluate our method within an adaptive pipeline for a spectrum of publicly available test-cases that includes both analytically derived and error-based fields. For all test-cases, our results are in accordance with published results in the literature. Support for CAD-based data is introduced, and its effectiveness is demonstrated on one of NASA's High-Lift prediction workshop cases.
{"title":"Parallel Metric-based Anisotropic Mesh Adaptation using Speculative Execution on Shared Memory","authors":"Christos Tsolakis, Nikos Chrisochoides","doi":"arxiv-2404.18030","DOIUrl":"https://doi.org/arxiv-2404.18030","url":null,"abstract":"Efficient and robust anisotropic mesh adaptation is crucial for Computational\u0000Fluid Dynamics (CFD) simulations. The CFD Vision 2030 Study highlights the\u0000pressing need for this technology, particularly for simulations targeting\u0000supercomputers. This work applies a fine-grained speculative approach to\u0000anisotropic mesh operations. Our implementation exhibits more than 90% parallel\u0000efficiency on a multi-core node. Additionally, we evaluate our method within an\u0000adaptive pipeline for a spectrum of publicly available test-cases that includes\u0000both analytically derived and error-based fields. For all test-cases, our\u0000results are in accordance with published results in the literature. Support for\u0000CAD-based data is introduced, and its effectiveness is demonstrated on one of\u0000NASA's High-Lift prediction workshop cases.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"81 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140828283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mercè Claverol, Andrea de las Heras-Parrilla, Clemens Huemer, Dolores Lara
Let $S$ be a set of $n$ points in general position in $\mathbb{R}^d$. The order-$k$ Voronoi diagram of $S$, $V_k(S)$, is a subdivision of $\mathbb{R}^d$ into cells whose points have the same $k$ nearest points of $S$. Sibson, in his seminal paper from 1980 (A vector identity for the Dirichlet tessellation), gives a formula to express a point $Q$ of $S$ as a convex combination of other points of $S$ by using ratios of volumes of the intersection of cells of $V_2(S)$ and the cell of $Q$ in $V_1(S)$. The natural neighbour interpolation method is based on Sibson's formula. We generalize his result to express $Q$ as a convex combination of other points of $S$ by using ratios of volumes from Voronoi diagrams of any given order.
{"title":"Sibson's formula for higher order Voronoi diagrams","authors":"Mercè Claverol, Andrea de las Heras-Parrilla, Clemens Huemer, Dolores Lara","doi":"arxiv-2404.17422","DOIUrl":"https://doi.org/arxiv-2404.17422","url":null,"abstract":"Let $S$ be a set of $n$ points in general position in $mathbb{R}^d$. The\u0000order-$k$ Voronoi diagram of $S$, $V_k(S)$, is a subdivision of $mathbb{R}^d$\u0000into cells whose points have the same $k$ nearest points of $S$. Sibson, in his seminal paper from 1980 (A vector identity for the Dirichlet\u0000tessellation), gives a formula to express a point $Q$ of $S$ as a convex\u0000combination of other points of $S$ by using ratios of volumes of the\u0000intersection of cells of $V_2(S)$ and the cell of $Q$ in $V_1(S)$. The natural\u0000neighbour interpolation method is based on Sibson's formula. We generalize his\u0000result to express $Q$ as a convex combination of other points of $S$ by using\u0000ratios of volumes from Voronoi diagrams of any given order.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"136 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140809297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weixiao Gao, Ravi Peters, Hugo Ledoux, Jantien Stoter
This paper presents a new algorithm for filling holes in Level of Detail 2 (LoD2) building mesh models, addressing the challenges posed by geometric inaccuracies and topological errors. Unlike traditional methods that often alter the original geometric structure or impose stringent input requirements, our approach preserves the integrity of the original model while effectively managing a range of topological errors. The algorithm operates in three distinct phases: (1) pre-processing, which addresses topological errors and identifies pseudo-holes; (2) detecting and extracting complete border rings of holes; and (3) remeshing, aimed at reconstructing the complete geometric surface. Our method demonstrates superior performance compared to related work in filling holes in building mesh models, achieving both uniform local geometry around the holes and structural completeness. Comparative experiments with established methods demonstrate our algorithm's effectiveness in delivering more complete and geometrically consistent hole-filling results, albeit with a slight trade-off in efficiency. The paper also identifies challenges in handling certain complex scenarios and outlines future directions for research, including the pursuit of a comprehensive repair goal for LoD2 models to achieve watertight 2-manifold models with correctly oriented normals. Our source code is available at https://github.com/tudelft3d/Automatic-Repair-of-LoD2-Building-Models.git
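Phase (2) of such a pipeline typically rests on the observation that a hole's border consists of mesh edges incident to exactly one face. A minimal sketch of border-ring extraction under that assumption (a hypothetical helper, not the authors' implementation; it assumes a manifold-like triangle list where each boundary vertex has one incoming and one outgoing boundary edge):

```python
from collections import defaultdict

def boundary_rings(faces):
    """Extract closed border rings (hole boundaries) of a triangle mesh.

    faces: list of (i, j, k) vertex-index triples.
    Returns a list of vertex loops, each a list of vertex indices.
    """
    # Count each undirected edge; boundary edges belong to exactly one face.
    count = defaultdict(int)
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            count[frozenset((a, b))] += 1
    # Record boundary edges with reversed orientation so we can walk the rim.
    succ = {}
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            if count[frozenset((a, b))] == 1:
                succ[b] = a
    # Chain boundary edges into closed loops.
    rings, visited = [], set()
    for start in succ:
        if start in visited:
            continue
        ring, v = [], start
        while v not in visited:
            visited.add(v)
            ring.append(v)
            v = succ[v]
        rings.append(ring)
    return rings
```

For a square made of two triangles, this returns a single four-vertex ring: the outer border. Non-manifold configurations (the pseudo-holes handled in phase (1) above) would need extra bookkeeping before this walk is well defined.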
{"title":"Filling holes in LoD2 building models","authors":"Weixiao Gao, Ravi Peters, Hugo Ledoux, Jantien Stoter","doi":"arxiv-2404.15892","DOIUrl":"https://doi.org/arxiv-2404.15892","url":null,"abstract":"This paper presents a new algorithm for filling holes in Level of Detail 2\u0000(LoD2) building mesh models, addressing the challenges posed by geometric\u0000inaccuracies and topological errors. Unlike traditional methods that often\u0000alter the original geometric structure or impose stringent input requirements,\u0000our approach preserves the integrity of the original model while effectively\u0000managing a range of topological errors. The algorithm operates in three\u0000distinct phases: (1) pre-processing, which addresses topological errors and\u0000identifies pseudo-holes; (2) detecting and extracting complete border rings of\u0000holes; and (3) remeshing, aimed at reconstructing the complete geometric\u0000surface. Our method demonstrates superior performance compared to related work\u0000in filling holes in building mesh models, achieving both uniform local geometry\u0000around the holes and structural completeness. Comparative experiments with\u0000established methods demonstrate our algorithm's effectiveness in delivering\u0000more complete and geometrically consistent hole-filling results, albeit with a\u0000slight trade-off in efficiency. The paper also identifies challenges in\u0000handling certain complex scenarios and outlines future directions for research,\u0000including the pursuit of a comprehensive repair goal for LoD2 models to achieve\u0000watertight 2-manifold models with correctly oriented normals. 
Our source code\u0000is available at\u0000https://github.com/tudelft3d/Automatic-Repair-of-LoD2-Building-Models.git","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140802618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tao Liu, Tianyu Zhang, Yongxue Chen, Yuming Huang, Charlie C. L. Wang
We introduce a novel neural network-based computational pipeline as a representation-agnostic slicer for multi-axis 3D printing. This advanced slicer can work on models with diverse representations and intricate topology. The approach involves employing neural networks to establish a deformation mapping, defining a scalar field in the space surrounding an input model. Isosurfaces are subsequently extracted from this field to generate curved layers for 3D printing. Creating a differentiable pipeline enables us to optimize the mapping through loss functions directly defined on the field gradients as the local printing directions. New loss functions have been introduced to meet the manufacturing objectives of support-free and strength reinforcement. Our new computation pipeline relies less on the initial values of the field and can generate slicing results with significantly improved performance.
{"title":"Neural Slicer for Multi-Axis 3D Printing","authors":"Tao Liu, Tianyu Zhang, Yongxue Chen, Yuming Huang, Charlie C. L. Wang","doi":"arxiv-2404.15061","DOIUrl":"https://doi.org/arxiv-2404.15061","url":null,"abstract":"We introduce a novel neural network-based computational pipeline as a\u0000representation-agnostic slicer for multi-axis 3D printing. This advanced slicer\u0000can work on models with diverse representations and intricate topology. The\u0000approach involves employing neural networks to establish a deformation mapping,\u0000defining a scalar field in the space surrounding an input model. Isosurfaces\u0000are subsequently extracted from this field to generate curved layers for 3D\u0000printing. Creating a differentiable pipeline enables us to optimize the mapping\u0000through loss functions directly defined on the field gradients as the local\u0000printing directions. New loss functions have been introduced to meet the\u0000manufacturing objectives of support-free and strength reinforcement. Our new\u0000computation pipeline relies less on the initial values of the field and can\u0000generate slicing results with significantly improved performance.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140802786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Akanksha Agrawal, Sergio Cabello, Michael Kaufmann, Saket Saurabh, Roohani Sharma, Yushi Uno, Alexander Wolff
Drawing a graph in the plane with as few crossings as possible is one of the central problems in graph drawing and computational geometry. Another option is to remove the smallest number of vertices or edges such that the remaining graph can be drawn without crossings. We study both problems in a book-embedding setting for ordered graphs, that is, graphs with a fixed vertex order. In this setting, the vertices lie on a straight line, called the spine, in the given order, and each edge must be drawn on one of several pages of a book such that every edge has at most a fixed number of crossings. In book embeddings, there is another way to reduce or avoid crossings, namely by using more pages. The minimum number of pages needed to draw an ordered graph without any crossings is its (fixed-vertex-order) page number. We show that the page number of an ordered graph with $n$ vertices and $m$ edges can be computed in $2^m \cdot n^{O(1)}$ time. An $O(\log n)$-approximation of this number can be computed efficiently. We can decide in $2^{O(d \sqrt{k} \log (d+k))} \cdot n^{O(1)}$ time whether it suffices to delete $k$ edges of an ordered graph to obtain a $d$-planar layout (where every edge crosses at most $d$ other edges) on one page. As an additional parameter, we consider the size $h$ of a hitting set, that is, a set of points on the spine such that every edge, seen as an open interval, contains at least one of the points. For $h=1$, we can efficiently compute the minimum number of edges whose deletion yields fixed-vertex-order page number $p$. For $h>1$, we give an XP algorithm with respect to $h+p$. Finally, we consider spine+$t$-track drawings, where some but not all vertices lie on the spine. The vertex order on the spine is given; we must map every vertex that does not lie on the spine to one of $t$ tracks, each of which is a straight line on a separate page, parallel to the spine.
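For intuition: two edges $(a,b)$ and $(c,d)$ of an ordered graph cross on a shared page exactly when their endpoints strictly interleave along the spine. A brute-force computation of the fixed-vertex-order page number for tiny instances, illustrating the definition only (the paper's exponential-in-$m$ algorithm is far more refined than this exhaustive search):

```python
from itertools import product

def cross(e, f):
    # Edges cross on a common page iff their endpoints strictly interleave.
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def page_number(edges):
    """Minimum number of pages so that no two same-page edges cross,
    for a graph whose vertices are integers in their fixed spine order."""
    m = len(edges)
    for p in range(1, m + 1):
        for assign in product(range(p), repeat=m):
            if all(not (assign[i] == assign[j] and cross(edges[i], edges[j]))
                   for i in range(m) for j in range(i + 1, m)):
                return p
    return 0
```

For example, $K_4$ on spine order $0,1,2,3$ has exactly one interleaving pair, $(0,2)$ and $(1,3)$, so its fixed-vertex-order page number is 2.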
{"title":"Eliminating Crossings in Ordered Graphs","authors":"Akanksha Agrawal, Sergio Cabello, Michael Kaufmann, Saket Saurabh, Roohani Sharma, Yushi Uno, Alexander Wolff","doi":"arxiv-2404.09771","DOIUrl":"https://doi.org/arxiv-2404.09771","url":null,"abstract":"Drawing a graph in the plane with as few crossings as possible is one of the\u0000central problems in graph drawing and computational geometry. Another option is\u0000to remove the smallest number of vertices or edges such that the remaining\u0000graph can be drawn without crossings. We study both problems in a\u0000book-embedding setting for ordered graphs, that is, graphs with a fixed vertex\u0000order. In this setting, the vertices lie on a straight line, called the spine,\u0000in the given order, and each edge must be drawn on one of several pages of a\u0000book such that every edge has at most a fixed number of crossings. In book\u0000embeddings, there is another way to reduce or avoid crossings; namely by using\u0000more pages. The minimum number of pages needed to draw an ordered graph without\u0000any crossings is its (fixed-vertex-order) page number. We show that the page number of an ordered graph with $n$ vertices and $m$\u0000edges can be computed in $2^m cdot n^{O(1)}$ time. An $O(log\u0000n)$-approximation of this number can be computed efficiently. We can decide in\u0000$2^{O(d sqrt{k} log (d+k))} cdot n^{O(1)}$ time whether it suffices to\u0000delete $k$ edges of an ordered graph to obtain a $d$-planar layout (where every\u0000edge crosses at most $d$ other edges) on one page. As an additional parameter,\u0000we consider the size $h$ of a hitting set, that is, a set of points on the\u0000spine such that every edge, seen as an open interval, contains at least one of\u0000the points. For $h=1$, we can efficiently compute the minimum number of edges\u0000whose deletion yields fixed-vertex-order page number $p$. For $h>1$, we give an\u0000XP algorithm with respect to $h+p$. 
Finally, we consider spine+$t$-track\u0000drawings, where some but not all vertices lie on the spine. The vertex order on\u0000the spine is given; we must map every vertex that does not lie on the spine to\u0000one of $t$ tracks, each of which is a straight line on a separate page,\u0000parallel to the spine.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"114 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140587030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We show that packing axis-aligned unit squares into a simple polygon $P$ is NP-hard, even when $P$ is an orthogonal and orthogonally convex polygon with half-integer coordinates. It has been known since the early 1980s that packing unit squares into a polygon with holes is NP-hard [Fowler, Paterson, Tanimoto, Inf. Process. Lett., 1981], but the version without holes was conjectured to be polynomial-time solvable more than two decades ago [Baur and Fekete, Algorithmica, 2001]. Our reduction relies on a new way of reducing from \textsc{Planar-3SAT}. Interestingly, our geometric realization of a planar formula is non-planar: vertices become rows and edges become columns, with crossings being allowed. The planarity of the formula ensures that all endpoints of rows and columns are incident to the outer face of the resulting drawing. We can then construct a polygon following the outer face that realizes all the logic of the formula geometrically, without the need for any holes. This new reduction technique proves to be general enough to also show hardness of two natural covering and partitioning problems, even when the input polygon is simple. We say that a polygon $Q$ is \emph{small} if $Q$ is contained in a unit square. We prove that it is NP-hard to find a minimum number of small polygons whose union is $P$ (covering) and to find a minimum number of pairwise interior-disjoint small polygons whose union is $P$ (partitioning), when $P$ is an orthogonal simple polygon with half-integer coordinates. This is the first partitioning problem known to be NP-hard for polygons without holes, with the usual objective of minimizing the number of pieces.
{"title":"Hardness of Packing, Covering and Partitioning Simple Polygons with Unit Squares","authors":"Jack Stade, Mikkel Abrahamsen","doi":"arxiv-2404.09835","DOIUrl":"https://doi.org/arxiv-2404.09835","url":null,"abstract":"We show that packing axis-aligned unit squares into a simple polygon $P$ is\u0000NP-hard, even when $P$ is an orthogonal and orthogonally convex polygon with\u0000half-integer coordinates. It has been known since the early 80s that packing\u0000unit squares into a polygon with holes is NP-hard~[Fowler, Paterson, Tanimoto,\u0000Inf. Process. Lett., 1981], but the version without holes was conjectured to be\u0000polynomial-time solvable more than two decades ago~[Baur and Fekete,\u0000Algorithmica, 2001]. Our reduction relies on a new way of reducing from textsc{Planar-3SAT}.\u0000Interestingly, our geometric realization of a planar formula is non-planar.\u0000Vertices become rows and edges become columns, with crossings being allowed.\u0000The planarity ensures that all endpoints of rows and columns are incident to\u0000the outer face of the resulting drawing. We can then construct a polygon\u0000following the outer face that realizes all the logic of the formula\u0000geometrically, without the need of any holes. This new reduction technique proves to be general enough to also show\u0000hardness of two natural covering and partitioning problems, even when the input\u0000polygon is simple. We say that a polygon $Q$ is emph{small} if $Q$ is\u0000contained in a unit square. We prove that it is NP-hard to find a minimum\u0000number of small polygons whose union is $P$ (covering) and to find a minimum\u0000number of pairwise interior-disjoint small polygons whose union is $P$\u0000(partitioning), when $P$ is an orthogonal simple polygon with half-integer\u0000coordinates. 
This is the first partitioning problem known to be NP-hard for\u0000polygons without holes, with the usual objective of minimizing the number of\u0000pieces.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"53 41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140587031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic methods for reconstructing buildings from airborne LiDAR point clouds focus on producing accurate 3D models in a fast and scalable manner, but they overlook the problem of delivering simple and regularized models to practitioners. As a result, output meshes often suffer from connectivity approximations around corners with either the presence of multiple vertices and tiny facets, or the necessity to break the planarity constraint on roof sections and facade components. We propose a 2D planimetric arrangement-based framework to address this problem. We first regularize, not the 3D planes as commonly done in the literature, but a 2D polyhedral partition constructed from the planes. Second, we extrude this partition to 3D by an optimization process that guarantees the planarity of the roof sections as well as the preservation of the vertical discontinuities and horizontal rooftop edges. We show the benefits of our approach against existing methods by producing simpler 3D models while offering a similar fidelity and efficiency.
{"title":"SimpliCity: Reconstructing Buildings with Simple Regularized 3D Models","authors":"Jean-Philippe Bauchet, Raphael Sulzer, Florent Lafarge, Yuliya Tarabalka","doi":"arxiv-2404.08104","DOIUrl":"https://doi.org/arxiv-2404.08104","url":null,"abstract":"Automatic methods for reconstructing buildings from airborne LiDAR point\u0000clouds focus on producing accurate 3D models in a fast and scalable manner, but\u0000they overlook the problem of delivering simple and regularized models to\u0000practitioners. As a result, output meshes often suffer from connectivity\u0000approximations around corners with either the presence of multiple vertices and\u0000tiny facets, or the necessity to break the planarity constraint on roof\u0000sections and facade components. We propose a 2D planimetric arrangement-based\u0000framework to address this problem. We first regularize, not the 3D planes as\u0000commonly done in the literature, but a 2D polyhedral partition constructed from\u0000the planes. Second, we extrude this partition to 3D by an optimization process\u0000that guarantees the planarity of the roof sections as well as the preservation\u0000of the vertical discontinuities and horizontal rooftop edges. We show the\u0000benefits of our approach against existing methods by producing simpler 3D\u0000models while offering a similar fidelity and efficiency.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140587076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prosenjit Bose, Guillermo Esteban, David Orden, Rodrigo I. Silveira
Continuous 2-dimensional space is often discretized by considering a mesh of weighted cells. In this work we study how well a weighted mesh approximates the space with respect to shortest paths. We consider a shortest path $\mathit{SP_w}(s,t)$ from $s$ to $t$ in the continuous 2-dimensional space, a shortest vertex path $\mathit{SVP_w}(s,t)$ (or any-angle path), which is a shortest path whose vertices are vertices of the mesh, and a shortest grid path $\mathit{SGP_w}(s,t)$, which is a shortest path in a graph associated to the weighted mesh. We provide upper and lower bounds on the ratios $\frac{\lVert \mathit{SGP_w}(s,t)\rVert}{\lVert \mathit{SP_w}(s,t)\rVert}$, $\frac{\lVert \mathit{SVP_w}(s,t)\rVert}{\lVert \mathit{SP_w}(s,t)\rVert}$, and $\frac{\lVert \mathit{SGP_w}(s,t)\rVert}{\lVert \mathit{SVP_w}(s,t)\rVert}$ in square and hexagonal meshes, extending previous results for triangular grids. These ratios determine the effectiveness of existing algorithms that compute shortest paths on the graphs obtained from the grids. Our main results are that the ratio $\frac{\lVert \mathit{SGP_w}(s,t)\rVert}{\lVert \mathit{SP_w}(s,t)\rVert}$ is at most $\frac{2}{\sqrt{2+\sqrt{2}}} \approx 1.08$ and $\frac{2}{\sqrt{2+\sqrt{3}}} \approx 1.04$ in a square and a hexagonal mesh, respectively.
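The two stated constants are easy to sanity-check numerically (this verifies only the arithmetic of the bounds, not the proofs):

```python
import math

# Upper bound on the grid-path/continuous-path ratio in a square mesh:
# 2 / sqrt(2 + sqrt(2))
square_ratio = 2 / math.sqrt(2 + math.sqrt(2))

# Upper bound in a hexagonal mesh: 2 / sqrt(2 + sqrt(3))
hex_ratio = 2 / math.sqrt(2 + math.sqrt(3))

print(round(square_ratio, 4), round(hex_ratio, 4))  # 1.0824 1.0353
```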
{"title":"Approximating shortest paths in weighted square and hexagonal meshes","authors":"Prosenjit Bose, Guillermo Esteban, David Orden, Rodrigo I. Silveira","doi":"arxiv-2404.07562","DOIUrl":"https://doi.org/arxiv-2404.07562","url":null,"abstract":"Continuous 2-dimensional space is often discretized by considering a mesh of\u0000weighted cells. In this work we study how well a weighted mesh approximates the\u0000space, with respect to shortest paths. We consider a shortest path $\u0000mathit{SP_w}(s,t) $ from $ s $ to $ t $ in the continuous 2-dimensional space,\u0000a shortest vertex path $ mathit{SVP_w}(s,t) $ (or any-angle path), which is a\u0000shortest path where the vertices of the path are vertices of the mesh, and a\u0000shortest grid path $ mathit{SGP_w}(s,t) $, which is a shortest path in a graph\u0000associated to the weighted mesh. We provide upper and lower bounds on the\u0000ratios $ frac{lVert mathit{SGP_w}(s,t)rVert}{lVert\u0000mathit{SP_w}(s,t)rVert} $, $ frac{lVert mathit{SVP_w}(s,t)rVert}{lVert\u0000mathit{SP_w}(s,t)rVert} $, $ frac{lVert mathit{SGP_w}(s,t)rVert}{lVert\u0000mathit{SVP_w}(s,t)rVert} $ in square and hexagonal meshes, extending previous\u0000results for triangular grids. These ratios determine the effectiveness of\u0000existing algorithms that compute shortest paths on the graphs obtained from the\u0000grids. 
Our main results are that the ratio $ frac{lVert\u0000mathit{SGP_w}(s,t)rVert}{lVert mathit{SP_w}(s,t)rVert} $ is at most $\u0000frac{2}{sqrt{2+sqrt{2}}} approx 1.08 $ and $ frac{2}{sqrt{2+sqrt{3}}}\u0000approx 1.04 $ in a square and a hexagonal mesh, respectively.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140587281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Given two polygonal curves, there are many ways to define a notion of similarity between them. One popular measure is the Fréchet distance, which has many desirable properties but is notoriously expensive to calculate, especially for non-trivial metrics. In 1994, Eiter and Mannila introduced the discrete Fréchet distance, which is much easier to implement and approximates the continuous Fréchet distance with a quadratic runtime overhead. However, this algorithm relies on recursions and is not well suited for modern hardware. To that end, we introduce the Fast Fréchet Distance algorithm, a recursion-free algorithm that calculates the discrete Fréchet distance with a linear memory overhead and that can utilize modern hardware more effectively. We showcase an implementation with only four lines of code and present benchmarks of our algorithm running fast on modern CPUs and GPGPUs.
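For reference, the discrete Fréchet distance admits a short iterative dynamic program with a single rolling row, i.e. linear memory overhead. The sketch below follows the standard Eiter-Mannila recurrence and is not the paper's four-line implementation:

```python
import math

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polylines P and Q (lists of 2D points).

    Recursion-free DP over the |P| x |Q| coupling table, keeping only one row,
    so the extra memory is O(|Q|) rather than O(|P| * |Q|).
    """
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    row = [0.0] * len(Q)  # row[j] will hold F(i, j) for the current i
    for i, p in enumerate(P):
        prev_diag = row[0]  # F(i-1, 0), needed as the diagonal for j = 1
        row[0] = d(p, Q[0]) if i == 0 else max(row[0], d(p, Q[0]))
        for j in range(1, len(Q)):
            prev_row = row[j]  # F(i-1, j), before being overwritten
            # F(i, j) = max(d(P_i, Q_j), min of the three predecessor entries)
            best = row[j - 1] if i == 0 else min(prev_diag, prev_row, row[j - 1])
            row[j] = max(best, d(p, Q[j]))
            prev_diag = prev_row
    return row[-1]
```

For two parallel unit-length segments at distance 1, e.g. `P = [(0, 0), (1, 0)]` and `Q = [(0, 1), (1, 1)]`, the optimal coupling walks both curves in lockstep and the distance is 1.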
{"title":"Walking Your Frog Fast in 4 LoC","authors":"Nis Meinert","doi":"arxiv-2404.05708","DOIUrl":"https://doi.org/arxiv-2404.05708","url":null,"abstract":"Given two polygonal curves, there are many ways to define a notion of\u0000similarity between them. One popular measure is the Fr'echet distance which\u0000has many desirable properties but is notoriously expensive to calculate,\u0000especially for non-trivial metrics. In 1994, Eiter and Mannila introduced the\u0000discrete Fr'echet distance which is much easier to implement and approximates\u0000the continuous Fr'echet distance with a quadratic runtime overhead. However,\u0000this algorithm relies on recursions and is not well suited for modern hardware.\u0000To that end, we introduce the Fast Fr'echet Distance algorithm, a\u0000recursion-free algorithm that calculates the discrete Fr'echet distance with a\u0000linear memory overhead and that can utilize modern hardware more effectively.\u0000We showcase an implementation with only four lines of code and present\u0000benchmarks of our algorithm running fast on modern CPUs and GPGPUs.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"76 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140587029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pritam Acharya, Sujoy Bhore, Aaryan Gupta, Arindam Khan, Bratin Mondal, Andreas Wiese
We study the geometric knapsack problem, in which we are given a set of $d$-dimensional objects (each with an associated profit) and the goal is to find the maximum-profit subset that can be packed non-overlappingly into a given $d$-dimensional (unit hypercube) knapsack. Even if $d=2$ and all input objects are disks, this problem is known to be NP-hard [Demaine, Fekete, Lang, 2010]. In this paper, we give polynomial-time $(1+\varepsilon)$-approximation algorithms for the following types of input objects in any constant dimension $d$:
- disks and hyperspheres,
- a class of fat convex polygons that generalizes regular $k$-gons for $k\ge 5$ (formally, polygons with a constant number of edges, whose lengths are in a bounded range, and in which each angle is strictly larger than $\pi/2$),
- arbitrary fat convex objects that are sufficiently small compared to the knapsack.
We remark that in our \textsf{PTAS} for disks and hyperspheres, we output the computed set of objects, but for $O_\varepsilon(1)$ of them we determine their coordinates only up to an exponentially small error. However, it is not clear whether there always exists a $(1+\varepsilon)$-approximate solution that uses only rational coordinates for the disks' centers. We leave this as an open problem, which is related to well-studied geometric questions in the realm of circle packing.
{"title":"Approximation Schemes for Geometric Knapsack for Packing Spheres and Fat Objects","authors":"Pritam Acharya, Sujoy Bhore, Aaryan Gupta, Arindam Khan, Bratin Mondal, Andreas Wiese","doi":"arxiv-2404.03981","DOIUrl":"https://doi.org/arxiv-2404.03981","url":null,"abstract":"We study the geometric knapsack problem in which we are given a set of\u0000$d$-dimensional objects (each with associated profits) and the goal is to find\u0000the maximum profit subset that can be packed non-overlappingly into a given\u0000$d$-dimensional (unit hypercube) knapsack. Even if $d=2$ and all input objects\u0000are disks, this problem is known to be NP-hard [Demaine, Fekete, Lang, 2010].\u0000In this paper, we give polynomial-time $(1+varepsilon)$-approximation\u0000algorithms for the following types of input objects in any constant dimension\u0000$d$: - disks and hyperspheres, - a class of fat convex polygons that generalizes regular $k$-gons for $kge\u00005$ (formally, polygons with a constant number of edges, whose lengths are in a\u0000bounded range, and in which each angle is strictly larger than $pi/2$) - arbitrary fat convex objects that are sufficiently small compared to the\u0000knapsack. We remark that in our textsf{PTAS} for disks and hyperspheres, we output the\u0000computed set of objects, but for a $O_varepsilon(1)$ of them we determine\u0000their coordinates only up to an exponentially small error. However, it is not\u0000clear whether there always exists a $(1+varepsilon)$-approximate solution that\u0000uses only rational coordinates for the disks' centers. 
We leave this as an open\u0000problem which is related to well-studied geometric questions in the realm of\u0000circle packing.","PeriodicalId":501570,"journal":{"name":"arXiv - CS - Computational Geometry","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140587025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}