Geometric data sets arising in modern applications are often very large and change dynamically over time. A popular framework for dealing with such data sets is the evolving data framework, where a discrete structure continuously varies over time due to the unseen actions of an evolver, which makes small changes to the data. An algorithm probes the current state through an oracle, and the objective is to maintain a hypothesis of the data set's current state that is close to its actual state at all times. In this paper, we apply this framework to maintaining a set of $n$ point objects in motion in $d$-dimensional Euclidean space. To model the uncertainty in the object locations, both the ground truth and the hypothesis are based on spatial probability distributions, and the distance between them is measured by the Kullback-Leibler divergence (relative entropy). We introduce a simple and intuitive motion model where, in each time step, the distance that any object can move is a fraction of the distance to its nearest neighbor. We present an algorithm that, in steady state, guarantees a distance of $O(n)$ between the true and hypothesized placements. We also show that for any algorithm in this model, there is an evolver that can generate a distance of $\Omega(n)$, implying that our algorithm is asymptotically optimal.
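As one hypothetical instantiation of the distance measure: if each object's true and hypothesized locations are modeled as isotropic Gaussians with a common variance, the KL divergence between placements decomposes into a sum of scaled squared distances. A minimal sketch under that assumption (not the paper's algorithm; the Gaussian model and `sigma` are illustrative):

```python
import math

def kl_gauss(mu_p, mu_q, sigma=1.0):
    """KL(N(mu_p, sigma^2 I) || N(mu_q, sigma^2 I)) in d dimensions
    reduces to ||mu_p - mu_q||^2 / (2 sigma^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(mu_p, mu_q))
    return sq / (2 * sigma ** 2)

def placement_divergence(truth, hypothesis, sigma=1.0):
    # Independent objects: the total divergence is the sum of
    # per-object KL terms.
    return sum(kl_gauss(p, q, sigma) for p, q in zip(truth, hypothesis))

truth      = [(0.0, 0.0), (4.0, 0.0)]
hypothesis = [(0.0, 1.0), (4.0, 0.0)]
print(placement_divergence(truth, hypothesis))  # 0.5
```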
"Evolving Distributions Under Local Motion," Aditya Acharya and David M. Mount, arXiv:2409.11779, 2024-09-18.
Hugo A. Akitaya, Ahmad Biniaz, Erik D. Demaine, Linda Kleist, Frederick Stock, Csaba D. Tóth
For a set of red and blue points in the plane, a minimum bichromatic spanning tree (MinBST) is a shortest spanning tree of the points such that every edge has a red and a blue endpoint. A MinBST can be computed in $O(n \log n)$ time where $n$ is the number of points. In contrast to the standard Euclidean MST, which is always plane (noncrossing), a MinBST may have edges that cross each other. However, we prove that a MinBST is quasi-plane, that is, it does not contain three pairwise crossing edges, and we determine the maximum number of crossings. Moreover, we study the problem of finding a minimum plane bichromatic spanning tree (MinPBST) which is a shortest bichromatic spanning tree with pairwise noncrossing edges. This problem is known to be NP-hard. The previous best approximation algorithm, due to Borgelt et al. (2009), has a ratio of $O(\sqrt{n})$. It is also known that the optimum solution can be computed in polynomial time in some special cases, for instance, when the points are in convex position, collinear, semi-collinear, or when one color class has constant size. We present an $O(\log n)$-factor approximation algorithm for the general case.
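Since a MinBST is the minimum spanning tree of the complete bipartite red-blue graph, a brute-force version is easy to sketch: Kruskal's algorithm restricted to bichromatic edges. This is an $O(n^2 \log n)$ illustration, not the $O(n \log n)$ algorithm the abstract refers to:

```python
import itertools
import math

def min_bst(red, blue):
    """Minimum bichromatic spanning tree via Kruskal's algorithm run on
    red-blue edges only (brute force over all bichromatic pairs)."""
    pts = [(p, "R") for p in red] + [(p, "B") for p in blue]
    parent = list(range(len(pts)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = sorted(
        (math.dist(pts[i][0], pts[j][0]), i, j)
        for i, j in itertools.combinations(range(len(pts)), 2)
        if pts[i][1] != pts[j][1]  # keep only bichromatic edges
    )
    tree, total = [], 0.0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
            total += w
    return tree, total
```

Note that the edges produced this way may cross; the MinPBST problem of forbidding crossings is exactly what makes the planar variant NP-hard.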
"Minimum Plane Bichromatic Spanning Trees," arXiv:2409.11614, 2024-09-18.
The hitting set problem is one of the fundamental problems in combinatorial optimization and is well studied in the offline setup. We consider the online hitting set problem, where only the set of points is known in advance and objects are introduced one by one. Our objective is to maintain a minimum-sized hitting set by making irrevocable decisions. We study two variants of the online hitting set problem, depending on the point set. In the first variant, the point set is the entire $\mathbb{Z}^d$, while in the second, it is a finite subset of $\mathbb{R}^2$. For hitting similarly sized $\alpha$-fat objects in $\mathbb{R}^d$ with diameters in the range $[1, M]$ using points in $\mathbb{Z}^d$, we propose a deterministic algorithm with a competitive ratio of at most $\lfloor\frac{2}{\alpha}+2\rfloor^d \left(\lfloor\log_{2}M\rfloor+1\right)$. This improves the current best-known upper bound due to Alefkhani et al. [WAOA'23]. Then, for homothetic hypercubes in $\mathbb{R}^d$ with side lengths in the range $[1, M]$ using points in $\mathbb{Z}^d$, we propose a randomized algorithm with a competitive ratio of $O(d^2\log M)$. To complement this result, we show that no randomized algorithm can have a competitive ratio better than $\Omega(d\log M)$. This improves the current best-known (deterministic) upper and lower bounds of $25^d\log M$ and $\Omega(\log M)$, respectively, due to Alefkhani et al. [WAOA'23]. Next, we consider the hitting set problem when the point set consists of $n$ points in $\mathbb{R}^2$ and the objects are homothetic regular $k$-gons with diameters in the range $[1, M]$. We present an $O(\log n\log M)$-competitive randomized algorithm. In particular, for a fixed $M$, this result partially answers an open question for squares proposed by Khan et al. [SoCG'23] and Alefkhani et al. [WAOA'23].
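The online model itself is easy to illustrate in one dimension: intervals of length at least 1 arrive one by one and must be hit by integer points, with irrevocable choices. The naive greedy below only captures the setting; the paper's algorithms choose points hierarchically to bound the competitive ratio:

```python
import math

def online_hit_intervals(intervals):
    """Greedy online hitting set: when an interval [l, r] (with r - l >= 1)
    arrives and no previously chosen point hits it, irrevocably add the
    integer ceil(l), which lies inside the interval."""
    chosen = []
    for l, r in intervals:
        if not any(l <= p <= r for p in chosen):
            chosen.append(math.ceil(l))  # any integer in [l, r] would do
        # decisions are irrevocable: points are never removed
    return chosen

print(online_hit_intervals([(0.5, 2.5), (1.5, 3.5), (5.0, 6.5)]))  # [1, 2, 5]
```

An adversary can force this greedy to do much worse than the offline optimum, which is exactly the gap that competitive analysis measures.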
"New Lower Bound and Algorithms for Online Geometric Hitting Set Problem," Minati De, Ratnadip Mandal, and Satyam Singh, arXiv:2409.11166, 2024-09-17.
Weiran Lyu, Raghavendra Sridharamurthy, Jeff M. Phillips, Bei Wang
Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields -- such as persistence diagrams and merge trees -- because they provide succinct and robust abstract representations. Several similarity measures for topological descriptors appear both asymptotically and practically efficient, with polynomial-time algorithms, yet they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as the merge tree edit distance or the geometric interleaving distance. Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization.
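The LSH idea underneath can be sketched with classical MinHash on plain sets (the paper extends this with Recursive MinHash and subpath signatures for merge trees, which this toy version does not attempt):

```python
import random

def minhash_signature(items, num_hashes=64, seed=0):
    """MinHash signature: for each of num_hashes salted hash functions,
    keep the minimum hash value over the set's items."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in items) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    # The fraction of agreeing signature slots estimates the Jaccard
    # similarity of the underlying sets.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature({"e1", "e2", "e3", "e4"})
b = minhash_signature({"e1", "e2", "e3", "e5"})
print(estimated_jaccard(a, b))  # close to the true Jaccard 3/5, up to sampling noise
```

Similar sets collide in many slots, so signatures double as hash-bucket keys: near-duplicates land together with high probability, which is what makes all-pairs comparison avoidable.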
"Fast Comparative Analysis of Merge Trees Using Locality Sensitive Hashing," arXiv:2409.08519, 2024-09-13.
Prosenjit Bose, Jean-Lou De Carufel, Guillermo Esteban, Anil Maheshwari
In this article, we present an approximation algorithm for the Weighted Region Problem amidst a set of $n$ non-overlapping weighted disks in the plane. For a given parameter $\varepsilon \in (0,1]$, the length of the approximate path is at most $(1+\varepsilon)$ times the length of an actual shortest path. The algorithm is based on discretizing the space by placing points on the boundaries of the disks. Using this discretization, we apply Dijkstra's algorithm to compute a shortest path in the resulting geometric graph in (pseudo-)polynomial time.
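To make the discretize-then-Dijkstra idea concrete, here is a deliberately crude sketch: it samples points just outside each disk boundary and charges a candidate segment its length times the weight at its midpoint. The offset factor, sample count, and midpoint costing are illustrative simplifications; the paper's careful point placement (and exact weighted-length integration) is what yields the $(1+\varepsilon)$ guarantee.

```python
import heapq
import math

def segment_cost(p, q, disks):
    """Crude segment cost: Euclidean length times the weight at the
    midpoint (weight 1 outside every disk). A faithful implementation
    would integrate the weighted length over each region crossed."""
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    w = 1.0
    for cx, cy, r, weight in disks:
        if math.dist(mid, (cx, cy)) <= r:
            w = weight
    return w * math.dist(p, q)

def approx_shortest_path(s, t, disks, samples=16):
    """Sample points slightly outside each disk boundary (offset 1.05*r so
    that detour chords between neighboring samples stay outside the disk),
    then run Dijkstra on the complete graph over {s, t} and all samples."""
    nodes = [s, t]
    for cx, cy, r, _ in disks:
        R = 1.05 * r
        nodes += [(cx + R * math.cos(2 * math.pi * k / samples),
                   cy + R * math.sin(2 * math.pi * k / samples))
                  for k in range(samples)]
    dist = {0: 0.0}
    heap = [(0.0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == 1:                       # reached t
            return d
        if d > dist.get(u, math.inf):
            continue
        for v in range(len(nodes)):
            if v == u:
                continue
            nd = d + segment_cost(nodes[u], nodes[v], disks)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf
```

With no disks this returns the straight-line distance; with a heavy disk in the way, Dijkstra routes through boundary samples around it.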
"Computing shortest paths amid non-overlapping weighted disks," arXiv:2409.08869, 2024-09-13.
The Morse-Smale complex is a standard tool in visual data analysis. The classic definition is based on a continuous view of the gradient of a scalar function where its zeros are the critical points. These points are connected via gradient curves and surfaces emanating from saddle points, known as separatrices. In a discrete setting, the Morse-Smale complex is commonly extracted by constructing a combinatorial gradient assuming the steepest descent direction. Previous works have shown that this method results in a geometric embedding of the separatrices that can be fundamentally different from those in the continuous case. To achieve a similar embedding, different approaches for constructing a combinatorial gradient were proposed. In this paper, we show that these approaches generate a different topology, i.e., the connectivity between critical points changes. Additionally, we demonstrate that the steepest descent method can compute topologically and geometrically accurate Morse-Smale complexes when applied to certain types of grids. Based on these observations, we suggest a method to attain both geometric and topological accuracy for the Morse-Smale complex of data sampled on a uniform grid.
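A toy version of steepest descent on a uniform grid can be sketched in a few lines: label every cell with the local minimum that its steepest-descent path reaches, partitioning the grid into basins. This illustrates one ingredient of a Morse-Smale-style decomposition, not the full complex with separatrices:

```python
def descend_labels(grid):
    """Label each cell of a 2D scalar grid with the local minimum reached
    by repeatedly stepping to the lowest 8-neighbor (steepest descent).
    The resulting basins partition the domain."""
    rows, cols = len(grid), len(grid[0])

    def lowest_neighbor(r, c):
        best = (r, c)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and grid[rr][cc] < grid[best[0]][best[1]]):
                    best = (rr, cc)
        return best

    labels = {}
    for r in range(rows):
        for c in range(cols):
            cur = (r, c)
            while True:                  # values strictly decrease, so this ends
                nxt = lowest_neighbor(*cur)
                if nxt == cur:
                    break
                cur = nxt
            labels[(r, c)] = cur         # the minimum this cell drains to
    return labels
```

The geometric embedding issue the abstract discusses is visible even here: the discrete descent path zig-zags along grid directions, which need not follow the continuous gradient curve.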
"Revisiting Accurate Geometry for Morse-Smale Complexes," Son Le Thanh, Michael Ankele, and Tino Weinkauf, arXiv:2409.05532, 2024-09-09.
The persistence barcode is a topological descriptor of data that plays a fundamental role in topological data analysis. Given a filtration of the space of data, a persistence barcode tracks the evolution of its homological features. In this paper, we introduce a novel type of barcode, referred to as the canonical barcode of harmonic chains, or harmonic chain barcode for short, which tracks the evolution of harmonic chains. As our main result, we show that the harmonic chain barcode is stable and that it captures both geometric and topological information of data. Moreover, given a filtration of a simplicial complex of size $n$ with $m$ time steps, we can compute its harmonic chain barcode in $O(m^2 n^{\omega} + m n^3)$ time, where $O(n^{\omega})$ is the time to multiply two $n \times n$ matrices. Consequently, a harmonic chain barcode can be utilized in applications in which a persistence barcode is applicable, such as feature vectorization and machine learning. Our work adds to a growing body of literature showing that geometric (not just topological) information can be recovered from a persistence filtration.
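For background, the ordinary persistence barcode that the harmonic chain barcode parallels is computed by the standard column-reduction algorithm over $\mathbb{Z}/2$. A minimal sketch of that classical reduction (not the harmonic-chain computation from the paper):

```python
def persistence_pairs(boundary):
    """Standard persistence reduction over Z/2. boundary[j] is the set of
    indices of the facets of simplex j (empty for vertices); simplices are
    listed in filtration order. Returns (birth, death) index pairs; any
    simplex never paired corresponds to an infinite bar."""
    reduced = []
    low_of = {}                    # lowest row index -> column holding it
    pairs = []
    for j, col in enumerate(boundary):
        col = set(col)
        # Add earlier reduced columns until the lowest index is unique.
        while col and max(col) in low_of:
            col ^= reduced[low_of[max(col)]]   # symmetric difference = mod-2 sum
        reduced.append(col)
        if col:
            low_of[max(col)] = j
            pairs.append((max(col), j))        # simplex max(col) dies at j
    return pairs

# Filtration of a hollow triangle: vertices 0-2, edges 3-5, then the 2-cell 6.
triangle = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}, {3, 4, 5}]
print(persistence_pairs(triangle))  # [(1, 3), (2, 4), (5, 6)]
```

In the example, edge 5 creates the 1-cycle and triangle 6 fills it, giving the bar (5, 6); vertex 0's component is never killed, so it yields the infinite bar.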
"Harmonic Chain Barcode and Stability," Salman Parsa and Bei Wang, arXiv:2409.06093, 2024-09-09.
Alejandro García-Castellanos, Aniss Aiman Medbouhi, Giovanni Luca Marchetti, Erik J. Bekkers, Danica Kragic
We propose HyperSteiner -- an efficient heuristic algorithm for computing Steiner minimal trees in hyperbolic space. HyperSteiner extends the Euclidean Smith-Lee-Liebman algorithm, which is grounded in a divide-and-conquer approach involving the Delaunay triangulation. The central idea is rephrasing Steiner tree problems with three terminals as a system of equations in the Klein-Beltrami model. Motivated by the fact that hyperbolic geometry is well suited for representing hierarchies, we explore applications to hierarchy discovery in data. Results show that HyperSteiner infers more realistic hierarchies than the Minimum Spanning Tree and is more scalable to large datasets than Neighbor Joining.
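The three-terminal subproblem has a familiar Euclidean analogue: the Fermat (Steiner) point minimizing the total distance to three terminals, computable by Weiszfeld iteration. The sketch below is Euclidean only, whereas HyperSteiner solves the corresponding problem in the Klein-Beltrami model of hyperbolic space:

```python
import math

def fermat_point(a, b, c, iters=200):
    """Weiszfeld iteration for the Euclidean Fermat point of three
    terminals: the point minimizing the sum of distances to a, b, c
    (valid when no triangle angle reaches 120 degrees)."""
    x = (a[0] + b[0] + c[0]) / 3          # start at the centroid
    y = (a[1] + b[1] + c[1]) / 3
    for _ in range(iters):
        wx = wy = w = 0.0
        for px, py in (a, b, c):
            d = math.hypot(x - px, y - py) or 1e-12  # guard zero distance
            wx += px / d
            wy += py / d
            w += 1 / d
        x, y = wx / w, wy / w             # distance-weighted average
    return x, y
```

For an equilateral triangle the Fermat point coincides with the centroid, which makes a convenient sanity check.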
"HyperSteiner: Computing Heuristic Hyperbolic Steiner Minimal Trees," arXiv:2409.05671, 2024-09-09.
Raunak Sarbajna, Karima Elgarroussi, Hoang D Vo, Jianyuan Ni, Christoph F. Eick
In response to the ongoing pandemic and health emergency of COVID-19, several models have been used to understand the dynamics of virus spread. Some employ mathematical models like the compartmental SEIHRD approach, and others rely on agent-based modeling (ABM). In this paper, a new city-based agent-based modeling approach called COVID19-CBABM is introduced. It considers not only the transmission mechanism simulated by the SEIHRD compartments but also models people's movements and their interactions with their surroundings, particularly their interactions at different types of Points of Interest (POI), such as supermarkets. Through the development of knowledge extraction procedures for SafeGraph data, our approach simulates realistic conditions based on spatial patterns and infection conditions, considering locations where people spend their time in a given city. Our model was implemented in Python using the Mesa-Geo framework. COVID19-CBABM is portable and can easily be extended with more complicated scenarios. It is therefore a useful tool to assist governments and health authorities in efficiently evaluating strategic decisions and actions against this epidemic, using the unique mobility patterns of each city.
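As a hedged illustration of the compartmental side only (the agent movement and POI interactions are the paper's actual contribution, and these rate constants are made up, not calibrated), a toy discrete-time SEIHRD update might look like:

```python
def seihrd_step(state, beta=0.3, sigma=0.2, eta=0.05, gamma=0.1, mu=0.01):
    """One discrete-time step of a toy SEIHRD compartmental model
    (Susceptible, Exposed, Infectious, Hospitalized, Recovered, Deceased).
    All rate constants are illustrative placeholders."""
    S, E, I, H, R, D = state
    N = S + E + I + H + R            # living population that mixes
    new_e = beta * S * I / N         # S -> E: new exposures
    new_i = sigma * E                # E -> I: incubation ends
    new_h = eta * I                  # I -> H: hospitalization
    rec_i = gamma * I                # I -> R: recovery at home
    rec_h = gamma * H                # H -> R: recovery in hospital
    dead = mu * H                    # H -> D: death
    return (S - new_e,
            E + new_e - new_i,
            I + new_i - new_h - rec_i,
            H + new_h - rec_h - dead,
            R + rec_i + rec_h,
            D + dead)
```

Because every flow leaves one compartment and enters another, the total population is conserved at each step; an agent-based model like COVID19-CBABM replaces the well-mixed term `beta * S * I / N` with contact events at specific POIs.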
"COVID19-CBABM: A City-Based Agent Based Disease Spread Modeling Framework," arXiv:2409.05235, 2024-09-08.
We give an algorithm to morph planar graph drawings that achieves small grid size at the expense of allowing a constant number of bends on each edge. The input is an $n$-vertex planar graph and two planar straight-line drawings of the graph on an $O(n) \times O(n)$ grid. The planarity-preserving morph is composed of $O(n)$ linear morphs between successive pairs of drawings, each on an $O(n) \times O(n)$ grid with a constant number of bends per edge. The algorithm to compute the morph runs in $O(n^2)$ time on a word RAM model with standard arithmetic operations -- in particular no square roots or cube roots are required. The first step of the algorithm is to morph each input drawing to a planar orthogonal box drawing where vertices are represented by boxes and each edge is drawn as a horizontal or vertical segment. The second step is to morph between planar orthogonal box drawings. This is done by extending known techniques for morphing planar orthogonal drawings with point vertices.
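Each of the $O(n)$ linear morphs moves every vertex along a straight segment between consecutive drawings. A minimal sketch of one such step, ignoring bends and the planarity bookkeeping that the algorithm handles separately:

```python
def linear_morph(drawing_a, drawing_b, t):
    """Linear morph between two straight-line drawings of the same graph:
    each vertex moves along the segment from its position in drawing_a to
    its position in drawing_b, parameterized by t in [0, 1]."""
    return {v: ((1 - t) * drawing_a[v][0] + t * drawing_b[v][0],
                (1 - t) * drawing_a[v][1] + t * drawing_b[v][1])
            for v in drawing_a}
```

A single linear morph between arbitrary planar drawings can introduce crossings mid-way, which is why the algorithm chains $O(n)$ of them through carefully chosen intermediate drawings.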
"Morphing Planar Graph Drawings via Orthogonal Box Drawings," Therese Biedl, Anna Lubiw, and Jack Spalding-Jamieson, arXiv:2409.04074, 2024-09-06.