Oswin Aichholzer, Julia Obmann, Pavel Paták, Daniel Perz, Josef Tkadlec, Birgit Vogtenhuber
Two plane drawings of graphs on the same set of points are called disjoint compatible if their union is plane and they do not have an edge in common. Let $S$ be a convex point set of $2n \geq 10$ points and let $\mathcal{H}$ be a family of plane drawings on $S$. Two plane perfect matchings $M_1$ and $M_2$ on $S$ (which need to be neither disjoint nor compatible) are \emph{disjoint $\mathcal{H}$-compatible} if there exists a drawing in $\mathcal{H}$ which is disjoint compatible with both $M_1$ and $M_2$. In this work, we consider the graph which has all plane perfect matchings as vertices and in which two vertices are connected by an edge if the matchings are disjoint $\mathcal{H}$-compatible. We study the diameter of this graph when $\mathcal{H}$ is the family of all plane spanning trees, of all caterpillars, or of all paths. We show that in the first two cases the graph is connected, with constant and linear diameter respectively, while in the third case it is disconnected.
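For intuition, disjoint compatibility on a convex point set reduces to a purely combinatorial test: two chords of a convex polygon cross iff exactly one endpoint of one lies cyclically between the endpoints of the other. A minimal sketch of that test (function names are ours, not from the paper):

```python
def crosses(e, f):
    # Chords (a, b) and (c, d) of a convex polygon, with vertices
    # labeled 0..m-1 in cyclic order, cross iff exactly one of c, d
    # lies strictly between a and b.
    (a, b), (c, d) = sorted(e), sorted(f)
    if len({a, b, c, d}) < 4:
        return False  # shared endpoint: the chords touch but do not cross
    return (a < c < b) != (a < d < b)

def disjoint_compatible(m1, m2):
    """True iff matchings m1, m2 on a convex point set share no edge
    and their union is crossing-free (plane)."""
    e1 = {tuple(sorted(e)) for e in m1}
    e2 = {tuple(sorted(e)) for e in m2}
    if e1 & e2:
        return False  # common edge
    union = sorted(e1 | e2)
    return not any(crosses(e, f) for e in union for f in union if e < f)
```

For example, on a convex quadrilateral the two perfect matchings `{(0,1),(2,3)}` and `{(1,2),(0,3)}` are disjoint compatible (their union is the plane 4-cycle), while the pair `{(0,1),(2,3)}` and `{(0,2),(1,3)}` is not, because the diagonals cross.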
"Disjoint Compatibility via Graph Classes" (arXiv:2409.03579, published 2024-09-05, arXiv - CS - Computational Geometry).
We study some variants of the $k$-\textsc{Watchman Routes} problem, the cooperative version of the classic \textsc{Watchman Routes} problem in a simple polygon. The watchmen may be required to see the whole polygon, or some pre-determined quota of area within the polygon, and we want to minimize the maximum length traveled by any watchman. While the single-watchman version of the problem has received much attention and is rather well understood, this is not the case for the multiple-watchmen version. We provide the first tight approximability results for the anchored $k$-\textsc{Watchman Routes} problem in a simple polygon, assuming $k$ is fixed, by a fully polynomial-time approximation scheme. The basis for the FPTAS is provided by an exact dynamic-programming algorithm. If $k$ is a variable, we give constant-factor approximations.
"Approximation Algorithms for Anchored Multiwatchman Routes" by Joseph S. B. Mitchell and Linh Nguyen (arXiv:2408.17343, published 2024-08-30, arXiv - CS - Computational Geometry).
Volume calculation of configurational spaces is a vital part of configurational entropy calculation, which in turn contributes to computing the free energy landscape of molecular systems. In this article, we present a sampling-based volume computation method using distance-based Cayley coordinates that mitigates drawbacks of existing approaches: our method guarantees that the sampling procedure stays in a lower-dimensional coordinate space (instead of the higher-dimensional Cartesian space) throughout the whole process, and our mapping function, utilizing Cayley parameterization, can be applied in both directions at low computational cost. The method uniformly samples and computes a discrete volume measure of a Cartesian configuration space of point sets satisfying systems of distance inequality constraints. These systems belong to a large natural class whose feasible configuration spaces are effectively lower-dimensional subsets of a high-dimensional ambient space. Their topological complexity makes discrete volume computation challenging, yet necessary in several application scenarios, including free energy calculation in soft matter assembly modeling. The algorithm runs in linear time and empirically sub-linear space in the number of grid hypercubes (used to define the discrete volume measure) \textit{that intersect} the configuration space. In other words, the number of wasted grid-cube visits is insignificant compared to prevailing methods, which are typically based on gradient descent. Specifically, the traversal stays within the feasible configuration space by viewing it as a branched covering, using a recent theory of Cayley or distance coordinates to convexify the base space, and by employing a space-efficient frontier-hypercube traversal data structure. A software implementation and a comparison with existing methods are provided.
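A toy instance illustrates the idea of a grid-based discrete volume in Cayley (distance) coordinates, though it uses none of the paper's machinery: fix a triangle base $|AB| = 1$ and take the two remaining distances $x = |AC|$, $y = |BC|$ as Cayley coordinates, constrained to a box; a coordinate pair is feasible iff the triangle inequality holds, and the discrete volume counts feasible grid cells. All names and parameter values below are ours.

```python
def feasible(x, y, base=1.0):
    # Cayley coordinates: x = |AC|, y = |BC| with |AB| = base fixed.
    # A planar configuration exists iff the triangle inequality holds.
    return abs(x - y) <= base <= x + y

def cayley_volume(lo=0.2, hi=1.8, n=400, base=1.0):
    """Discrete volume measure: area of the grid cells in the Cayley box
    [lo, hi]^2 whose centers satisfy the distance constraints."""
    h = (hi - lo) / n
    count = sum(
        feasible(lo + (i + 0.5) * h, lo + (j + 0.5) * h, base)
        for i in range(n)
        for j in range(n)
    )
    return count * h * h
```

For this box the infeasible part is three corner triangles of total area 0.54, so the discrete volume converges to $1.6^2 - 0.54 = 2.02$ as the grid is refined.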
"Best of two worlds: Cartesian sampling and volume computation for distance-constrained configuration spaces using Cayley coordinates" by Yichi Zhang and Meera Sitharam (arXiv:2408.16946, published 2024-08-29, arXiv - CS - Computational Geometry).
Image triangulation, the practice of decomposing images into triangles, deliberately employs simplification to create an abstracted representation. While triangulating an image is a relatively simple process, difficulties arise when determining which vertices produce recognizable and visually pleasing output images. With the goal of producing art, we discuss an image triangulation algorithm in Python that utilizes Sobel edge detection and point cloud sparsification to determine final vertices for a triangulation, resulting in the creation of artistic triangulated compositions.
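The vertex-selection pipeline described above (Sobel response, threshold, then sparsify) can be sketched in a few lines of pure Python; this is an illustration of the general technique, not the authors' implementation, and the cell-based sparsification rule is our own simplification.

```python
# 3x3 Sobel kernels for horizontal (KX) and vertical (KY) gradients.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale image (list of row lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def edge_vertices(img, thresh, cell=4):
    """Threshold the Sobel response, then sparsify the point cloud by
    keeping only the strongest response in each cell x cell block."""
    best = {}
    for y, row in enumerate(sobel_magnitude(img)):
        for x, m in enumerate(row):
            if m >= thresh:
                key = (y // cell, x // cell)
                if key not in best or m > best[key][0]:
                    best[key] = (m, (x, y))
    return [p for _, p in best.values()]
```

The surviving points would then be handed to a Delaunay triangulation to produce the final composition.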
"Image Triangulation Using the Sobel Operator for Vertex Selection" by Olivia Laske and Lori Ziegelmeier (arXiv:2408.16112, published 2024-08-28, arXiv - CS - Computational Geometry).
In this paper, we study the Minimum Sum of Moving-Distance and Opening-Costs Target Coverage problem (MinMD$+$OCTC). Given a set of targets, a set of base stations on the plane, and an opening cost function for every base station, the opened base stations can emit mobile sensors with a radius of $r$ to cover the targets. The goal of MinMD$+$OCTC is to cover all the targets while minimizing the sum of the opening costs and the moving distances of the mobile sensors. We give an optimal polynomial-time solution for the MinMD$+$OCTC problem with targets on a straight line, and present an 8.928-approximation algorithm for a special case of the MinMD$+$OCTC problem with targets on the plane.
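To make the objective concrete, here is a tiny brute force for the 1D case under a simplifying assumption of ours (each opened station emits one sensor); it is only an illustration of the cost structure, not the paper's polynomial-time algorithm. A sensor assigned a set of targets can cover them iff their spread is at most $2r$, and its moving distance is the distance from its station to the interval of valid sensor centers.

```python
from itertools import product

def min_cost(targets, stations, open_cost, r):
    """Brute force over assignments of targets to stations (1D case).
    Assumes one sensor per opened station; illustration only."""
    best = float('inf')
    for assign in product(range(len(stations)), repeat=len(targets)):
        cost, ok = 0.0, True
        for s in set(assign):
            ts = [t for t, a in zip(targets, assign) if a == s]
            lo, hi = max(ts) - r, min(ts) + r   # valid sensor centers
            if lo > hi:                          # spread exceeds 2r
                ok = False
                break
            p = stations[s]
            cost += open_cost[s] + max(lo - p, 0.0, p - hi)
        if ok:
            best = min(best, cost)
    return best
```

For instance, a single station at 0 with opening cost 5 and $r = 1$ covers targets at 0 and 1 without moving, for total cost 5.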
"Approximation Algorithms for Minimum Sum of Moving-Distance and Opening-Costs Target Coverage Problem" by Lei Zhao and Zhao Zhang (arXiv:2408.13797, published 2024-08-25, arXiv - CS - Computational Geometry).
Guanqun Ma, David Lenz, Tom Peterka, Hanqi Guo, Bei Wang
Advances in high-performance computing require new ways to represent large-scale scientific data to support data storage, data transfers, and data analysis within scientific workflows. Multivariate functional approximation (MFA) has recently emerged as a new continuous meshless representation that approximates raw discrete data with a set of piecewise smooth functions. An MFA model of data thus offers a compact representation and supports high-order evaluation of values and derivatives anywhere in the domain. In this paper, we present CPE-MFA, the first critical point extraction framework designed for MFA models of large-scale, high-dimensional data. CPE-MFA extracts critical points directly from an MFA model without the need for discretization or resampling. This is the first step toward enabling continuous implicit models such as MFA to support topological data analysis at scale.
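The abstract does not describe CPE-MFA's internals, but the core idea of extracting critical points from a continuous functional model without resampling onto a mesh can be sketched in 1D: scan the model's derivative for sign changes and refine each bracket by bisection. This generic sketch is ours, not the paper's method.

```python
def critical_points(f, a, b, n=1000, tol=1e-10):
    """Locate interior critical points of a smooth 1D function f on [a, b]
    by scanning for sign changes of a central-difference derivative and
    refining each bracket by bisection -- no grid of function values is
    ever stored; f is evaluated on demand like an MFA model would be."""
    h = (b - a) / n
    eps = h * 1e-3
    df = lambda x: (f(x + eps) - f(x - eps)) / (2 * eps)
    roots = []
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        d0, d1 = df(x0), df(x1)
        if d0 == 0.0:
            roots.append(x0)
            continue
        if d0 * d1 < 0:
            lo, hi = x0, x1
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if df(lo) * df(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots
```

On $f(x) = x^3 - 3x$ this recovers the two critical points at $x = \pm 1$.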
"Critical Point Extraction from Multivariate Functional Approximation" (arXiv:2408.13193, published 2024-08-23, arXiv - CS - Computational Geometry).
Aaron T. Becker, Sándor P. Fekete, Li Huang, Phillip Keldenich, Linda Kleist, Dominik Krupke, Christian Rieck, Arne Schmidt
We investigate algorithmic approaches for targeted drug delivery in a complex, maze-like environment, such as a vascular system. The basic scenario is given by a large swarm of micro-scale particles (''agents'') and a particular target region (''tumor'') within a system of passageways. Agents are too small to contain on-board power or computation and are instead controlled by a global external force that acts uniformly on all particles, such as an applied fluidic flow or electromagnetic field. The challenge is to deliver all agents to the target region with a minimum number of actuation steps. We provide a number of results for this challenge. We show that the underlying problem is NP-complete, which explains why previous work did not provide provably efficient algorithms. We also develop several algorithmic approaches that greatly improve the worst-case guarantees for the number of required actuation steps. We evaluate our algorithmic approaches by numerous simulations, both for deterministic algorithms and searches supported by deep learning, which show that the performance is practically promising.
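The search space here is a space of particle configurations under uniform control, and for tiny instances the minimum number of actuation steps can be found by breadth-first search over that space. The sketch below uses one-cell steps per actuation and lets particles that meet merge; the paper's force model may differ, and all names are ours.

```python
from collections import deque

DIRS = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}

def min_steps(walls, starts, target):
    """BFS over configurations: one uniform actuation moves every
    particle one cell in the same direction unless a wall blocks it;
    particles that meet merge. Returns the minimum number of steps
    until every particle sits on the target cell, or -1."""
    def step(conf, d):
        dy, dx = DIRS[d]
        return frozenset(
            (y + dy, x + dx) if (y + dy, x + dx) not in walls else (y, x)
            for (y, x) in conf
        )

    start, goal = frozenset(starts), frozenset([target])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        conf, k = queue.popleft()
        if conf == goal:
            return k
        for d in DIRS:
            nxt = step(conf, d)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, k + 1))
    return -1
```

In a 1x3 corridor with particles at both ends, two leftward actuations gather everything at the left cell: the blocked particle waits at the wall while the other catches up, which is exactly how uniform control exploits obstacles.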
"Targeted Drug Delivery: Algorithmic Methods for Collecting a Swarm of Particles with Uniform External Forces" (arXiv:2408.09729, published 2024-08-19, arXiv - CS - Computational Geometry).
We propose conformal polynomial coordinates for 2D closed high-order cages, which consist of polynomial curves of any order. The coordinates enable the transformation of the input polynomial curves into polynomial curves of any order. We extend the classical 2D Green coordinates to define our coordinates, thereby obtaining cage-aware conformal harmonic deformations. We extensively test our method on various 2D deformations, allowing users to manipulate Bézier control points to easily generate the desired deformation.
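For context, the classical 2D Green coordinates that the paper extends (the polygonal-cage case) express an interior point, and its deformation, roughly as follows; this summary is from the Green-coordinates literature, not from the paper itself, and the notation is ours.

```latex
% Cage vertices v_i and edges t_j with outward unit normals n(t_j);
% \phi_i, \psi_j come from the Green's function of the Laplacian:
\eta \;=\; \sum_i \phi_i(\eta)\, v_i \;+\; \sum_j \psi_j(\eta)\, n(t_j),
  \qquad \eta \in \operatorname{int}(\text{cage}).
% Deformed cage (v'_i, t'_j); the per-edge stretch s_j = |t'_j| / |t_j|
% is what makes the resulting harmonic map conformal:
g(\eta) \;=\; \sum_i \phi_i(\eta)\, v'_i \;+\; \sum_j \psi_j(\eta)\, s_j\, n(t'_j).
```

The paper's contribution replaces the straight edges $t_j$ with polynomial curves while retaining this conformal, cage-aware structure.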
"Polynomial 2D Green Coordinates for High-order Cages" by Shibo Liu, Ligang Liu, and Xiao-Ming Fu (arXiv:2408.06831, published 2024-08-13, arXiv - CS - Computational Geometry).
We study the problem of searching for a target at some unknown location in $\mathbb{R}^d$ when additional information regarding the position of the target is available in the form of predictions. In our setting, predictions come as approximate distances to the target: for each point $p \in \mathbb{R}^d$ that the searcher visits, we obtain a value $\lambda(p)$ such that $|p\mathbf{t}| \le \lambda(p) \le c \cdot |p\mathbf{t}|$, where $c \ge 1$ is a fixed constant, $\mathbf{t}$ is the position of the target, and $|p\mathbf{t}|$ is the Euclidean distance of $p$ to $\mathbf{t}$. The cost of the search is the length of the path followed by the searcher. Our main positive result is a strategy that achieves a $(12c)^{d+1}$-competitive ratio, even when the constant $c$ is unknown. We also give a lower bound of roughly $(c/16)^{d-1}$ on the competitive ratio of any search strategy in $\mathbb{R}^d$.
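The prediction model has a simple geometric reading: a probe at $p$ with value $\lambda(p)$ confines the target to the annulus $\lambda(p)/c \le |p\mathbf{t}| \le \lambda(p)$. In 1D the annulus is a pair of intervals, and intersecting them over probes shrinks the candidate region; the sketch below (ours, not the paper's strategy) shows just this localization step.

```python
def candidate_intervals(probes, c):
    """1D localization: each probe (p, lam), with |p - t| <= lam <= c|p - t|,
    confines the target t to [p - lam, p - lam/c] or [p + lam/c, p + lam].
    Intersect these two-interval sets over all probes."""
    cands = [(-float('inf'), float('inf'))]
    for p, lam in probes:
        new = []
        for seg in [(p - lam, p - lam / c), (p + lam / c, p + lam)]:
            for lo, hi in cands:
                a, b = max(lo, seg[0]), min(hi, seg[1])
                if a <= b:
                    new.append((a, b))
        cands = new
    return cands
```

With $c = 2$ and a target at 10, probing at 0 (say $\lambda = 15$) and at 20 (say $\lambda = 12$) already narrows the candidates to a single interval containing the target.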
"Searching in Euclidean Spaces with Predictions" by Sergio Cabello and Panos Giannopoulos (arXiv:2408.04964, published 2024-08-09, arXiv - CS - Computational Geometry).
Reyan Ahmed, Cesim Erten, Stephen Kobourov, Jonah Lotz, Jacob Miller, Hamlet Taraz
The normalized stress metric measures how closely distances between vertices in a graph drawing match the graph-theoretic distances between those vertices. It is one of the most widely employed quality metrics for graph drawing, and is even the optimization goal of several popular graph layout algorithms. However, normalized stress can be misleading when used to compare the outputs of two or more algorithms, as it is sensitive to the size of the drawing compared to the graph-theoretic distances used. Uniformly scaling a layout will change the value of stress despite not meaningfully changing the drawing. In fact, the change in stress values can be so significant that a clearly better layout can appear to have a worse stress score than a random layout. In this paper, we study different variants for calculating stress used in the literature (raw stress, normalized stress, etc.) and show that many of them are affected by this problem, which threatens the validity of experiments that compare the quality of one algorithm to that of another. We then experimentally justify one of the stress calculation variants, scale-normalized stress, as one that fairly compares drawing outputs regardless of their size. We also describe an efficient computation for scale-normalized stress and provide an open source implementation.
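The scale sensitivity, and the fix, fit in a few lines. Minimizing raw stress $\sum (\alpha d_{ij} - \delta_{ij})^2$ over a uniform scale $\alpha$ of the layout has the closed form $\alpha^* = \sum d_{ij}\delta_{ij} / \sum d_{ij}^2$, which makes the resulting value scale-invariant. This sketch follows that closed form; it is an illustration, not the authors' released implementation.

```python
import math

def raw_stress(pos, dist):
    """Sum of squared differences between layout distances and the
    graph-theoretic distances dist[(i, j)]."""
    return sum((math.dist(pos[i], pos[j]) - dist[(i, j)]) ** 2
               for (i, j) in dist)

def scale_normalized_stress(pos, dist):
    """Raw stress after rescaling the layout by the optimal uniform
    factor alpha* = sum(d_ij * delta_ij) / sum(d_ij^2); uniformly
    scaling the input layout leaves this value unchanged."""
    num = sum(math.dist(pos[i], pos[j]) * dist[(i, j)] for (i, j) in dist)
    den = sum(math.dist(pos[i], pos[j]) ** 2 for (i, j) in dist)
    alpha = num / den
    scaled = {v: (alpha * x, alpha * y) for v, (x, y) in pos.items()}
    return raw_stress(scaled, dist)
```

For a path on three vertices drawn at unit spacing but scaled by 10, raw stress is large (486) even though the drawing is perfect, while scale-normalized stress is 0; and any further uniform rescaling leaves the scale-normalized value unchanged.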
"Size Should not Matter: Scale-invariant Stress Metrics" by Reyan Ahmed, Cesim Erten, Stephen Kobourov, Jonah Lotz, Jacob Miller, and Hamlet Taraz (arXiv:2408.04688, published 2024-08-08, arXiv - CS - Computational Geometry).