A Correctness and Incorrectness Program Logic
Pub Date: 2023-03-25. DOI: https://dl.acm.org/doi/10.1145/3582267
Roberto Bruni, Roberto Giacobazzi, Roberta Gori, Francesco Ranzato
Abstract interpretation is a well-known and extensively used method to extract over-approximate program invariants by a sound program analysis algorithm. Soundness means that no program errors are lost and is, in principle, guaranteed by construction. Completeness means that the abstract interpreter reports no false alarms for all possible inputs, but this is extremely rare because it requires a very precise analysis. We introduce a weaker notion of completeness, called local completeness, which requires that no false alarms are produced only relative to some fixed program inputs. Based on this idea, we introduce a program logic, called Local Completeness Logic for an abstract domain A, for proving both the correctness and incorrectness of program specifications. Our proof system, which is parameterized by an abstract domain A, combines over- and under-approximating reasoning. In a provable triple ⊦_A [p] 𝖼 [q], 𝖼 is a program and q is an under-approximation of the strongest post-condition of 𝖼 on input p such that their abstractions in A coincide. This means that q is never too coarse: under some mild assumptions, the abstract interpretation of 𝖼 does not yield false alarms for the input p iff q has no alarm. Therefore, proving ⊦_A [p] 𝖼 [q] not only ensures that all the alarms raised in q are true ones, but also that if q raises no alarms, then 𝖼 is correct. We also prove that if A is the straightforward abstraction making all program properties equivalent, then our program logic coincides with O’Hearn’s incorrectness logic, while for any other abstraction, contrary to the case of incorrectness logic, our logic can also establish program correctness.
{"title":"A Correctness and Incorrectness Program Logic","authors":"Roberto Bruni, Roberto Giacobazzi, Roberta Gori, Francesco Ranzato","doi":"https://dl.acm.org/doi/10.1145/3582267","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3582267","url":null,"abstract":"<p>Abstract interpretation is a well-known and extensively used method to extract over-approximate program invariants by a sound program analysis algorithm. Soundness means that no program errors are lost and it is, in principle, guaranteed by construction. Completeness means that the abstract interpreter reports no false alarms for all possible inputs, but this is extremely rare because it needs a very precise analysis. We introduce a weaker notion of completeness, called <i>local completeness</i>, which requires that no false alarms are produced only relatively to some fixed program inputs. Based on this idea, we introduce a program logic, called Local Completeness Logic for an abstract domain <i>A</i>, for proving both the correctness and incorrectness of program specifications. Our proof system, which is parameterized by an abstract domain <i>A</i>, combines over- and under-approximating reasoning. In a provable triple ⊦<sub><i>A</i></sub> [<i>p</i>] 𝖼 [<i>q</i>], 𝖼 is a program, <i>q</i> is an under-approximation of the strongest post-condition of 𝖼 on input <i>p</i> such that their abstractions in <i>A</i> coincide. This means that <i>q</i> is never too coarse, namely, under some mild assumptions, <i>the abstract interpretation of 𝖼 does not yield false alarms for the input <i>p</i> iff <i>q</i> has no alarm</i>. Therefore, proving ⊦<sub><i>A</i></sub> [<i>p</i>] 𝖼 [<i>q</i>] not only ensures that all the alarms raised in <i>q</i> are true ones, but also that if <i>q</i> does not raise alarms, then 𝖼 is correct. We also prove that if <i>A</i> is the straightforward abstraction making all program properties equivalent, then our program logic coincides with O’Hearn’s incorrectness logic, while for any other abstraction, contrary to the case of incorrectness logic, our logic can also establish program correctness.</p>","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"40 4","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Almost Optimal Exact Distance Oracles for Planar Graphs
Pub Date: 2023-03-25. DOI: https://dl.acm.org/doi/10.1145/3580474
Panagiotis Charalampopoulos, Paweł Gawrychowski, Yaowei Long, Shay Mozes, Seth Pettie, Oren Weimann, Christian Wulff-Nilsen
We consider the problem of preprocessing a weighted directed planar graph in order to quickly answer exact distance queries. The main tension in this problem is between space S and query time Q, and since the mid-1990s all results had polynomial time-space tradeoffs, e.g., Q = Θ̃(n/√S) or Q = Θ̃(n^{5/2}/S^{3/2}).
In this article we show that there is no polynomial tradeoff between time and space and that it is possible to simultaneously achieve almost optimal space n^{1+o(1)} and almost optimal query time n^{o(1)}. More precisely, we achieve the following space-time tradeoffs:
n^{1+o(1)} space and log^{2+o(1)} n query time,
n log^{2+o(1)} n space and n^{o(1)} query time,
n^{4/3+o(1)} space and log^{1+o(1)} n query time.
We reduce a distance query to a variety of point location problems in additively weighted Voronoi diagrams and develop new algorithms for the point location problem itself using several partially persistent dynamic tree data structures.
{"title":"Almost Optimal Exact Distance Oracles for Planar Graphs","authors":"Panagiotis Charalampopoulos, Paweł Gawrychowski, Yaowei Long, Shay Mozes, Seth Pettie, Oren Weimann, Christian Wulff-Nilsen","doi":"https://dl.acm.org/doi/10.1145/3580474","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3580474","url":null,"abstract":"<p>We consider the problem of preprocessing a weighted directed planar graph in order to quickly answer exact distance queries. The main tension in this problem is between <i>space</i> <i>S</i> and <i>query time</i> <i>Q</i>, and since the mid-1990s all results had polynomial time-space tradeoffs, e.g., <i>Q</i> = ~ Θ(<i>n/√ S</i>) or <i>Q</i> = ~Θ(<i>n<sup>5/2</sup>/S<sup>3/2</sup></i>).</p><p>In this article we show that there is no polynomial tradeoff between time and space and that it is possible to <i>simultaneously</i> achieve almost optimal space <i>n</i><sup>1+<i>o</i>(1)</sup> and almost optimal query time <i>n</i><sup><i>o</i>(1)</sup>. More precisely, we achieve the following space-time tradeoffs:\u0000<p><ul><li><p><i>n</i><sup>1+<i>o</i>(1)</sup> space and log<sup>2+<i>o</i>(1)</sup> <i>n</i> query time,</p></li><li><p><i>n</i> log<sup>2+<i>o</i>(1)</sup> <i>n</i> space and <i>n</i><sup><i>o</i>(1)</sup> query time,</p></li><li><p><i>n</i><sup>4/3+<i>o</i>(1)</sup> space and log<sup>1+<i>o</i>(1)</sup> <i>n</i> query time.</p></li></ul></p></p><p>We reduce a distance query to a variety of <i>point location</i> problems in additively weighted <i>Voronoi diagrams</i> and develop new algorithms for the point location problem itself using several partially persistent dynamic tree data structures.</p>","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"23 4","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Separating Rank Logic from Polynomial Time
Moritz Lichter
Pub Date: 2023-03-25. DOI: https://doi.org/10.1145/3572918
In the search for a logic capturing polynomial time, the most promising candidates are Choiceless Polynomial Time (CPT) and rank logic. Rank logic extends fixed-point logic with counting by a rank operator over prime fields. We show that the isomorphism problem for CFI graphs over ℤ_{2^i} cannot be defined in rank logic, even if the base graph is totally ordered. However, CPT can define this isomorphism problem. We thereby separate rank logic from CPT and in particular from polynomial time.
{"title":"Separating Rank Logic from Polynomial Time","authors":"Moritz Lichter","doi":"10.1145/3572918","DOIUrl":"https://doi.org/10.1145/3572918","url":null,"abstract":"In the search for a logic capturing polynomial time the most promising candidates are Choiceless Polynomial Time (CPT) and rank logic. Rank logic extends fixed-point logic with counting by a rank operator over prime fields. We show that the isomorphism problem for CFI graphs over ℤ 2 i cannot be defined in rank logic, even if the base graph is totally ordered. However, CPT can define this isomorphism problem. We thereby separate rank logic from CPT and in particular from polynomial time.","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135996376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stochastic Games with Synchronization Objectives
L. Doyen
Pub Date: 2023-03-22. DOI: https://doi.org/10.1145/3588866
We consider two-player stochastic games played on a finite graph for infinitely many rounds. Stochastic games generalize both Markov decision processes (MDPs), by adding an adversary player, and two-player deterministic games, by adding stochasticity. The outcome of the game is a sequence of distributions over the graph states, representing the evolution of a population consisting of a continuum of identical copies of a process modeled by the game graph. We consider synchronization objectives, which require the probability mass to accumulate in a set of target states, either always, once, infinitely often, or always after some point in the outcome sequence; and the winning modes of sure winning (if the accumulated probability is equal to 1) and almost-sure winning (if the accumulated probability is arbitrarily close to 1). We present algorithms to compute the set of winning distributions for each of these synchronization modes, showing that the corresponding decision problem is PSPACE-complete for synchronizing once and infinitely often, and PTIME-complete for synchronizing always and always after some point. These bounds are remarkably in line with the special case of MDPs, while the algorithmic solution and proof technique are considerably more involved, even for deterministic games. This is because those games have a flavor of imperfect information; in particular, they are not determined, and randomized strategies need to be considered even if there is no stochastic choice in the game graph. Moreover, in combination with stochasticity in the game graph, finite-memory strategies are not sufficient in general.
{"title":"Stochastic Games with Synchronization Objectives","authors":"L. Doyen","doi":"10.1145/3588866","DOIUrl":"https://doi.org/10.1145/3588866","url":null,"abstract":"We consider two-player stochastic games played on a finite graph for infinitely many rounds. Stochastic games generalize both Markov decision processes (MDP) by adding an adversary player, and two-player deterministic games by adding stochasticity. The outcome of the game is a sequence of distributions over the graph states, representing the evolution of a population consisting of a continuum number of identical copies of a process modeled by the game graph. We consider synchronization objectives, which require the probability mass to accumulate in a set of target states, either always, once, infinitely often, or always after some point in the outcome sequence; and the winning modes of sure winning (if the accumulated probability is equal to 1) and almost-sure winning (if the accumulated probability is arbitrarily close to 1). We present algorithms to compute the set of winning distributions for each of these synchronization modes, showing that the corresponding decision problem is PSPACE-complete for synchronizing once and infinitely often and PTIME-complete for synchronizing always and always after some point. These bounds are remarkably in line with the special case of MDPs, while the algorithmic solution and proof technique are considerably more involved, even for deterministic games. This is because those games have a flavor of imperfect information, in particular they are not determined and randomized strategies need to be considered, even if there is no stochastic choice in the game graph. Moreover, in combination with stochasticity in the game graph, finite-memory strategies are not sufficient in general.","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"172 1","pages":"1 - 35"},"PeriodicalIF":2.5,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76679604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Universal Law of Robustness via Isoperimetry
Pub Date: 2023-03-21. DOI: https://dl.acm.org/doi/10.1145/3578580
Sébastien Bubeck, Mark Sellke
Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a partial theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is necessary if one wants to interpolate the data smoothly. Namely, we show that smooth interpolation requires d times more parameters than mere interpolation, where d is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial-size weights, and any covariate distribution verifying isoperimetry (or a mixture thereof). In the case of two-layer neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li, and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.
{"title":"A Universal Law of Robustness via Isoperimetry","authors":"Sébastien Bubeck, Mark Sellke","doi":"https://dl.acm.org/doi/10.1145/3578580","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3578580","url":null,"abstract":"<p>Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a partial theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is <i>necessary</i> if one wants to interpolate the data <i>smoothly</i>. Namely we show that <i>smooth</i> interpolation requires <i>d</i> times more parameters than mere interpolation, where <i>d</i> is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial size weights, and any covariate distribution verifying isoperimetry (or a mixture thereof). In the case of two-layer neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li, and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.</p>","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"405 2","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Universal Almost Optimal Compression and Slepian–Wolf Coding in Probabilistic Polynomial Time
Pub Date: 2023-03-21. DOI: https://dl.acm.org/doi/10.1145/3575807
Bruno Bauwens, Marius Zimand
In a lossless compression system with target lengths, a compressor 𝒞 maps an integer m and a binary string x to an m-bit code p, and if m is sufficiently large, a decompressor 𝒟 reconstructs x from p. We call a pair (m,x) achievable for (𝒞,𝒟) if this reconstruction is successful. We introduce the notion of an optimal compressor 𝒞_opt by the following universality property: for any compressor-decompressor pair (𝒞,𝒟), there exists a decompressor 𝒟′ such that if (m,x) is achievable for (𝒞,𝒟), then (m + Δ, x) is achievable for (𝒞_opt, 𝒟′), where Δ is some small value called the overhead. We show that there exists an optimal compressor that has only polylogarithmic overhead and works in probabilistic polynomial time. Put differently, for any pair (𝒞,𝒟), no matter how slow 𝒞 is, or even if 𝒞 is non-computable, 𝒞_opt is a fixed compressor that in polynomial time produces codes almost as short as those of 𝒞. The cost is that the corresponding decompressor is slower.
We also show that each such optimal compressor can be used for distributed compression, in which case it can achieve optimal compression rates as given in the Slepian–Wolf theorem and even for the Kolmogorov complexity variant of this theorem.
{"title":"Universal almost Optimal Compression and Slepian-wolf Coding in Probabilistic Polynomial Time","authors":"Bruno Bauwens*, Marius Zimand","doi":"https://dl.acm.org/doi/10.1145/3575807","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3575807","url":null,"abstract":"<p>In a lossless compression system with target lengths, a compressor 𝒞 maps an integer <i>m</i> and a binary string <i>x</i> to an <i>m</i>-bit code <i>p</i>, and if <i>m</i> is sufficiently large, a decompressor 𝒟 reconstructs <i>x</i> from <i>p</i>. We call a pair (<i>m,x</i>) <i>achievable</i> for (𝒞,𝒟) if this reconstruction is successful. We introduce the notion of an optimal compressor 𝒞<sub>opt</sub> by the following universality property: For any compressor-decompressor pair (𝒞,𝒟), there exists a decompressor 𝒟<sup>′</sup> such that if <i>(m,x)</i> is achievable for (𝒞,𝒟), then (<i>m</i> + Δ , <i>x</i>) is achievable for (𝒞<sub>opt</sub>, 𝒟<sup>′</sup>), where Δ is some small value called the overhead. We show that there exists an optimal compressor that has only polylogarithmic overhead and works in probabilistic polynomial time. Differently said, for any pair (𝒞,𝒟), no matter how slow 𝒞 is, or even if 𝒞 is non-computable, 𝒞<sub><i>opt</i></sub> is a fixed compressor that in polynomial time produces codes almost as short as those of 𝒞. The cost is that the corresponding decompressor is slower.</p><p>We also show that each such optimal compressor can be used for distributed compression, in which case it can achieve optimal compression rates as given in the Slepian–Wolf theorem and even for the Kolmogorov complexity variant of this theorem.</p>","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"22 6","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Algorithm for Euclidean Shortest Paths in the Plane
Pub Date: 2023-03-21. DOI: https://dl.acm.org/doi/10.1145/3580475
Haitao Wang
Given a set of pairwise disjoint polygonal obstacles in the plane, finding an obstacle-avoiding Euclidean shortest path between two points is a classical problem in computational geometry and has been studied extensively. Previously, Hershberger and Suri (in SIAM Journal on Computing, 1999) gave an algorithm of O(n log n) time and O(n log n) space, where n is the total number of vertices of all obstacles. Recently, by modifying Hershberger and Suri’s algorithm, Wang (in SODA’21) reduced the space to O(n) while the runtime of the algorithm is still O(n log n). In this article, we present a new algorithm of O(n+h log h) time and O(n) space, provided that a triangulation of the free space is given, where h is the number of obstacles. The algorithm is better than the previous work when h is relatively small. Our algorithm builds a shortest path map for a source point s so that given any query point t, the shortest path length from s to t can be computed in O(log n) time and a shortest s-t path can be produced in additional time linear in the number of edges of the path.
{"title":"A New Algorithm for Euclidean Shortest Paths in the Plane","authors":"Haitao Wang","doi":"https://dl.acm.org/doi/10.1145/3580475","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3580475","url":null,"abstract":"<p>Given a set of pairwise disjoint polygonal obstacles in the plane, finding an obstacle-avoiding Euclidean shortest path between two points is a classical problem in computational geometry and has been studied extensively. Previously, Hershberger and Suri (in <i>SIAM Journal on Computing</i>, 1999) gave an algorithm of <i>O(n</i> log <i>n</i>) time and <i>O(n</i> log <i>n</i>) space, where <i>n</i> is the total number of vertices of all obstacles. Recently, by modifying Hershberger and Suri’s algorithm, Wang (in SODA’21) reduced the space to <i>O(n)</i> while the runtime of the algorithm is still <i>O(n</i> log <i>n</i>). In this article, we present a new algorithm of <i>O(n+h</i> log <i>h</i>) time and <i>O(n)</i> space, provided that a triangulation of the free space is given, where <i>h</i> is the number of obstacles. The algorithm is better than the previous work when <i>h</i> is relatively small. Our algorithm builds a shortest path map for a source point <i>s</i> so that given any query point <i>t</i>, the shortest path length from <i>s</i> to <i>t</i> can be computed in <i>O</i>(log <i>n</i>) time and a shortest <i>s</i>-<i>t</i> path can be produced in additional time linear in the number of edges of the path.</p>","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"2 2","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Price of Anarchy of Strategic Queuing Systems
J. Gaitonde, É. Tardos
Pub Date: 2023-03-17. DOI: https://doi.org/10.1145/3587250
Bounding the price of anarchy, which quantifies the damage to social welfare due to selfish behavior of the participants, has been an important area of research in algorithmic game theory. Classical work on such bounds in repeated games makes the strong assumption that the subsequent rounds of the repeated games are independent beyond any influence on play from past history. This work studies such bounds in environments that themselves change due to the actions of the agents. Concretely, we consider this problem in discrete-time queuing systems, where competitive queues try to get their packets served. In this model, a queue gets to send a packet at each step to one of the servers, which will attempt to serve the oldest arriving packet, and unprocessed packets are returned to each queue. We model this as a repeated game where queues compete for the capacity of the servers, but where the state of the game evolves as the length of each queue varies. We analyze this queuing system from multiple perspectives. As a baseline measure, we first establish precise conditions on the queuing arrival rates and service capacities that ensure all packets clear efficiently under centralized coordination. We then show that if queues strategically choose servers according to independent and stationary distributions, the system remains stable provided it would be stable under coordination with arrival rates scaled up by a factor of just e/(e-1). Finally, we extend these results to no-regret learning dynamics: if queues use learning algorithms satisfying the no-regret property to choose servers, then the requisite factor increases to 2, and both of these bounds are tight. Both of these results require new probabilistic techniques compared to the classical price of anarchy literature and show that in such settings, no-regret learning can exhibit efficiency loss due to myopia.
{"title":"The Price of Anarchy of Strategic Queuing Systems","authors":"J. Gaitonde, É. Tardos","doi":"10.1145/3587250","DOIUrl":"https://doi.org/10.1145/3587250","url":null,"abstract":"Bounding the price of anarchy, which quantifies the damage to social welfare due to selfish behavior of the participants, has been an important area of research in algorithmic game theory. Classical work on such bounds in repeated games makes the strong assumption that the subsequent rounds of the repeated games are independent beyond any influence on play from past history. This work studies such bounds in environments that themselves change due to the actions of the agents. Concretely, we consider this problem in discrete-time queuing systems, where competitive queues try to get their packets served. In this model, a queue gets to send a packet at each step to one of the servers, which will attempt to serve the oldest arriving packet, and unprocessed packets are returned to each queue. We model this as a repeated game where queues compete for the capacity of the servers, but where the state of the game evolves as the length of each queue varies. We analyze this queuing system from multiple perspectives. As a baseline measure, we first establish precise conditions on the queuing arrival rates and service capacities that ensure all packets clear efficiently under centralized coordination. We then show that if queues strategically choose servers according to independent and stationary distributions, the system remains stable provided it would be stable under coordination with arrival rates scaled up by a factor of just (frac{e}{e-1}) . Finally, we extend these results to no-regret learning dynamics: if queues use learning algorithms satisfying the no-regret property to choose servers, then the requisite factor increases to 2, and both of these bounds are tight. Both of these results require new probabilistic techniques compared to the classical price of anarchy literature and show that in such settings, no-regret learning can exhibit efficiency loss due to myopia.","PeriodicalId":50022,"journal":{"name":"Journal of the ACM","volume":"128 1","pages":"1 - 63"},"PeriodicalIF":2.5,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88165109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}