Tight Runtime Bounds for Static Unary Unbiased Evolutionary Algorithms on Linear Functions
Pub Date: 2024-07-22 | DOI: 10.1007/s00453-024-01258-9
Carola Doerr, Duri Andrea Janett, Johannes Lengler
In a seminal paper in 2013, Witt showed that the (1+1) Evolutionary Algorithm with standard bit mutation needs time \((1+o(1))\, n \ln n / p_1\) to find the optimum of any linear function, as long as the probability \(p_1\) to flip exactly one bit is \(\Theta(1)\). In this paper we investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator. This situation is notably different, since the stochastic domination argument used for the lower bound by Witt no longer holds. In particular, starting closer to the optimum is not necessarily an advantage, and OneMax is no longer the easiest function for arbitrary starting positions. Nevertheless, we show that Witt’s result carries over if \(p_1\) is not too small, with different constraints for upper and lower bounds, and if the number of flipped bits has bounded expectation \(\chi\). Notably, this includes some of the heavy-tail mutation operators used in fast genetic algorithms, but not all of them. We also give examples showing that algorithms with unbounded \(\chi\) have qualitatively different trajectories close to the optimum.
{"title":"Tight Runtime Bounds for Static Unary Unbiased Evolutionary Algorithms on Linear Functions","authors":"Carola Doerr, Duri Andrea Janett, Johannes Lengler","doi":"10.1007/s00453-024-01258-9","DOIUrl":"10.1007/s00453-024-01258-9","url":null,"abstract":"<div><p>In a seminal paper in 2013, Witt showed that the (1+1) Evolutionary Algorithm with standard bit mutation needs time <span>((1+o(1))n ln n/p_1)</span> to find the optimum of any linear function, as long as the probability <span>(p_1)</span> to flip exactly one bit is <span>(Theta (1))</span>. In this paper we investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator. This situation is notably different, since the stochastic domination argument used for the lower bound by Witt no longer holds. In particular, starting closer to the optimum is not necessarily an advantage, and OneMax is no longer the easiest function for arbitrary starting positions. Nevertheless, we show that Witt’s result carries over if <span>(p_1)</span> is not too small, with different constraints for upper and lower bounds, and if the number of flipped bits has bounded expectation <span>(chi )</span>. Notably, this includes some of the heavy-tail mutation operators used in fast genetic algorithms, but not all of them. We also give examples showing that algorithms with unbounded <span>(chi )</span> have qualitatively different trajectories close to the optimum.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 10","pages":"3115 - 3152"},"PeriodicalIF":0.9,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling
Pub Date: 2024-07-18 | DOI: 10.1007/s00453-024-01252-1
Spencer Compton, Slobodan Mitrović, Ronitt Rubinfeld
Interval scheduling is a basic algorithmic problem and a classical task in combinatorial optimization. We develop techniques for partitioning and grouping jobs based on their starting/ending times, enabling us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in a dynamic setting produces several new results. For \((1+\varepsilon)\)-approximation of job scheduling of n jobs on a single machine, we develop a fully dynamic algorithm with \(O(\nicefrac{\log n}{\varepsilon})\) update and \(O(\log n)\) query worst-case time. Our techniques are also applicable in a setting where jobs have weights. We design a fully dynamic deterministic algorithm whose worst-case update and query times are \(\text{poly}(\log n, \frac{1}{\varepsilon})\). This is the first algorithm that maintains a \((1+\varepsilon)\)-approximation of the maximum independent set of a collection of weighted intervals in \(\text{poly}(\log n, \frac{1}{\varepsilon})\) time per update/query. This is an exponential improvement in \(1/\varepsilon\) over the running time of an algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020]. Our approach also removes all dependence on the values of the jobs’ starting/ending times and weights.
{"title":"New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling","authors":"Spencer Compton, Slobodan Mitrović, Ronitt Rubinfeld","doi":"10.1007/s00453-024-01252-1","DOIUrl":"10.1007/s00453-024-01252-1","url":null,"abstract":"<div><p>Interval scheduling is a basic algorithmic problem and a classical task in combinatorial optimization. We develop techniques for partitioning and grouping jobs based on their starting/ending times, enabling us to view an instance of interval scheduling on <i>many</i> jobs as a union of multiple interval scheduling instances, each containing only <i>a few</i> jobs. Instantiating these techniques in a dynamic setting produces several new results. For <span>((1+varepsilon ))</span>-approximation of job scheduling of <i>n</i> jobs on a single machine, we develop a fully dynamic algorithm with <span>(O(nicefrac {log {n}}{varepsilon }))</span> update and <span>(O(log {n}))</span> query worst-case time. Our techniques are also applicable in a setting where jobs have weights. We design a fully dynamic <i>deterministic</i> algorithm whose worst-case update and query times are <span>(text {poly} (log n,frac{1}{varepsilon }))</span>. This is <i>the first</i> algorithm that maintains a <span>((1+varepsilon ))</span>-approximation of the maximum independent set of a collection of weighted intervals in <span>(text {poly} (log n,frac{1}{varepsilon }))</span> time updates/queries. This is an exponential improvement in <span>(1/varepsilon )</span> over the running time of an algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020]. Our approach also removes all dependence on the values of the jobs’ starting/ending times and weights.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2997 - 3026"},"PeriodicalIF":0.9,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-crossing Hamiltonian Paths and Cycles in Output-Polynomial Time
Pub Date: 2024-07-18 | DOI: 10.1007/s00453-024-01255-y
David Eppstein
We show that, for planar point sets, the number of non-crossing Hamiltonian paths is polynomially bounded in the number of non-crossing paths, and the number of non-crossing Hamiltonian cycles (polygonalizations) is polynomially bounded in the number of surrounding cycles. As a consequence, we can list the non-crossing Hamiltonian paths or the polygonalizations, in time polynomial in the output size, by filtering the output of simple backtracking algorithms for non-crossing paths or surrounding cycles respectively. We do not assume that the points are in general position. To prove these results we relate the numbers of non-crossing structures to two easily-computed parameters of the point set: the minimum number of points whose removal results in a collinear set, and the number of points interior to the convex hull. These relations also lead to polynomial-time approximation algorithms for the numbers of structures of all four types, accurate to within a constant factor of the logarithm of these numbers.
{"title":"Non-crossing Hamiltonian Paths and Cycles in Output-Polynomial Time","authors":"David Eppstein","doi":"10.1007/s00453-024-01255-y","DOIUrl":"10.1007/s00453-024-01255-y","url":null,"abstract":"<div><p>We show that, for planar point sets, the number of non-crossing Hamiltonian paths is polynomially bounded in the number of non-crossing paths, and the number of non-crossing Hamiltonian cycles (polygonalizations) is polynomially bounded in the number of surrounding cycles. As a consequence, we can list the non-crossing Hamiltonian paths or the polygonalizations, in time polynomial in the output size, by filtering the output of simple backtracking algorithms for non-crossing paths or surrounding cycles respectively. We do not assume that the points are in general position. To prove these results we relate the numbers of non-crossing structures to two easily-computed parameters of the point set: the minimum number of points whose removal results in a collinear set, and the number of points interior to the convex hull. These relations also lead to polynomial-time approximation algorithms for the numbers of structures of all four types, accurate to within a constant factor of the logarithm of these numbers.\u0000</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"3027 - 3053"},"PeriodicalIF":0.9,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01255-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On a Traveling Salesman Problem for Points in the Unit Cube
Pub Date: 2024-07-18 | DOI: 10.1007/s00453-024-01257-w
József Balogh, Felix Christian Clemen, Adrian Dumitrescu
Let X be an n-element point set in the k-dimensional unit cube \([0,1]^k\), where \(k \ge 2\). According to an old result of Bollobás and Meir (Oper Res Lett 11:19–21, 1992), there exists a cycle (tour) \(x_1, x_2, \ldots, x_n\) through the n points such that \(\left(\sum_{i=1}^n |x_i - x_{i+1}|^k\right)^{1/k} \le c_k\), where \(|x-y|\) is the Euclidean distance between x and y, \(c_k\) is an absolute constant that depends only on k, and \(x_{n+1} \equiv x_1\). From the other direction, for every \(k \ge 2\) and \(n \ge 2\), there exist n points in \([0,1]^k\) such that their shortest tour satisfies \(\left(\sum_{i=1}^n |x_i - x_{i+1}|^k\right)^{1/k} = 2^{1/k} \cdot \sqrt{k}\). For the plane, the best constant is \(c_2 = 2\), and this is the only exact value known. Bollobás and Meir showed that one can take \(c_k = 9 \left(\frac{2}{3}\right)^{1/k} \cdot \sqrt{k}\) for every \(k \ge 3\) and conjectured that the best constant is \(c_k = 2^{1/k} \cdot \sqrt{k}\) for every \(k \ge 2\). Here we significantly improve the upper bound and show that one can take \(c_k = 3\sqrt{5} \left(\frac{2}{3}\right)^{1/k} \cdot \sqrt{k}\) or \(c_k = 2.91 \sqrt{k}\,(1+o_k(1))\). Our bounds are constructive. We also show that \(c_3 \ge 2^{7/6}\), which disproves the conjecture for \(k = 3\). Connections to matching problems, power assignment problems, and related problems, including algorithms, are discussed in this context. A slightly revised version of the Bollobás–Meir conjecture is proposed.
{"title":"On a Traveling Salesman Problem for Points in the Unit Cube","authors":"József Balogh, Felix Christian Clemen, Adrian Dumitrescu","doi":"10.1007/s00453-024-01257-w","DOIUrl":"10.1007/s00453-024-01257-w","url":null,"abstract":"<div><p>Let <i>X</i> be an <i>n</i>-element point set in the <i>k</i>-dimensional unit cube <span>([0,1]^k)</span> where <span>(k ge 2)</span>. According to an old result of Bollobás and Meir (Oper Res Lett 11:19–21, 1992) , there exists a cycle (tour) <span>(x_1, x_2, ldots , x_n)</span> through the <i>n</i> points, such that <span>(left( sum _{i=1}^n |x_i - x_{i+1}|^k right) ^{1/k} le c_k)</span>, where <span>(|x-y|)</span> is the Euclidean distance between <i>x</i> and <i>y</i>, and <span>(c_k)</span> is an absolute constant that depends only on <i>k</i>, where <span>(x_{n+1} equiv x_1)</span>. From the other direction, for every <span>(k ge 2)</span> and <span>(n ge 2)</span>, there exist <i>n</i> points in <span>([0,1]^k)</span>, such that their shortest tour satisfies <span>(left( sum _{i=1}^n |x_i - x_{i+1}|^k right) ^{1/k} = 2^{1/k} cdot sqrt{k})</span>. For the plane, the best constant is <span>(c_2=2)</span> and this is the only exact value known. Bollobás and Meir showed that one can take <span>(c_k = 9 left( frac{2}{3} right) ^{1/k} cdot sqrt{k})</span> for every <span>(k ge 3)</span> and conjectured that the best constant is <span>(c_k = 2^{1/k} cdot sqrt{k})</span>, for every <span>(k ge 2)</span>. Here we significantly improve the upper bound and show that one can take <span>(c_k = 3 sqrt{5} left( frac{2}{3} right) ^{1/k} cdot sqrt{k})</span> or <span>(c_k = 2.91 sqrt{k} (1+o_k(1)))</span>. Our bounds are constructive. We also show that <span>(c_3 ge 2^{7/6})</span>, which disproves the conjecture for <span>(k=3)</span>. Connections to matching problems, power assignment problems, related problems, including algorithms, are discussed in this context. A slightly revised version of the Bollobás–Meir conjecture is proposed.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"3054 - 3078"},"PeriodicalIF":0.9,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01257-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sublinear Algorithms in T-Interval Dynamic Networks
Pub Date: 2024-07-12 | DOI: 10.1007/s00453-024-01250-3
Irvan Jahja, Haifeng Yu
We consider standard T-interval dynamic networks, under the synchronous timing model and the broadcast CONGEST model. In a T-interval dynamic network, the set of nodes is always fixed and there are no node failures. The edges in the network are always undirected, but the set of edges in the topology may change arbitrarily from round to round, as determined by some adversary and subject to the following constraint: for every T consecutive rounds, the topologies in those rounds must contain a common connected spanning subgraph. Let \(H_r\) be the maximum (in terms of number of edges) such subgraph for rounds r through \(r+T-1\). We define the backbone diameter d of a T-interval dynamic network to be the maximum diameter of all such \(H_r\)’s, for \(r \ge 1\). We use n to denote the number of nodes in the network. Within such a context, we consider a range of fundamental distributed computing problems including Count/Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood. Existing algorithms for these problems all have time complexity of \(\Omega(n)\) rounds, even for \(T = \infty\) and even when d is as small as O(1). This paper presents a novel approach/framework, based on the idea of massively parallel aggregation. Following this approach, we develop a novel deterministic Count algorithm with \(O(d^3 \log^2 n)\) complexity, for T-interval dynamic networks with \(T \ge c \cdot d^2 \log^2 n\). Here c is a (sufficiently large) constant independent of d, n, and T. To our knowledge, our algorithm is the very first such algorithm whose complexity does not contain a \(\Theta(n)\) term. This paper further develops novel algorithms for solving Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood, while incurring \(O(d^3\, \text{polylog}(n))\) complexity. Again, for all these problems, our algorithms are the first ones whose time complexity does not contain a \(\Theta(n)\) term.
{"title":"Sublinear Algorithms in T-Interval Dynamic Networks","authors":"Irvan Jahja, Haifeng Yu","doi":"10.1007/s00453-024-01250-3","DOIUrl":"10.1007/s00453-024-01250-3","url":null,"abstract":"<div><p>We consider standard <i>T</i>-<i>interval dynamic networks</i>, under the synchronous timing model and the broadcast CONGEST model. In a <i>T</i>-<i>interval dynamic network</i>, the set of nodes is always fixed and there are no node failures. The edges in the network are always undirected, but the set of edges in the topology may change arbitrarily from round to round, as determined by some <i>adversary</i> and subject to the following constraint: For every <i>T</i> consecutive rounds, the topologies in those rounds must contain a common connected spanning subgraph. Let <span>(H_r)</span> to be the maximum (in terms of number of edges) such subgraph for round <i>r</i> through <span>(r+T-1)</span>. We define the <i>backbone diameter</i> <i>d</i> of a <i>T</i>-interval dynamic network to be the maximum diameter of all such <span>(H_r)</span>’s, for <span>(rge 1)</span>. We use <i>n</i> to denote the number of nodes in the network. Within such a context, we consider a range of fundamental distributed computing problems including <span>Count</span>/<span>Max</span>/<span>Median</span>/<span>Sum</span>/<span>LeaderElect</span>/<span>Consensus</span>/<span>ConfirmedFlood</span>. Existing algorithms for these problems all have time complexity of <span>(Omega (n))</span> rounds, even for <span>(T=infty )</span> and even when <i>d</i> is as small as <i>O</i>(1). This paper presents a novel approach/framework, based on the idea of <i>massively parallel aggregation</i>. Following this approach, we develop a novel deterministic <span>Count</span> algorithm with <span>(O(d^3 log ^2 n))</span> complexity, for <i>T</i>-interval dynamic networks with <span>(T ge ccdot d^2 log ^2n)</span>. Here <i>c</i> is a (sufficiently large) constant independent of <i>d</i>, <i>n</i>, and <i>T</i>. To our knowledge, our algorithm is the very first such algorithm whose complexity does not contain a <span>(Theta (n))</span> term. This paper further develops novel algorithms for solving <span>Max</span>/<span>Median</span>/<span>Sum</span>/<span>LeaderElect</span>/<span>Consensus</span>/<span>ConfirmedFlood</span>, while incurring <span>(O(d^3 text{ polylog }(n)))</span> complexity. Again, for all these problems, our algorithms are the first ones whose time complexity does not contain a <span>(Theta (n))</span> term.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2959 - 2996"},"PeriodicalIF":0.9,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141612701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stagnation Detection in Highly Multimodal Fitness Landscapes
Pub Date: 2024-07-02 | DOI: 10.1007/s00453-024-01249-w
Amirhossein Rajabi, Carsten Witt
Stagnation detection has been proposed as a mechanism for randomized search heuristics to escape from local optima by automatically increasing the size of the neighborhood to find the so-called gap size, i.e., the distance to the next improvement. Its usefulness has mostly been considered in simple multimodal landscapes with few local optima that could be crossed one after another. In multimodal landscapes with a more complex location of optima of similar gap size, stagnation detection suffers from the fact that the neighborhood size is frequently reset to 1 without using gap sizes that were promising in the past. In this paper, we investigate a new mechanism called radius memory which can be added to stagnation detection to control the search radius more carefully by giving preference to values that were successful in the past. We implement this idea in an algorithm called SD-RLS\(^{\text{m}}\) and show, compared to previous variants of stagnation detection, that it yields speed-ups for linear functions under uniform constraints and for the minimum spanning tree problem. Moreover, its running time does not significantly deteriorate on unimodal functions and a generalization of the Jump benchmark. Finally, we present experimental results carried out to study SD-RLS\(^{\text{m}}\) and compare it with other algorithms.
{"title":"Stagnation Detection in Highly Multimodal Fitness Landscapes","authors":"Amirhossein Rajabi, Carsten Witt","doi":"10.1007/s00453-024-01249-w","DOIUrl":"10.1007/s00453-024-01249-w","url":null,"abstract":"<div><p>Stagnation detection has been proposed as a mechanism for randomized search heuristics to escape from local optima by automatically increasing the size of the neighborhood to find the so-called gap size, i. e., the distance to the next improvement. Its usefulness has mostly been considered in simple multimodal landscapes with few local optima that could be crossed one after another. In multimodal landscapes with a more complex location of optima of similar gap size, stagnation detection suffers from the fact that the neighborhood size is frequently reset to 1 without using gap sizes that were promising in the past. In this paper, we investigate a new mechanism called <i>radius memory</i> which can be added to stagnation detection to control the search radius more carefully by giving preference to values that were successful in the past. We implement this idea in an algorithm called SD-RLS<span>(^{text {m}})</span> and show compared to previous variants of stagnation detection that it yields speed-ups for linear functions under uniform constraints and the minimum spanning tree problem. Moreover, its running time does not significantly deteriorate on unimodal functions and a generalization of the <span>Jump</span> benchmark. Finally, we present experimental results carried out to study SD-RLS<span>(^{text {m}})</span> and compare it with other algorithms.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2929 - 2958"},"PeriodicalIF":0.9,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01249-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parameterized Complexity of Streaming Diameter and Connectivity Problems
Pub Date: 2024-06-19 | DOI: 10.1007/s00453-024-01246-z
Jelle J. Oostveen, Erik Jan van Leeuwen
We initiate the investigation of the parameterized complexity of Diameter and Connectivity in the streaming paradigm. On the positive end, we show that knowing a vertex cover of size k allows for algorithms in the Adjacency List (AL) streaming model whose number of passes is constant and whose memory is \(\mathcal{O}(\log n)\) for any fixed k. Underlying these algorithms is a method to execute a breadth-first search in \(\mathcal{O}(k)\) passes and \(\mathcal{O}(k \log n)\) bits of memory. On the negative end, we show that many other parameters lead to lower bounds in the AL model, where \(\Omega(n/p)\) bits of memory are needed for any p-pass algorithm even for constant parameter values. In particular, this holds for graphs with a known modulator (deletion set) of constant size to a graph that has no induced subgraph isomorphic to a fixed graph H, for most H. For some cases, we can also show one-pass, \(\Omega(n \log n)\) bits of memory lower bounds. We also prove a much stronger \(\Omega(n^2/p)\) lower bound for Diameter on bipartite graphs. Finally, using the insights we developed into streaming parameterized graph exploration algorithms, we show a new streaming kernelization algorithm for computing a vertex cover of size k. This yields a kernel of 2k vertices (with \(\mathcal{O}(k^2)\) edges) produced as a stream in \(\text{poly}(k)\) passes and only \(\mathcal{O}(k \log n)\) bits of memory.
{"title":"Parameterized Complexity of Streaming Diameter and Connectivity Problems","authors":"Jelle J. Oostveen, Erik Jan van Leeuwen","doi":"10.1007/s00453-024-01246-z","DOIUrl":"10.1007/s00453-024-01246-z","url":null,"abstract":"<div><p>We initiate the investigation of the parameterized complexity of <span>Diameter</span> and <span>Connectivity</span> in the streaming paradigm. On the positive end, we show that knowing a vertex cover of size <i>k</i> allows for algorithms in the Adjacency List (AL) streaming model whose number of passes is constant and memory is <span>(mathcal {O}(log n))</span> for any fixed <i>k</i>. Underlying these algorithms is a method to execute a breadth-first search in <span>(mathcal {O}(k))</span> passes and <span>(mathcal {O}(k log n))</span> bits of memory. On the negative end, we show that many other parameters lead to lower bounds in the AL model, where <span>(Omega (n/p))</span> bits of memory is needed for any <i>p</i>-pass algorithm even for constant parameter values. In particular, this holds for graphs with a known modulator (deletion set) of constant size to a graph that has no induced subgraph isomorphic to a fixed graph <i>H</i>, for most <i>H</i>. For some cases, we can also show one-pass, <span>(Omega (n log n))</span> bits of memory lower bounds. We also prove a much stronger <span>(Omega (n^2/p))</span> lower bound for <span>Diameter</span> on bipartite graphs. Finally, using the insights we developed into streaming parameterized graph exploration algorithms, we show a new streaming kernelization algorithm for computing a vertex cover of size <i>k</i>. This yields a kernel of 2<i>k</i> vertices (with <span>(mathcal {O}(k^2))</span> edges) produced as a stream in <span>(text {poly}(k))</span> passes and only <span>(mathcal {O}(k log n))</span> bits of memory.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2885 - 2928"},"PeriodicalIF":0.9,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01246-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximation Algorithms for the Two-Watchman Route in a Simple Polygon
Pub Date: 2024-06-19 | DOI: 10.1007/s00453-024-01245-0
Bengt J. Nilsson, Eli Packer
The two-watchman route problem is that of computing a pair of closed tours in an environment so that the two tours together see the whole environment and some length measure on the two tours is minimized. Two standard measures are the minmax measure, where we want the pair whose longer tour has smallest length, and the minsum measure, where we want the pair for which the sum of the tour lengths is smallest. It is known that computing a minmax two-watchman route is NP-hard for simple rectilinear polygons, and thus also for simple polygons. Also, any c-approximation algorithm for the minmax two-watchman route is automatically a 2c-approximation algorithm for the minsum two-watchman route. We exhibit two constant-factor approximation algorithms for computing minmax two-watchman routes in simple polygons with approximation factors 5.969 and 11.939, having running times \(O(n^8)\) and \(O(n^4)\) respectively, where n is the number of vertices of the polygon. We also use the same techniques to obtain a 6.922-approximation for the fixed two-watchman route problem, i.e., when the two starting points of the two tours are given as input, running in \(O(n^2)\) time.
{"title":"Approximation Algorithms for the Two-Watchman Route in a Simple Polygon","authors":"Bengt J. Nilsson, Eli Packer","doi":"10.1007/s00453-024-01245-0","DOIUrl":"10.1007/s00453-024-01245-0","url":null,"abstract":"<div><p>The <i>two-watchman route problem</i> is that of computing a pair of closed tours in an environment so that the two tours together see the whole environment and some length measure on the two tours is minimized. Two standard measures are: the minmax measure, where we want the tours where the longest of them has smallest length, and the minsum measure, where we want the tours for which the sum of their lengths is the smallest. It is known that computing a minmax two-watchman route is NP-hard for simple rectilinear polygons and thus also for simple polygons. Also, any <i>c</i>-approximation algorithm for the minmax two-watchman route is automatically a 2<i>c</i>-approximation algorithm for the minsum two-watchman route. We exhibit two constant factor approximation algorithms for computing minmax two-watchman routes in simple polygons with approximation factors 5.969 and 11.939, having running times <span>(O(n^8))</span> and <span>(O(n^4))</span> respectively, where <i>n</i> is the number of vertices of the polygon. We also use the same techniques to obtain a 6.922-approximation for the <i>fixed two-watchman route problem</i> running in <span>(O(n^2))</span> time, i.e., when two starting points of the two tours are given as input.\u0000</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2845 - 2884"},"PeriodicalIF":0.9,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01245-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication
Pub Date: 2024-06-17 | DOI: 10.1007/s00453-024-01247-y
David G. Harris
Karppa and Kaski (in: Proceedings of the 30th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2019) proposed a novel "broken" or "opportunistic" matrix multiplication algorithm, based on a variant of Strassen’s algorithm, and used it to develop new algorithms for Boolean matrix multiplication, among other tasks. Their algorithm can compute Boolean matrix multiplication in \(O(n^{2.778})\) time. While asymptotically faster matrix multiplication algorithms exist, most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken multiplication algorithm to approximately compute matrix multiplication, either for real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, our algorithm has runtime \(O(n^{2.763})\), a slight improvement over the Karppa–Kaski algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, we also estimate the concrete runtime of our algorithm for some large-scale sample problems. It appears that for these parameters, further optimizations are still needed to make our algorithm competitive.
{"title":"Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication","authors":"David G. Harris","doi":"10.1007/s00453-024-01247-y","DOIUrl":"10.1007/s00453-024-01247-y","url":null,"abstract":"<div><p>As proposed by Karppa and Kaski (in: Proceedings 30th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2019) a novel “broken\" or \"opportunistic\" matrix multiplication algorithm, based on a variant of Strassen’s algorithm, and used this to develop new algorithms for Boolean matrix multiplication, among other tasks. Their algorithm can compute Boolean matrix multiplication in <span>(O(n^{2.778}))</span> time. While asymptotically faster matrix multiplication algorithms exist, most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken multiplication algorithm to approximately compute matrix multiplication, either for real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, our algorithm has runtime <span>(O(n^{2.763}))</span>, a slight improvement over the Karppa–Kaski algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, we also estimate the concrete runtime for our algorithm for some large-scale sample problems. It appears that for these parameters, further optimizations are still needed to make our algorithm competitive.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2822 - 2844"},"PeriodicalIF":0.9,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Unit Profit Knapsack with Predictions
Pub Date: 2024-06-13 | DOI: 10.1007/s00453-024-01239-y
Joan Boyar, Lene M. Favrholdt, Kim S. Larsen
A variant of the online knapsack problem is considered in the setting of predictions. In Unit Profit Knapsack, the items have unit profit, i.e., the goal is to pack as many items as possible. For Online Unit Profit Knapsack, the competitive ratio is unbounded. In contrast, it is easy to find an optimal solution offline: pack as many of the smallest items as possible into the knapsack. The prediction available to the online algorithm is the average size of those smallest items that fit in the knapsack. For the prediction error in this hard online problem, we use the ratio \(r = \frac{a}{\hat{a}}\), where a is the actual value of this average size and \(\hat{a}\) is the prediction. We give an algorithm which is \(\frac{e-1}{e}\)-competitive if \(r = 1\), and this is best possible among online algorithms knowing a and nothing else. More generally, the algorithm has a competitive ratio of \(\frac{e-1}{e} r\) if \(r \le 1\), and \(\frac{e-r}{e} r\) if \(1 \le r < e\). Any algorithm with a better competitive ratio for some \(r < 1\) will have a worse competitive ratio for some \(r > 1\). To obtain a positive competitive ratio for all r, we adjust the algorithm, resulting in a competitive ratio of \(\frac{1}{2r}\) for \(r \ge 1\) and \(\frac{r}{2}\) for \(r \le 1\). We show that improving the result for any \(r < 1\) leads to a worse result for some \(r > 1\).
{"title":"Online Unit Profit Knapsack with Predictions","authors":"Joan Boyar, Lene M. Favrholdt, Kim S. Larsen","doi":"10.1007/s00453-024-01239-y","DOIUrl":"10.1007/s00453-024-01239-y","url":null,"abstract":"<div><p>A variant of the online knapsack problem is considered in the setting of predictions. In Unit Profit Knapsack, the items have unit profit, i.e., the goal is to pack as many items as possible. For Online Unit Profit Knapsack, the competitive ratio is unbounded. In contrast, it is easy to find an optimal solution offline: Pack as many of the smallest items as possible into the knapsack. The prediction available to the online algorithm is the average size of those smallest items that fit in the knapsack. For the prediction error in this hard online problem, we use the ratio <span>(r=frac{a}{hat{a}})</span> where <i>a</i> is the actual value for this average size and <span>(hat{a})</span> is the prediction. We give an algorithm which is <span>(frac{e-1}{e})</span>-competitive, if <span>(r=1)</span>, and this is best possible among online algorithms knowing <i>a</i> and nothing else. More generally, the algorithm has a competitive ratio of <span>(frac{e-1}{e}r)</span>, if <span>(r le 1)</span>, and <span>(frac{e-r}{e}r)</span>, if <span>(1 le r < e)</span>. Any algorithm with a better competitive ratio for some <span>(r<1)</span> will have a worse competitive ratio for some <span>(r>1)</span>. To obtain a positive competitive ratio for all <i>r</i>, we adjust the algorithm, resulting in a competitive ratio of <span>(frac{1}{2r})</span> for <span>(rge 1)</span> and <span>(frac{r}{2})</span> for <span>(rle 1)</span>. We show that improving the result for any <span>(r< 1)</span> leads to a worse result for some <span>(r>1)</span>.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2786 - 2821"},"PeriodicalIF":0.9,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01239-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141345495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}