Stagnation Detection in Highly Multimodal Fitness Landscapes
Pub Date: 2024-07-02 · DOI: 10.1007/s00453-024-01249-w
Amirhossein Rajabi, Carsten Witt
Stagnation detection has been proposed as a mechanism for randomized search heuristics to escape from local optima by automatically increasing the size of the neighborhood to find the so-called gap size, i.e., the distance to the next improvement. Its usefulness has mostly been considered in simple multimodal landscapes with few local optima that could be crossed one after another. In multimodal landscapes with a more complex arrangement of optima of similar gap size, stagnation detection suffers from the fact that the neighborhood size is frequently reset to 1 without using gap sizes that were promising in the past. In this paper, we investigate a new mechanism called radius memory, which can be added to stagnation detection to control the search radius more carefully by giving preference to values that were successful in the past. We implement this idea in an algorithm called SD-RLS$^{\text{m}}$ and show, compared to previous variants of stagnation detection, that it yields speed-ups for linear functions under uniform constraints and for the minimum spanning tree problem. Moreover, its running time does not significantly deteriorate on unimodal functions and a generalization of the Jump benchmark. Finally, we present experimental results carried out to study SD-RLS$^{\text{m}}$ and compare it with other algorithms.
{"title":"Stagnation Detection in Highly Multimodal Fitness Landscapes","authors":"Amirhossein Rajabi, Carsten Witt","doi":"10.1007/s00453-024-01249-w","DOIUrl":"10.1007/s00453-024-01249-w","url":null,"abstract":"<div><p>Stagnation detection has been proposed as a mechanism for randomized search heuristics to escape from local optima by automatically increasing the size of the neighborhood to find the so-called gap size, i. e., the distance to the next improvement. Its usefulness has mostly been considered in simple multimodal landscapes with few local optima that could be crossed one after another. In multimodal landscapes with a more complex location of optima of similar gap size, stagnation detection suffers from the fact that the neighborhood size is frequently reset to 1 without using gap sizes that were promising in the past. In this paper, we investigate a new mechanism called <i>radius memory</i> which can be added to stagnation detection to control the search radius more carefully by giving preference to values that were successful in the past. We implement this idea in an algorithm called SD-RLS<span>(^{text {m}})</span> and show compared to previous variants of stagnation detection that it yields speed-ups for linear functions under uniform constraints and the minimum spanning tree problem. Moreover, its running time does not significantly deteriorate on unimodal functions and a generalization of the <span>Jump</span> benchmark. Finally, we present experimental results carried out to study SD-RLS<span>(^{text {m}})</span> and compare it with other algorithms.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2929 - 2958"},"PeriodicalIF":0.9,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01249-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parameterized Complexity of Streaming Diameter and Connectivity Problems
Pub Date: 2024-06-19 · DOI: 10.1007/s00453-024-01246-z
Jelle J. Oostveen, Erik Jan van Leeuwen
We initiate the investigation of the parameterized complexity of Diameter and Connectivity in the streaming paradigm. On the positive end, we show that knowing a vertex cover of size $k$ allows for algorithms in the Adjacency List (AL) streaming model whose number of passes is constant and whose memory is $\mathcal{O}(\log n)$ for any fixed $k$. Underlying these algorithms is a method to execute a breadth-first search in $\mathcal{O}(k)$ passes and $\mathcal{O}(k \log n)$ bits of memory. On the negative end, we show that many other parameters lead to lower bounds in the AL model, where $\Omega(n/p)$ bits of memory are needed for any $p$-pass algorithm even for constant parameter values. In particular, this holds for graphs with a known modulator (deletion set) of constant size to a graph that has no induced subgraph isomorphic to a fixed graph $H$, for most $H$. For some cases, we can also show one-pass, $\Omega(n \log n)$ bits of memory lower bounds. We also prove a much stronger $\Omega(n^2/p)$ lower bound for Diameter on bipartite graphs. Finally, using the insights we developed into streaming parameterized graph exploration algorithms, we show a new streaming kernelization algorithm for computing a vertex cover of size $k$. This yields a kernel of $2k$ vertices (with $\mathcal{O}(k^2)$ edges) produced as a stream in $\text{poly}(k)$ passes and only $\mathcal{O}(k \log n)$ bits of memory.
{"title":"Parameterized Complexity of Streaming Diameter and Connectivity Problems","authors":"Jelle J. Oostveen, Erik Jan van Leeuwen","doi":"10.1007/s00453-024-01246-z","DOIUrl":"10.1007/s00453-024-01246-z","url":null,"abstract":"<div><p>We initiate the investigation of the parameterized complexity of <span>Diameter</span> and <span>Connectivity</span> in the streaming paradigm. On the positive end, we show that knowing a vertex cover of size <i>k</i> allows for algorithms in the Adjacency List (AL) streaming model whose number of passes is constant and memory is <span>(mathcal {O}(log n))</span> for any fixed <i>k</i>. Underlying these algorithms is a method to execute a breadth-first search in <span>(mathcal {O}(k))</span> passes and <span>(mathcal {O}(k log n))</span> bits of memory. On the negative end, we show that many other parameters lead to lower bounds in the AL model, where <span>(Omega (n/p))</span> bits of memory is needed for any <i>p</i>-pass algorithm even for constant parameter values. In particular, this holds for graphs with a known modulator (deletion set) of constant size to a graph that has no induced subgraph isomorphic to a fixed graph <i>H</i>, for most <i>H</i>. For some cases, we can also show one-pass, <span>(Omega (n log n))</span> bits of memory lower bounds. We also prove a much stronger <span>(Omega (n^2/p))</span> lower bound for <span>Diameter</span> on bipartite graphs. Finally, using the insights we developed into streaming parameterized graph exploration algorithms, we show a new streaming kernelization algorithm for computing a vertex cover of size <i>k</i>. This yields a kernel of 2<i>k</i> vertices (with <span>(mathcal {O}(k^2))</span> edges) produced as a stream in <span>(text {poly}(k))</span> passes and only <span>(mathcal {O}(k log n))</span> bits of memory.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2885 - 2928"},"PeriodicalIF":0.9,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01246-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximation Algorithms for the Two-Watchman Route in a Simple Polygon
Pub Date: 2024-06-19 · DOI: 10.1007/s00453-024-01245-0
Bengt J. Nilsson, Eli Packer
The two-watchman route problem is that of computing a pair of closed tours in an environment so that the two tours together see the whole environment and some length measure of the two tours is minimized. Two standard measures are the minmax measure, where we want the pair whose longer tour is as short as possible, and the minsum measure, where we want the pair whose total length is as small as possible. It is known that computing a minmax two-watchman route is NP-hard for simple rectilinear polygons and thus also for simple polygons. Also, any $c$-approximation algorithm for the minmax two-watchman route is automatically a $2c$-approximation algorithm for the minsum two-watchman route. We exhibit two constant-factor approximation algorithms for computing minmax two-watchman routes in simple polygons with approximation factors 5.969 and 11.939, having running times $O(n^8)$ and $O(n^4)$, respectively, where $n$ is the number of vertices of the polygon. We also use the same techniques to obtain a 6.922-approximation for the fixed two-watchman route problem, i.e., when the two starting points of the two tours are given as input, running in $O(n^2)$ time.
{"title":"Approximation Algorithms for the Two-Watchman Route in a Simple Polygon","authors":"Bengt J. Nilsson, Eli Packer","doi":"10.1007/s00453-024-01245-0","DOIUrl":"10.1007/s00453-024-01245-0","url":null,"abstract":"<div><p>The <i>two-watchman route problem</i> is that of computing a pair of closed tours in an environment so that the two tours together see the whole environment and some length measure on the two tours is minimized. Two standard measures are: the minmax measure, where we want the tours where the longest of them has smallest length, and the minsum measure, where we want the tours for which the sum of their lengths is the smallest. It is known that computing a minmax two-watchman route is NP-hard for simple rectilinear polygons and thus also for simple polygons. Also, any <i>c</i>-approximation algorithm for the minmax two-watchman route is automatically a 2<i>c</i>-approximation algorithm for the minsum two-watchman route. We exhibit two constant factor approximation algorithms for computing minmax two-watchman routes in simple polygons with approximation factors 5.969 and 11.939, having running times <span>(O(n^8))</span> and <span>(O(n^4))</span> respectively, where <i>n</i> is the number of vertices of the polygon. We also use the same techniques to obtain a 6.922-approximation for the <i>fixed two-watchman route problem</i> running in <span>(O(n^2))</span> time, i.e., when two starting points of the two tours are given as input.\u0000</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2845 - 2884"},"PeriodicalIF":0.9,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01245-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication
Pub Date: 2024-06-17 · DOI: 10.1007/s00453-024-01247-y
David G. Harris
Karppa and Kaski (in: Proceedings of the 30th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2019) proposed a novel “broken” or “opportunistic” matrix multiplication algorithm, based on a variant of Strassen’s algorithm, and used it to develop new algorithms for Boolean matrix multiplication, among other tasks. Their algorithm can compute Boolean matrix multiplication in $O(n^{2.778})$ time. While asymptotically faster matrix multiplication algorithms exist, most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken multiplication algorithm to approximately compute matrix multiplication, either for real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, our algorithm has runtime $O(n^{2.763})$, a slight improvement over the Karppa–Kaski algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, we also estimate the concrete runtime of our algorithm for some large-scale sample problems. It appears that for these parameters, further optimizations are still needed to make our algorithm competitive.
{"title":"Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication","authors":"David G. Harris","doi":"10.1007/s00453-024-01247-y","DOIUrl":"10.1007/s00453-024-01247-y","url":null,"abstract":"<div><p>As proposed by Karppa and Kaski (in: Proceedings 30th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2019) a novel “broken\" or \"opportunistic\" matrix multiplication algorithm, based on a variant of Strassen’s algorithm, and used this to develop new algorithms for Boolean matrix multiplication, among other tasks. Their algorithm can compute Boolean matrix multiplication in <span>(O(n^{2.778}))</span> time. While asymptotically faster matrix multiplication algorithms exist, most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken multiplication algorithm to approximately compute matrix multiplication, either for real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, our algorithm has runtime <span>(O(n^{2.763}))</span>, a slight improvement over the Karppa–Kaski algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, we also estimate the concrete runtime for our algorithm for some large-scale sample problems. It appears that for these parameters, further optimizations are still needed to make our algorithm competitive.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2822 - 2844"},"PeriodicalIF":0.9,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Unit Profit Knapsack with Predictions
Pub Date: 2024-06-13 · DOI: 10.1007/s00453-024-01239-y
Joan Boyar, Lene M. Favrholdt, Kim S. Larsen
A variant of the online knapsack problem is considered in the setting of predictions. In Unit Profit Knapsack, the items have unit profit, i.e., the goal is to pack as many items as possible. For Online Unit Profit Knapsack, the competitive ratio is unbounded. In contrast, it is easy to find an optimal solution offline: pack as many of the smallest items as possible into the knapsack. The prediction available to the online algorithm is the average size of those smallest items that fit in the knapsack. For the prediction error in this hard online problem, we use the ratio $r=\frac{a}{\hat{a}}$, where $a$ is the actual value of this average size and $\hat{a}$ is the prediction. We give an algorithm which is $\frac{e-1}{e}$-competitive if $r=1$, and this is best possible among online algorithms knowing $a$ and nothing else. More generally, the algorithm has a competitive ratio of $\frac{e-1}{e}r$ if $r \le 1$, and $\frac{e-r}{e}r$ if $1 \le r < e$. Any algorithm with a better competitive ratio for some $r<1$ will have a worse competitive ratio for some $r>1$. To obtain a positive competitive ratio for all $r$, we adjust the algorithm, resulting in a competitive ratio of $\frac{1}{2r}$ for $r\ge 1$ and $\frac{r}{2}$ for $r\le 1$. We show that improving the result for any $r<1$ leads to a worse result for some $r>1$.
{"title":"Online Unit Profit Knapsack with Predictions","authors":"Joan Boyar, Lene M. Favrholdt, Kim S. Larsen","doi":"10.1007/s00453-024-01239-y","DOIUrl":"10.1007/s00453-024-01239-y","url":null,"abstract":"<div><p>A variant of the online knapsack problem is considered in the setting of predictions. In Unit Profit Knapsack, the items have unit profit, i.e., the goal is to pack as many items as possible. For Online Unit Profit Knapsack, the competitive ratio is unbounded. In contrast, it is easy to find an optimal solution offline: Pack as many of the smallest items as possible into the knapsack. The prediction available to the online algorithm is the average size of those smallest items that fit in the knapsack. For the prediction error in this hard online problem, we use the ratio <span>(r=frac{a}{hat{a}})</span> where <i>a</i> is the actual value for this average size and <span>(hat{a})</span> is the prediction. We give an algorithm which is <span>(frac{e-1}{e})</span>-competitive, if <span>(r=1)</span>, and this is best possible among online algorithms knowing <i>a</i> and nothing else. More generally, the algorithm has a competitive ratio of <span>(frac{e-1}{e}r)</span>, if <span>(r le 1)</span>, and <span>(frac{e-r}{e}r)</span>, if <span>(1 le r < e)</span>. Any algorithm with a better competitive ratio for some <span>(r<1)</span> will have a worse competitive ratio for some <span>(r>1)</span>. To obtain a positive competitive ratio for all <i>r</i>, we adjust the algorithm, resulting in a competitive ratio of <span>(frac{1}{2r})</span> for <span>(rge 1)</span> and <span>(frac{r}{2})</span> for <span>(rle 1)</span>. We show that improving the result for any <span>(r< 1)</span> leads to a worse result for some <span>(r>1)</span>.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2786 - 2821"},"PeriodicalIF":0.9,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01239-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141345495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximate and Randomized Algorithms for Computing a Second Hamiltonian Cycle
Pub Date: 2024-06-12 · DOI: 10.1007/s00453-024-01238-z
Argyrios Deligkas, George B. Mertzios, Paul G. Spirakis, Viktor Zamaraev
In this paper we consider the following problem: given a Hamiltonian graph $G$ and a Hamiltonian cycle $C$ of $G$, can we compute a second Hamiltonian cycle $C' \ne C$ of $G$, and if yes, how quickly? If the input graph $G$ satisfies certain conditions (e.g. if every vertex of $G$ has odd degree, or if the minimum degree is large enough), it is known that such a second Hamiltonian cycle always exists. Despite substantial efforts, no subexponential-time algorithm is known for this problem. In this paper we relax the problem of computing a second Hamiltonian cycle in two ways. First, we consider approximating the length of a second longest cycle on $n$-vertex graphs with minimum degree $\delta$ and maximum degree $\Delta$. We provide a linear-time algorithm for computing a cycle $C' \ne C$ of length at least $n-4\alpha(\sqrt{n}+2\alpha)+8$, where $\alpha = \frac{\Delta-2}{\delta-2}$. This result provides a constructive proof of a recent result by Girão, Kittipassorn, and Narayanan in the regime $\frac{\Delta}{\delta} = o(\sqrt{n})$. Our second relaxation of the problem is probabilistic. We propose a randomized algorithm which computes a second Hamiltonian cycle with high probability, given that the input graph $G$ has a large enough minimum degree. More specifically, we prove that for every $0<p\le 0.02$, if the minimum degree of $G$ is at least $\frac{8}{p} \log \sqrt{8}n + 4$, then a second Hamiltonian cycle can be computed with probability at least $1 - \frac{1}{n}\left( \frac{50}{p^4} + 1 \right)$ in $\text{poly}(n) \cdot 2^{4pn}$ time. This result implies that, when the minimum degree $\delta$ is sufficiently large, we can compute with high probability a second Hamiltonian cycle faster than any known deterministic algorithm. In particular, when $\delta = \omega(\log n)$, our probabilistic algorithm works in $2^{o(n)}$ time.
{"title":"Approximate and Randomized Algorithms for Computing a Second Hamiltonian Cycle","authors":"Argyrios Deligkas, George B. Mertzios, Paul G. Spirakis, Viktor Zamaraev","doi":"10.1007/s00453-024-01238-z","DOIUrl":"10.1007/s00453-024-01238-z","url":null,"abstract":"<div><p>In this paper we consider the following problem: Given a Hamiltonian graph <i>G</i>, and a Hamiltonian cycle <i>C</i> of <i>G</i>, can we compute a second Hamiltonian cycle <span>(C^{prime } ne C)</span> of <i>G</i>, and if yes, how quickly? If the input graph <i>G</i> satisfies certain conditions (e.g. if every vertex of <i>G</i> is odd, or if the minimum degree is large enough), it is known that such a second Hamiltonian cycle always exists. Despite substantial efforts, no subexponential-time algorithm is known for this problem. In this paper we relax the problem of computing a second Hamiltonian cycle in two ways. First, we consider <i>approximating</i> the length of a second longest cycle on <i>n</i>-vertex graphs with minimum degree <span>(delta )</span> and maximum degree <span>(Delta )</span>. We provide a linear-time algorithm for computing a cycle <span>(C^{prime } ne C)</span> of length at least <span>(n-4alpha (sqrt{n}+2alpha )+8)</span>, where <span>(alpha = frac{Delta -2}{delta -2})</span>. This results provides a constructive proof of a recent result by Girão, Kittipassorn, and Narayanan in the regime of <span>(frac{Delta }{delta } = o(sqrt{n}))</span>. Our second relaxation of the problem is probabilistic. We propose a randomized algorithm which computes a second Hamiltonian cycle <i>with high probability</i>, given that the input graph <i>G</i> has a large enough minimum degree. More specifically, we prove that for every <span>(0<ple 0.02)</span>, if the minimum degree of <i>G</i> is at least <span>(frac{8}{p} log sqrt{8}n + 4)</span>, then a second Hamiltonian cycle can be computed with probability at least <span>(1 - frac{1}{n}left( frac{50}{p^4} + 1 right) )</span> in <span>(poly(n) cdot 2^{4pn})</span> time. This result implies that, when the minimum degree <span>(delta )</span> is sufficiently large, we can compute with high probability a second Hamiltonian cycle faster than any known deterministic algorithm. In particular, when <span>(delta = omega (log n))</span>, our probabilistic algorithm works in <span>(2^{o(n)})</span> time.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2766 - 2785"},"PeriodicalIF":0.9,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01238-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141351753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Geometric Covering and Piercing
Pub Date: 2024-06-03 · DOI: 10.1007/s00453-024-01244-1
Minati De, Saksham Jain, Sarat Varma Kallepalli, Satyam Singh
We consider the online version of the piercing set problem, where geometric objects arrive one by one, and the online algorithm must maintain a valid piercing set for the already arrived objects by making irrevocable decisions. It is easy to observe that any deterministic algorithm solving this problem for intervals in $\mathbb{R}$ has a competitive ratio of at least $\Omega(n)$. This paper considers the piercing set problem for similarly sized objects. We propose a deterministic online algorithm for similarly sized fat objects in $\mathbb{R}^d$. For homothetic hypercubes in $\mathbb{R}^d$ with side length in the range $[1, k]$, we propose a deterministic algorithm having a competitive ratio of at most $3^d\lceil \log_2 k\rceil + 2^d$. In the end, we show deterministic lower bounds on the competitive ratio for similarly sized $\alpha$-fat objects in $\mathbb{R}^2$ and homothetic hypercubes in $\mathbb{R}^d$. Note that piercing translated copies of a convex object is equivalent to the unit covering problem, which is well studied in the online setup. Surprisingly, no upper bound on the competitive ratio was known for the unit covering problem when the corresponding object is anything other than a ball or a hypercube. Our result yields an upper bound on the competitive ratio for the unit covering problem when the corresponding object is any convex object in $\mathbb{R}^d$.
{"title":"Online Geometric Covering and Piercing","authors":"Minati De, Saksham Jain, Sarat Varma Kallepalli, Satyam Singh","doi":"10.1007/s00453-024-01244-1","DOIUrl":"10.1007/s00453-024-01244-1","url":null,"abstract":"<div><p>We consider the online version of the piercing set problem, where geometric objects arrive one by one, and the online algorithm must maintain a valid piercing set for the already arrived objects by making irrevocable decisions. It is easy to observe that any deterministic algorithm solving this problem for intervals in <span>(mathbb {R})</span> has a competitive ratio of at least <span>(Omega (n))</span>. This paper considers the piercing set problem for similarly sized objects. We propose a deterministic online algorithm for similarly sized fat objects in <span>(mathbb {R}^d)</span>. For homothetic hypercubes in <span>(mathbb {R}^d)</span> with side length in the range [1, <i>k</i>], we propose a deterministic algorithm having a competitive ratio of at most <span>(3^dlceil log _2 krceil +2^d)</span>. In the end, we show deterministic lower bounds of the competitive ratio for similarly sized <span>(alpha )</span>-fat objects in <span>(mathbb {R}^2)</span> and homothetic hypercubes in <span>(mathbb {R}^d)</span>. Note that piercing translated copies of a convex object is equivalent to the unit covering problem, which is well-studied in the online setup. Surprisingly, no upper bound of the competitive ratio was known for the unit covering problem when the corresponding object is anything other than a ball or a hypercube. Our result yields an upper bound of the competitive ratio for the unit covering problem when the corresponding object is any convex object in <span>(mathbb {R}^d)</span>.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2739 - 2765"},"PeriodicalIF":0.9,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141255935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Slim Tree-Cut Width
Pub Date: 2024-06-01 · DOI: 10.1007/s00453-024-01241-4
Robert Ganian, Viktoriia Korchemna
Tree-cut width is a parameter that has been introduced as an attempt to obtain an analogue of treewidth for edge cuts. Unfortunately, in spite of its desirable structural properties, it turned out that tree-cut width falls short as an edge-cut based alternative to treewidth in algorithmic aspects. This has led to the very recent introduction of a simple edge-based parameter called edge-cut width [WG 2022], which has precisely the algorithmic applications one would expect from an analogue of treewidth for edge cuts, but does not have the desired structural properties. In this paper, we study a variant of tree-cut width obtained by changing the threshold for so-called thin nodes in tree-cut decompositions from 2 to 1. We show that this “slim tree-cut width” satisfies all the requirements of an edge-cut based analogue of treewidth, both structural and algorithmic, while being less restrictive than edge-cut width. Our results also include an alternative characterization of slim tree-cut width via an easy-to-use spanning-tree decomposition akin to the one used for edge-cut width, a characterization of slim tree-cut width in terms of forbidden immersions, and an approximation algorithm for computing the parameter.
{"title":"Slim Tree-Cut Width","authors":"Robert Ganian, Viktoriia Korchemna","doi":"10.1007/s00453-024-01241-4","DOIUrl":"10.1007/s00453-024-01241-4","url":null,"abstract":"<div><p>Tree-cut width is a parameter that has been introduced as an attempt to obtain an analogue of treewidth for edge cuts. Unfortunately, in spite of its desirable structural properties, it turned out that tree-cut width falls short as an edge-cut based alternative to treewidth in algorithmic aspects. This has led to the very recent introduction of a simple edge-based parameter called edge-cut width [WG 2022], which has precisely the algorithmic applications one would expect from an analogue of treewidth for edge cuts, but does not have the desired structural properties. In this paper, we study a variant of tree-cut width obtained by changing the threshold for so-called thin nodes in tree-cut decompositions from 2 to 1. We show that this “slim tree-cut width” satisfies all the requirements of an edge-cut based analogue of treewidth, both structural and algorithmic, while being less restrictive than edge-cut width. Our results also include an alternative characterization of slim tree-cut width via an easy-to-use spanning-tree decomposition akin to the one used for edge-cut width, a characterization of slim tree-cut width in terms of forbidden immersions as well as approximation algorithm for computing the parameter.\u0000</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 8","pages":"2714 - 2738"},"PeriodicalIF":0.9,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01241-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141189002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Approximating Long Cycle Above Dirac’s Guarantee
Pub Date: 2024-05-30 · DOI: 10.1007/s00453-024-01240-5
Fedor V. Fomin, Petr A. Golovach, Danil Sagunov, Kirill Simonov
Parameterization above (or below) a guarantee is a successful concept in parameterized algorithms. The idea is that many computational problems admit “natural” guarantees, which raises the algorithmic question of whether a better solution (above the guarantee) can be obtained efficiently. For example, for every boolean CNF formula on $m$ clauses, there is an assignment that satisfies at least $m/2$ clauses. How difficult is it to decide whether there is an assignment satisfying more than $m/2+k$ clauses? Or, if an $n$-vertex graph has a perfect matching, then its vertex cover is at least $n/2$. Is there a vertex cover of size at least $n/2+k$ for some $k\ge 1$, and how difficult is it to find such a vertex cover? The above-guarantee paradigm has led to several exciting discoveries in the areas of parameterized algorithms and kernelization. We argue that this paradigm could bring forth fresh perspectives on well-studied problems in approximation algorithms. Our example is the longest cycle problem. One of the oldest results in extremal combinatorics is Dirac’s celebrated theorem from 1952. Dirac’s theorem provides the following guarantee on the length of the longest cycle: for every 2-connected $n$-vertex graph $G$ with minimum degree $\delta(G)\le n/2$, the length of a longest cycle $L$ is at least $2\delta(G)$. Thus the “essential” part in finding the longest cycle is in approximating the “offset” $k = L - 2\delta(G)$. The main result of this paper is the above-guarantee approximation theorem for $k$. Informally, the theorem says that approximating the offset $k$ is not harder than approximating the total length $L$ of a cycle. In other words, for any (reasonably well-behaved) function $f$, a polynomial-time algorithm constructing a cycle of length $f(L)$ in an undirected graph with a cycle of length $L$ yields a polynomial-time algorithm constructing a cycle of length $2\delta(G)+\Omega(f(k))$.
{"title":"Approximating Long Cycle Above Dirac’s Guarantee","authors":"Fedor V. Fomin, Petr A. Golovach, Danil Sagunov, Kirill Simonov","doi":"10.1007/s00453-024-01240-5","DOIUrl":"10.1007/s00453-024-01240-5","url":null,"abstract":"<div><p>Parameterization above (or below) a guarantee is a successful concept in parameterized algorithms. The idea is that many computational problems admit “natural” guarantees bringing to algorithmic questions whether a better solution (above the guarantee) could be obtained efficiently. For example, for every boolean CNF formula on <i>m</i> clauses, there is an assignment that satisfies at least <i>m</i>/2 clauses. How difficult is it to decide whether there is an assignment satisfying more than <span>(m/2 +k)</span> clauses? Or, if an <i>n</i>-vertex graph has a perfect matching, then its vertex cover is at least <i>n</i>/2. Is there a vertex cover of size at least <span>(n/2 +k)</span> for some <span>(kge 1)</span> and how difficult is it to find such a vertex cover? The above guarantee paradigm has led to several exciting discoveries in the areas of parameterized algorithms and kernelization. We argue that this paradigm could bring forth fresh perspectives on well-studied problems in approximation algorithms. Our example is the longest cycle problem. One of the oldest results in extremal combinatorics is the celebrated Dirac’s theorem from 1952. Dirac’s theorem provides the following guarantee on the length of the longest cycle: for every 2-connected <i>n</i>-vertex graph <i>G</i> with minimum degree <span>(delta (G)le n/2)</span>, the length of a longest cycle <i>L</i> is at least <span>(2delta (G))</span>. Thus the “essential” part in finding the longest cycle is in approximating the “offset” <span>(k = L - 2 delta (G))</span>. The main result of this paper is the above-guarantee approximation theorem for <i>k</i>. Informally, the theorem says that approximating the offset <i>k</i> is not harder than approximating the total length <i>L</i> of a cycle. In other words, for any (reasonably well-behaved) function <i>f</i>, a polynomial time algorithm constructing a cycle of length <i>f</i>(<i>L</i>) in an undirected graph with a cycle of length <i>L</i>, yields a polynomial time algorithm constructing a cycle of length <span>(2delta (G)+Omega (f(k)))</span>.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 8","pages":"2676 - 2713"},"PeriodicalIF":0.9,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01240-5.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141189001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New Algorithms for Steiner Tree Reoptimization
Pub Date: 2024-05-29 · DOI: 10.1007/s00453-024-01243-2
Davide Bilò
Reoptimization is a setting in which we are given a good approximate solution of an optimization problem instance and a local modification that slightly changes the instance. The main goal is that of finding a good approximate solution of the modified instance. We investigate one of the most studied scenarios in reoptimization, known as Steiner tree reoptimization. Steiner tree reoptimization is a collection of strongly $\textsf{NP}$-hard optimization problems that are defined on top of the classical Steiner tree problem and for which several constant-factor approximation algorithms have been designed in the last decades. In this paper we improve upon all these results by developing a novel technique that allows us to design polynomial-time approximation schemes. Remarkably, prior to this paper, no approximation algorithm better than recomputing a solution from scratch was known for the elusive scenario in which the cost of a single edge decreases. Our results are best possible since none of the problems addressed in this paper admits a fully polynomial-time approximation scheme, unless $\textsf{P}=\textsf{NP}$.
{"title":"New Algorithms for Steiner Tree Reoptimization","authors":"Davide Bilò","doi":"10.1007/s00453-024-01243-2","DOIUrl":"10.1007/s00453-024-01243-2","url":null,"abstract":"<div><p><i>Reoptimization</i> is a setting in which we are given a good approximate solution of an optimization problem instance and a local modification that slightly changes the instance. The main goal is that of finding a good approximate solution of the modified instance. We investigate one of the most studied scenarios in reoptimization known as <i>Steiner tree reoptimization</i>. Steiner tree reoptimization is a collection of strongly <span>(textsf {NP})</span>-hard optimization problems that are defined on top of the classical Steiner tree problem and for which several constant-factor approximation algorithms have been designed in the last decades. In this paper we improve upon all these results by developing a novel technique that allows us to design <i>polynomial-time approximation schemes</i>. Remarkably, prior to this paper, no approximation algorithm better than recomputing a solution from scratch was known for the elusive scenario in which the cost of a single edge decreases. Our results are best possible since none of the problems addressed in this paper admits a fully polynomial-time approximation scheme, unless <span>(textsf {P}=textsf {NP})</span></p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 8","pages":"2652 - 2675"},"PeriodicalIF":0.9,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01243-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141170110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}