
Latest Publications in Algorithmica

Tight Runtime Bounds for Static Unary Unbiased Evolutionary Algorithms on Linear Functions
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-07-22 DOI: 10.1007/s00453-024-01258-9
Carola Doerr, Duri Andrea Janett, Johannes Lengler

In a seminal paper in 2013, Witt showed that the (1+1) Evolutionary Algorithm with standard bit mutation needs time $(1+o(1))\, n \ln n / p_1$ to find the optimum of any linear function, as long as the probability $p_1$ to flip exactly one bit is $\Theta(1)$. In this paper we investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator. This situation is notably different, since the stochastic domination argument used for the lower bound by Witt no longer holds. In particular, starting closer to the optimum is not necessarily an advantage, and OneMax is no longer the easiest function for arbitrary starting positions. Nevertheless, we show that Witt's result carries over if $p_1$ is not too small, with different constraints for upper and lower bounds, and if the number of flipped bits has bounded expectation $\chi$. Notably, this includes some of the heavy-tail mutation operators used in fast genetic algorithms, but not all of them. We also give examples showing that algorithms with unbounded $\chi$ have qualitatively different trajectories close to the optimum.
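For orientation, the sketch below shows the baseline setting of Witt's result: the (1+1) EA with standard bit mutation maximizing a linear function with positive weights. The mutation rate 1/n, the integer weights, and the iteration budget are illustrative choices, and the snippet is not the unary unbiased framework analyzed in the paper. With rate 1/n, the probability of flipping exactly one bit is $p_1 = (1-1/n)^{n-1} \approx 1/e = \Theta(1)$, so the expected hitting time is about $e\, n \ln n$.

```python
import random

def one_plus_one_ea(weights, max_iters=10**7, seed=42):
    """(1+1) EA with standard bit mutation, maximizing the linear function
    f(x) = sum_i weights[i]*x[i] with positive weights (optimum: all-ones).
    Returns the number of iterations until the optimum is hit."""
    rng = random.Random(seed)
    n = len(weights)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(w for w, b in zip(weights, x) if b)
    optimum = sum(weights)
    for t in range(1, max_iters + 1):
        # standard bit mutation: flip each bit independently with probability 1/n
        y = [b ^ int(rng.random() < 1.0 / n) for b in x]
        fy = sum(w for w, b in zip(weights, y) if b)
        if fy >= fx:                      # accept if not worse
            x, fx = y, fy
        if fx == optimum:
            return t
    return None

rng = random.Random(1)
weights = [rng.randint(1, 10) for _ in range(100)]
print(one_plus_one_ea(weights))  # on the order of e * n * ln(n), roughly 1250 for n = 100
```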

{"title":"Tight Runtime Bounds for Static Unary Unbiased Evolutionary Algorithms on Linear Functions","authors":"Carola Doerr,&nbsp;Duri Andrea Janett,&nbsp;Johannes Lengler","doi":"10.1007/s00453-024-01258-9","DOIUrl":"10.1007/s00453-024-01258-9","url":null,"abstract":"<div><p>In a seminal paper in 2013, Witt showed that the (1+1) Evolutionary Algorithm with standard bit mutation needs time <span>((1+o(1))n ln n/p_1)</span> to find the optimum of any linear function, as long as the probability <span>(p_1)</span> to flip exactly one bit is <span>(Theta (1))</span>. In this paper we investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator. This situation is notably different, since the stochastic domination argument used for the lower bound by Witt no longer holds. In particular, starting closer to the optimum is not necessarily an advantage, and OneMax is no longer the easiest function for arbitrary starting positions. Nevertheless, we show that Witt’s result carries over if <span>(p_1)</span> is not too small, with different constraints for upper and lower bounds, and if the number of flipped bits has bounded expectation <span>(chi )</span>. Notably, this includes some of the heavy-tail mutation operators used in fast genetic algorithms, but not all of them. We also give examples showing that algorithms with unbounded <span>(chi )</span> have qualitatively different trajectories close to the optimum.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 10","pages":"3115 - 3152"},"PeriodicalIF":0.9,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-07-18 DOI: 10.1007/s00453-024-01252-1
Spencer Compton, Slobodan Mitrović, Ronitt Rubinfeld

Interval scheduling is a basic algorithmic problem and a classical task in combinatorial optimization. We develop techniques for partitioning and grouping jobs based on their starting/ending times, enabling us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in a dynamic setting produces several new results. For $(1+\varepsilon)$-approximation of job scheduling of n jobs on a single machine, we develop a fully dynamic algorithm with $O(\frac{\log n}{\varepsilon})$ update and $O(\log n)$ query worst-case time. Our techniques are also applicable in a setting where jobs have weights. We design a fully dynamic deterministic algorithm whose worst-case update and query times are $\text{poly}(\log n, \frac{1}{\varepsilon})$. This is the first algorithm that maintains a $(1+\varepsilon)$-approximation of the maximum independent set of a collection of weighted intervals in $\text{poly}(\log n, \frac{1}{\varepsilon})$ time updates/queries. This is an exponential improvement in $1/\varepsilon$ over the running time of an algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020]. Our approach also removes all dependence on the values of the jobs' starting/ending times and weights.
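For context, the underlying static objective (a maximum-cardinality set of pairwise disjoint jobs on one machine) can be computed offline by the classic earliest-finish-time greedy. The sketch below is only that textbook baseline, under the assumption of half-open intervals, and not the dynamic $(1+\varepsilon)$-approximation scheme of the paper.

```python
def max_disjoint_intervals(jobs):
    """Classic greedy: sort by ending time and repeatedly take the job that
    finishes first among those starting after the last accepted one.
    Intervals are treated as half-open [start, end). Offline baseline only."""
    accepted = []
    last_end = float("-inf")
    for start, end in sorted(jobs, key=lambda j: j[1]):
        if start >= last_end:
            accepted.append((start, end))
            last_end = end
    return accepted

jobs = [(0, 3), (2, 5), (4, 7), (1, 8), (6, 9)]
print(max_disjoint_intervals(jobs))   # [(0, 3), (4, 7)]
```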

{"title":"New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling","authors":"Spencer Compton,&nbsp;Slobodan Mitrović,&nbsp;Ronitt Rubinfeld","doi":"10.1007/s00453-024-01252-1","DOIUrl":"10.1007/s00453-024-01252-1","url":null,"abstract":"<div><p>Interval scheduling is a basic algorithmic problem and a classical task in combinatorial optimization. We develop techniques for partitioning and grouping jobs based on their starting/ending times, enabling us to view an instance of interval scheduling on <i>many</i> jobs as a union of multiple interval scheduling instances, each containing only <i>a few</i> jobs. Instantiating these techniques in a dynamic setting produces several new results. For <span>((1+varepsilon ))</span>-approximation of job scheduling of <i>n</i> jobs on a single machine, we develop a fully dynamic algorithm with <span>(O(nicefrac {log {n}}{varepsilon }))</span> update and <span>(O(log {n}))</span> query worst-case time. Our techniques are also applicable in a setting where jobs have weights. We design a fully dynamic <i>deterministic</i> algorithm whose worst-case update and query times are <span>(text {poly} (log n,frac{1}{varepsilon }))</span>. This is <i>the first</i> algorithm that maintains a <span>((1+varepsilon ))</span>-approximation of the maximum independent set of a collection of weighted intervals in <span>(text {poly} (log n,frac{1}{varepsilon }))</span> time updates/queries. This is an exponential improvement in <span>(1/varepsilon )</span> over the running time of an algorithm of Henzinger, Neumann, and Wiese  [SoCG, 2020]. Our approach also removes all dependence on the values of the jobs’ starting/ending times and weights.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2997 - 3026"},"PeriodicalIF":0.9,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Non-crossing Hamiltonian Paths and Cycles in Output-Polynomial Time
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-07-18 DOI: 10.1007/s00453-024-01255-y
David Eppstein

We show that, for planar point sets, the number of non-crossing Hamiltonian paths is polynomially bounded in the number of non-crossing paths, and the number of non-crossing Hamiltonian cycles (polygonalizations) is polynomially bounded in the number of surrounding cycles. As a consequence, we can list the non-crossing Hamiltonian paths or the polygonalizations, in time polynomial in the output size, by filtering the output of simple backtracking algorithms for non-crossing paths or surrounding cycles respectively. We do not assume that the points are in general position. To prove these results we relate the numbers of non-crossing structures to two easily-computed parameters of the point set: the minimum number of points whose removal results in a collinear set, and the number of points interior to the convex hull. These relations also lead to polynomial-time approximation algorithms for the numbers of structures of all four types, accurate to within a constant factor of the logarithm of these numbers.

{"title":"Non-crossing Hamiltonian Paths and Cycles in Output-Polynomial Time","authors":"David Eppstein","doi":"10.1007/s00453-024-01255-y","DOIUrl":"10.1007/s00453-024-01255-y","url":null,"abstract":"<div><p>We show that, for planar point sets, the number of non-crossing Hamiltonian paths is polynomially bounded in the number of non-crossing paths, and the number of non-crossing Hamiltonian cycles (polygonalizations) is polynomially bounded in the number of surrounding cycles. As a consequence, we can list the non-crossing Hamiltonian paths or the polygonalizations, in time polynomial in the output size, by filtering the output of simple backtracking algorithms for non-crossing paths or surrounding cycles respectively. We do not assume that the points are in general position. To prove these results we relate the numbers of non-crossing structures to two easily-computed parameters of the point set: the minimum number of points whose removal results in a collinear set, and the number of points interior to the convex hull. These relations also lead to polynomial-time approximation algorithms for the numbers of structures of all four types, accurate to within a constant factor of the logarithm of these numbers.\u0000</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"3027 - 3053"},"PeriodicalIF":0.9,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01255-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On a Traveling Salesman Problem for Points in the Unit Cube
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-07-18 DOI: 10.1007/s00453-024-01257-w
József Balogh, Felix Christian Clemen, Adrian Dumitrescu

Let X be an n-element point set in the k-dimensional unit cube $[0,1]^k$ where $k \ge 2$. According to an old result of Bollobás and Meir (Oper Res Lett 11:19–21, 1992), there exists a cycle (tour) $x_1, x_2, \ldots, x_n$ through the n points, such that $\left( \sum_{i=1}^n |x_i - x_{i+1}|^k \right)^{1/k} \le c_k$, where $|x-y|$ is the Euclidean distance between x and y, $c_k$ is an absolute constant that depends only on k, and $x_{n+1} \equiv x_1$. From the other direction, for every $k \ge 2$ and $n \ge 2$, there exist n points in $[0,1]^k$ such that their shortest tour satisfies $\left( \sum_{i=1}^n |x_i - x_{i+1}|^k \right)^{1/k} = 2^{1/k} \cdot \sqrt{k}$. For the plane, the best constant is $c_2 = 2$ and this is the only exact value known. Bollobás and Meir showed that one can take $c_k = 9 \left( \frac{2}{3} \right)^{1/k} \cdot \sqrt{k}$ for every $k \ge 3$ and conjectured that the best constant is $c_k = 2^{1/k} \cdot \sqrt{k}$ for every $k \ge 2$. Here we significantly improve the upper bound and show that one can take $c_k = 3 \sqrt{5} \left( \frac{2}{3} \right)^{1/k} \cdot \sqrt{k}$ or $c_k = 2.91 \sqrt{k}\,(1+o_k(1))$. Our bounds are constructive. We also show that $c_3 \ge 2^{7/6}$, which disproves the conjecture for $k=3$. Connections to matching problems, power assignment problems, and related problems, including algorithms, are discussed in this context. A slightly revised version of the Bollobás–Meir conjecture is proposed.
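To make the measure concrete, the snippet below (an illustrative check, not taken from the paper) evaluates $\left(\sum_{i=1}^n |x_i - x_{i+1}|^k\right)^{1/k}$ for a given cyclic tour and confirms that for two antipodal corners of $[0,1]^k$ the value is exactly $2^{1/k}\cdot\sqrt{k}$, matching the lower-bound example quoted above for $n=2$.

```python
import math

def tour_measure(points, k):
    """( sum_i |x_i - x_{i+1}|^k )^(1/k) over the cyclic tour x_1,...,x_n,x_1,
    with |.| the Euclidean distance in [0,1]^k."""
    n = len(points)
    total = 0.0
    for i in range(n):
        p, q = points[i], points[(i + 1) % n]
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
        total += dist ** k
    return total ** (1.0 / k)

# Two antipodal corners of [0,1]^k: each of the two tour edges has length sqrt(k),
# so the measure is (2 * k^(k/2))^(1/k) = 2^(1/k) * sqrt(k).
for k in (2, 3, 4):
    pts = [(0.0,) * k, (1.0,) * k]
    assert abs(tour_measure(pts, k) - 2 ** (1 / k) * math.sqrt(k)) < 1e-9
    print(k, tour_measure(pts, k))
```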

{"title":"On a Traveling Salesman Problem for Points in the Unit Cube","authors":"József Balogh,&nbsp;Felix Christian Clemen,&nbsp;Adrian Dumitrescu","doi":"10.1007/s00453-024-01257-w","DOIUrl":"10.1007/s00453-024-01257-w","url":null,"abstract":"<div><p>Let <i>X</i> be an <i>n</i>-element point set in the <i>k</i>-dimensional unit cube <span>([0,1]^k)</span> where <span>(k ge 2)</span>. According to an old result of Bollobás and Meir (Oper Res Lett 11:19–21, 1992) , there exists a cycle (tour) <span>(x_1, x_2, ldots , x_n)</span> through the <i>n</i> points, such that <span>(left( sum _{i=1}^n |x_i - x_{i+1}|^k right) ^{1/k} le c_k)</span>, where <span>(|x-y|)</span> is the Euclidean distance between <i>x</i> and <i>y</i>, and <span>(c_k)</span> is an absolute constant that depends only on <i>k</i>, where <span>(x_{n+1} equiv x_1)</span>. From the other direction, for every <span>(k ge 2)</span> and <span>(n ge 2)</span>, there exist <i>n</i> points in <span>([0,1]^k)</span>, such that their shortest tour satisfies <span>(left( sum _{i=1}^n |x_i - x_{i+1}|^k right) ^{1/k} = 2^{1/k} cdot sqrt{k})</span>. For the plane, the best constant is <span>(c_2=2)</span> and this is the only exact value known. Bollobás and Meir showed that one can take <span>(c_k = 9 left( frac{2}{3} right) ^{1/k} cdot sqrt{k})</span> for every <span>(k ge 3)</span> and conjectured that the best constant is <span>(c_k = 2^{1/k} cdot sqrt{k})</span>, for every <span>(k ge 2)</span>. Here we significantly improve the upper bound and show that one can take <span>(c_k = 3 sqrt{5} left( frac{2}{3} right) ^{1/k} cdot sqrt{k})</span> or <span>(c_k = 2.91 sqrt{k} (1+o_k(1)))</span>. Our bounds are constructive. We also show that <span>(c_3 ge 2^{7/6})</span>, which disproves the conjecture for <span>(k=3)</span>. Connections to matching problems, power assignment problems, related problems, including algorithms, are discussed in this context. A slightly revised version of the Bollobás–Meir conjecture is proposed.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"3054 - 3078"},"PeriodicalIF":0.9,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01257-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142412318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sublinear Algorithms in T-Interval Dynamic Networks
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-07-12 DOI: 10.1007/s00453-024-01250-3
Irvan Jahja, Haifeng Yu

We consider standard T-interval dynamic networks, under the synchronous timing model and the broadcast CONGEST model. In a T-interval dynamic network, the set of nodes is always fixed and there are no node failures. The edges in the network are always undirected, but the set of edges in the topology may change arbitrarily from round to round, as determined by some adversary and subject to the following constraint: for every T consecutive rounds, the topologies in those rounds must contain a common connected spanning subgraph. Let $H_r$ be the maximum (in terms of number of edges) such subgraph for rounds r through $r+T-1$. We define the backbone diameter d of a T-interval dynamic network to be the maximum diameter of all such $H_r$'s, for $r \ge 1$. We use n to denote the number of nodes in the network. Within such a context, we consider a range of fundamental distributed computing problems including Count/Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood. Existing algorithms for these problems all have time complexity of $\Omega(n)$ rounds, even for $T = \infty$ and even when d is as small as O(1). This paper presents a novel approach/framework, based on the idea of massively parallel aggregation. Following this approach, we develop a novel deterministic Count algorithm with $O(d^3 \log^2 n)$ complexity, for T-interval dynamic networks with $T \ge c \cdot d^2 \log^2 n$. Here c is a (sufficiently large) constant independent of d, n, and T. To our knowledge, our algorithm is the very first such algorithm whose complexity does not contain a $\Theta(n)$ term. This paper further develops novel algorithms for solving Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood, while incurring $O(d^3\, \text{polylog}(n))$ complexity. Again, for all these problems, our algorithms are the first ones whose time complexity does not contain a $\Theta(n)$ term.

{"title":"Sublinear Algorithms in T-Interval Dynamic Networks","authors":"Irvan Jahja,&nbsp;Haifeng Yu","doi":"10.1007/s00453-024-01250-3","DOIUrl":"10.1007/s00453-024-01250-3","url":null,"abstract":"<div><p>We consider standard <i>T</i>-<i>interval dynamic networks</i>, under the synchronous timing model and the broadcast CONGEST model. In a <i>T</i>-<i>interval dynamic network</i>, the set of nodes is always fixed and there are no node failures. The edges in the network are always undirected, but the set of edges in the topology may change arbitrarily from round to round, as determined by some <i>adversary</i> and subject to the following constraint: For every <i>T</i> consecutive rounds, the topologies in those rounds must contain a common connected spanning subgraph. Let <span>(H_r)</span> to be the maximum (in terms of number of edges) such subgraph for round <i>r</i> through <span>(r+T-1)</span>. We define the <i>backbone diameter</i> <i>d</i> of a <i>T</i>-interval dynamic network to be the maximum diameter of all such <span>(H_r)</span>’s, for <span>(rge 1)</span>. We use <i>n</i> to denote the number of nodes in the network. Within such a context, we consider a range of fundamental distributed computing problems including <span>Count</span>/<span>Max</span>/<span>Median</span>/<span>Sum</span>/<span>LeaderElect</span>/<span>Consensus</span>/<span>ConfirmedFlood</span>. Existing algorithms for these problems all have time complexity of <span>(Omega (n))</span> rounds, even for <span>(T=infty )</span> and even when <i>d</i> is as small as <i>O</i>(1). This paper presents a novel approach/framework, based on the idea of <i>massively parallel aggregation</i>. Following this approach, we develop a novel deterministic <span>Count</span> algorithm with <span>(O(d^3 log ^2 n))</span> complexity, for <i>T</i>-interval dynamic networks with <span>(T ge ccdot d^2 log ^2n)</span>. Here <i>c</i> is a (sufficiently large) constant independent of <i>d</i>, <i>n</i>, and <i>T</i>. To our knowledge, our algorithm is the very first such algorithm whose complexity does not contain a <span>(Theta (n))</span> term. This paper further develops novel algorithms for solving <span>Max</span>/<span>Median</span>/<span>Sum</span>/<span>LeaderElect</span>/<span>Consensus</span>/<span>ConfirmedFlood</span>, while incurring <span>(O(d^3 text{ polylog }(n)))</span> complexity. Again, for all these problems, our algorithms are the first ones whose time complexity does not contain a <span>(Theta (n))</span> term.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2959 - 2996"},"PeriodicalIF":0.9,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141612701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Stagnation Detection in Highly Multimodal Fitness Landscapes
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-07-02 DOI: 10.1007/s00453-024-01249-w
Amirhossein Rajabi, Carsten Witt

Stagnation detection has been proposed as a mechanism for randomized search heuristics to escape from local optima by automatically increasing the size of the neighborhood to find the so-called gap size, i.e., the distance to the next improvement. Its usefulness has mostly been considered in simple multimodal landscapes with few local optima that could be crossed one after another. In multimodal landscapes with a more complex location of optima of similar gap size, stagnation detection suffers from the fact that the neighborhood size is frequently reset to 1 without using gap sizes that were promising in the past. In this paper, we investigate a new mechanism called radius memory which can be added to stagnation detection to control the search radius more carefully by giving preference to values that were successful in the past. We implement this idea in an algorithm called SD-RLS$^{\text{m}}$ and show, compared to previous variants of stagnation detection, that it yields speed-ups for linear functions under uniform constraints and the minimum spanning tree problem. Moreover, its running time does not significantly deteriorate on unimodal functions and a generalization of the Jump benchmark. Finally, we present experimental results carried out to study SD-RLS$^{\text{m}}$ and compare it with other algorithms.
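The following is a schematic sketch of the general idea described above: a randomized local search that escalates its search radius after a budget of unsuccessful steps and, with a simple radius memory, retries the last successful radius before escalating further. The failure threshold, the memory rule, and the Jump-style test function are illustrative assumptions; this is not the authors' SD-RLS$^{\text{m}}$.

```python
import math
import random

def rls_with_stagnation_detection(f, n, optimum, max_iters=200000, seed=0):
    """Randomized local search that flips exactly `radius` bits per step.
    Stagnation detection: if a radius fails for about n^radius * ln(n) steps,
    it is considered exhausted.  Radius memory: after such a failure phase,
    retry the last radius that produced an improvement before trying larger
    ones.  Schematic illustration only."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    radius, memory, fails = 1, 1, 0
    for t in range(1, max_iters + 1):
        y = x[:]
        for i in rng.sample(range(n), radius):      # flip exactly `radius` bits
            y[i] ^= 1
        fy = f(y)
        if fy > fx:                                 # strict improvement
            x, fx = y, fy
            memory, radius, fails = radius, 1, 0    # remember the radius that worked
        else:
            fails += 1
            if fails > n ** radius * math.log(n):   # radius looks exhausted
                radius = memory if radius < memory else radius + 1
                fails = 0
        if fx == optimum:
            return t
    return None

def jump3(x):
    """Jump-like function with a gap of width 3 below the all-ones optimum."""
    n, ones = len(x), sum(x)
    return ones if ones <= n - 3 or ones == n else n - ones

print(rls_with_stagnation_detection(jump3, n=30, optimum=30))
```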

{"title":"Stagnation Detection in Highly Multimodal Fitness Landscapes","authors":"Amirhossein Rajabi,&nbsp;Carsten Witt","doi":"10.1007/s00453-024-01249-w","DOIUrl":"10.1007/s00453-024-01249-w","url":null,"abstract":"<div><p>Stagnation detection has been proposed as a mechanism for randomized search heuristics to escape from local optima by automatically increasing the size of the neighborhood to find the so-called gap size, i. e., the distance to the next improvement. Its usefulness has mostly been considered in simple multimodal landscapes with few local optima that could be crossed one after another. In multimodal landscapes with a more complex location of optima of similar gap size, stagnation detection suffers from the fact that the neighborhood size is frequently reset to  1 without using gap sizes that were promising in the past. In this paper, we investigate a new mechanism called <i>radius memory</i> which can be added to stagnation detection to control the search radius more carefully by giving preference to values that were successful in the past. We implement this idea in an algorithm called SD-RLS<span>(^{text {m}})</span> and show compared to previous variants of stagnation detection that it yields speed-ups for linear functions under uniform constraints and the minimum spanning tree problem. Moreover, its running time does not significantly deteriorate on unimodal functions and a generalization of the <span>Jump</span> benchmark. Finally, we present experimental results carried out to study SD-RLS<span>(^{text {m}})</span> and compare it with other algorithms.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2929 - 2958"},"PeriodicalIF":0.9,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01249-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parameterized Complexity of Streaming Diameter and Connectivity Problems
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-06-19 DOI: 10.1007/s00453-024-01246-z
Jelle J. Oostveen, Erik Jan van Leeuwen

We initiate the investigation of the parameterized complexity of Diameter and Connectivity in the streaming paradigm. On the positive end, we show that knowing a vertex cover of size k allows for algorithms in the Adjacency List (AL) streaming model whose number of passes is constant and memory is $\mathcal{O}(\log n)$ for any fixed k. Underlying these algorithms is a method to execute a breadth-first search in $\mathcal{O}(k)$ passes and $\mathcal{O}(k \log n)$ bits of memory. On the negative end, we show that many other parameters lead to lower bounds in the AL model, where $\Omega(n/p)$ bits of memory is needed for any p-pass algorithm even for constant parameter values. In particular, this holds for graphs with a known modulator (deletion set) of constant size to a graph that has no induced subgraph isomorphic to a fixed graph H, for most H. For some cases, we can also show one-pass, $\Omega(n \log n)$ bits of memory lower bounds. We also prove a much stronger $\Omega(n^2/p)$ lower bound for Diameter on bipartite graphs. Finally, using the insights we developed into streaming parameterized graph exploration algorithms, we show a new streaming kernelization algorithm for computing a vertex cover of size k. This yields a kernel of 2k vertices (with $\mathcal{O}(k^2)$ edges) produced as a stream in $\text{poly}(k)$ passes and only $\mathcal{O}(k \log n)$ bits of memory.

{"title":"Parameterized Complexity of Streaming Diameter and Connectivity Problems","authors":"Jelle J. Oostveen,&nbsp;Erik Jan van Leeuwen","doi":"10.1007/s00453-024-01246-z","DOIUrl":"10.1007/s00453-024-01246-z","url":null,"abstract":"<div><p>We initiate the investigation of the parameterized complexity of <span>Diameter</span> and <span>Connectivity</span> in the streaming paradigm. On the positive end, we show that knowing a vertex cover of size <i>k</i> allows for algorithms in the Adjacency List (AL) streaming model whose number of passes is constant and memory is <span>(mathcal {O}(log n))</span> for any fixed <i>k</i>. Underlying these algorithms is a method to execute a breadth-first search in <span>(mathcal {O}(k))</span> passes and <span>(mathcal {O}(k log n))</span> bits of memory. On the negative end, we show that many other parameters lead to lower bounds in the AL model, where <span>(Omega (n/p))</span> bits of memory is needed for any <i>p</i>-pass algorithm even for constant parameter values. In particular, this holds for graphs with a known modulator (deletion set) of constant size to a graph that has no induced subgraph isomorphic to a fixed graph <i>H</i>, for most <i>H</i>. For some cases, we can also show one-pass, <span>(Omega (n log n))</span> bits of memory lower bounds. We also prove a much stronger <span>(Omega (n^2/p))</span> lower bound for <span>Diameter</span> on bipartite graphs. Finally, using the insights we developed into streaming parameterized graph exploration algorithms, we show a new streaming kernelization algorithm for computing a vertex cover of size <i>k</i>. This yields a kernel of 2<i>k</i> vertices (with <span>(mathcal {O}(k^2))</span> edges) produced as a stream in <span>(text {poly}(k))</span> passes and only <span>(mathcal {O}(k log n))</span> bits of memory.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2885 - 2928"},"PeriodicalIF":0.9,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01246-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Approximation Algorithms for the Two-Watchman Route in a Simple Polygon
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-06-19 DOI: 10.1007/s00453-024-01245-0
Bengt J. Nilsson, Eli Packer

The two-watchman route problem is that of computing a pair of closed tours in an environment so that the two tours together see the whole environment and some length measure on the two tours is minimized. Two standard measures are the minmax measure, where we seek the pair of tours whose longer tour is as short as possible, and the minsum measure, where we seek the pair of tours whose total length is as small as possible. It is known that computing a minmax two-watchman route is NP-hard for simple rectilinear polygons and thus also for simple polygons. Also, any c-approximation algorithm for the minmax two-watchman route is automatically a 2c-approximation algorithm for the minsum two-watchman route. We exhibit two constant-factor approximation algorithms for computing minmax two-watchman routes in simple polygons with approximation factors 5.969 and 11.939, having running times $O(n^8)$ and $O(n^4)$ respectively, where n is the number of vertices of the polygon. We also use the same techniques to obtain a 6.922-approximation for the fixed two-watchman route problem running in $O(n^2)$ time, i.e., when two starting points of the two tours are given as input.

{"title":"Approximation Algorithms for the Two-Watchman Route in a Simple Polygon","authors":"Bengt J. Nilsson,&nbsp;Eli Packer","doi":"10.1007/s00453-024-01245-0","DOIUrl":"10.1007/s00453-024-01245-0","url":null,"abstract":"<div><p>The <i>two-watchman route problem</i> is that of computing a pair of closed tours in an environment so that the two tours together see the whole environment and some length measure on the two tours is minimized. Two standard measures are: the minmax measure, where we want the tours where the longest of them has smallest length, and the minsum measure, where we want the tours for which the sum of their lengths is the smallest. It is known that computing a minmax two-watchman route is NP-hard for simple rectilinear polygons and thus also for simple polygons. Also, any <i>c</i>-approximation algorithm for the minmax two-watchman route is automatically a 2<i>c</i>-approximation algorithm for the minsum two-watchman route. We exhibit two constant factor approximation algorithms for computing minmax two-watchman routes in simple polygons with approximation factors 5.969 and 11.939, having running times <span>(O(n^8))</span> and <span>(O(n^4))</span> respectively, where <i>n</i> is the number of vertices of the polygon. We also use the same techniques to obtain a 6.922-approximation for the <i>fixed two-watchman route problem</i> running in <span>(O(n^2))</span> time, i.e., when two starting points of the two tours are given as input.\u0000</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2845 - 2884"},"PeriodicalIF":0.9,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01245-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-06-17 DOI: 10.1007/s00453-024-01247-y
David G. Harris

Karppa and Kaski (in: Proceedings of the 30th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2019) proposed a novel "broken" or "opportunistic" matrix multiplication algorithm, based on a variant of Strassen's algorithm, and used it to develop new algorithms for Boolean matrix multiplication, among other tasks. Their algorithm can compute Boolean matrix multiplication in $O(n^{2.778})$ time. While asymptotically faster matrix multiplication algorithms exist, most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken multiplication algorithm to approximately compute matrix multiplication, either for real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, our algorithm has runtime $O(n^{2.763})$, a slight improvement over the Karppa–Kaski algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, we also estimate the concrete runtime of our algorithm for some large-scale sample problems. It appears that for these parameters, further optimizations are still needed to make our algorithm competitive.
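For reference, the sketch below is the classical Strassen recursion for square matrices whose side length is a power of two, i.e., the scheme that the "broken"/"opportunistic" variant builds on. It is only the textbook baseline (with an assumed cutoff to plain multiplication), not the Karppa–Kaski algorithm and not the sampling construction described above.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Textbook Strassen recursion for n x n matrices with n a power of two.
    Below `cutoff`, fall back to ordinary multiplication. Baseline sketch only."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A, B = rng.random((256, 256)), rng.random((256, 256))
assert np.allclose(strassen(A, B), A @ B)   # 7 recursive products per level
```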

{"title":"Algorithms for Matrix Multiplication via Sampling and Opportunistic Matrix Multiplication","authors":"David G. Harris","doi":"10.1007/s00453-024-01247-y","DOIUrl":"10.1007/s00453-024-01247-y","url":null,"abstract":"<div><p>As proposed by Karppa and Kaski (in: Proceedings 30th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2019) a novel “broken\" or \"opportunistic\" matrix multiplication algorithm, based on a variant of Strassen’s algorithm, and used this to develop new algorithms for Boolean matrix multiplication, among other tasks. Their algorithm can compute Boolean matrix multiplication in <span>(O(n^{2.778}))</span> time. While asymptotically faster matrix multiplication algorithms exist, most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken multiplication algorithm to approximately compute matrix multiplication, either for real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, our algorithm has runtime <span>(O(n^{2.763}))</span>, a slight improvement over the Karppa–Kaski algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, we also estimate the concrete runtime for our algorithm for some large-scale sample problems. It appears that for these parameters, further optimizations are still needed to make our algorithm competitive.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2822 - 2844"},"PeriodicalIF":0.9,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Online Unit Profit Knapsack with Predictions
IF 0.9 CAS Tier 4, Computer Science Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-06-13 DOI: 10.1007/s00453-024-01239-y
Joan Boyar, Lene M. Favrholdt, Kim S. Larsen

A variant of the online knapsack problem is considered in the setting of predictions. In Unit Profit Knapsack, the items have unit profit, i.e., the goal is to pack as many items as possible. For Online Unit Profit Knapsack, the competitive ratio is unbounded. In contrast, it is easy to find an optimal solution offline: pack as many of the smallest items as possible into the knapsack. The prediction available to the online algorithm is the average size of those smallest items that fit in the knapsack. For the prediction error in this hard online problem, we use the ratio $r = \frac{a}{\hat{a}}$, where a is the actual value for this average size and $\hat{a}$ is the prediction. We give an algorithm which is $\frac{e-1}{e}$-competitive if $r=1$, and this is best possible among online algorithms knowing a and nothing else. More generally, the algorithm has a competitive ratio of $\frac{e-1}{e} r$ if $r \le 1$, and $\frac{e-r}{e} r$ if $1 \le r < e$. Any algorithm with a better competitive ratio for some $r<1$ will have a worse competitive ratio for some $r>1$. To obtain a positive competitive ratio for all r, we adjust the algorithm, resulting in a competitive ratio of $\frac{1}{2r}$ for $r \ge 1$ and $\frac{r}{2}$ for $r \le 1$. We show that improving the result for any $r<1$ leads to a worse result for some $r>1$.
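The offline optimum and the prediction described above are straightforward to compute; the sketch below (illustrative only, not the paper's online algorithm) packs the smallest items greedily and returns both the optimal number of items and the average size $a$ of the packed items, i.e., the quantity that is predicted, possibly with error $r = a/\hat{a}$.

```python
def offline_opt_and_prediction(sizes, capacity):
    """Unit Profit Knapsack, offline: take items in order of increasing size
    while they still fit.  Returns (number of items packed, average size a of
    those items); the latter is what the online algorithm receives as a
    prediction, possibly with error."""
    packed, used = [], 0.0
    for s in sorted(sizes):
        if used + s <= capacity:
            packed.append(s)
            used += s
        else:
            break   # items are sorted, so nothing larger fits either
    a = sum(packed) / len(packed) if packed else 0.0
    return len(packed), a

sizes = [0.4, 0.1, 0.7, 0.2, 0.3, 0.05]
opt, a = offline_opt_and_prediction(sizes, capacity=1.0)
print(opt, a)   # 4 items; a ~= 0.1625 (0.05+0.1+0.2+0.3 fit, adding 0.4 would exceed 1.0)
```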

{"title":"Online Unit Profit Knapsack with Predictions","authors":"Joan Boyar,&nbsp;Lene M. Favrholdt,&nbsp;Kim S. Larsen","doi":"10.1007/s00453-024-01239-y","DOIUrl":"10.1007/s00453-024-01239-y","url":null,"abstract":"<div><p>A variant of the online knapsack problem is considered in the setting of predictions. In Unit Profit Knapsack, the items have unit profit, i.e., the goal is to pack as many items as possible. For Online Unit Profit Knapsack, the competitive ratio is unbounded. In contrast, it is easy to find an optimal solution offline: Pack as many of the smallest items as possible into the knapsack. The prediction available to the online algorithm is the average size of those smallest items that fit in the knapsack. For the prediction error in this hard online problem, we use the ratio <span>(r=frac{a}{hat{a}})</span> where <i>a</i> is the actual value for this average size and <span>(hat{a})</span> is the prediction. We give an algorithm which is <span>(frac{e-1}{e})</span>-competitive, if <span>(r=1)</span>, and this is best possible among online algorithms knowing <i>a</i> and nothing else. More generally, the algorithm has a competitive ratio of <span>(frac{e-1}{e}r)</span>, if <span>(r le 1)</span>, and <span>(frac{e-r}{e}r)</span>, if <span>(1 le r &lt; e)</span>. Any algorithm with a better competitive ratio for some <span>(r&lt;1)</span> will have a worse competitive ratio for some <span>(r&gt;1)</span>. To obtain a positive competitive ratio for all <i>r</i>, we adjust the algorithm, resulting in a competitive ratio of <span>(frac{1}{2r})</span> for <span>(rge 1)</span> and <span>(frac{r}{2})</span> for <span>(rle 1)</span>. We show that improving the result for any <span>(r&lt; 1)</span> leads to a worse result for some <span>(r&gt;1)</span>.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 9","pages":"2786 - 2821"},"PeriodicalIF":0.9,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01239-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141345495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0