
Algorithmica: Latest Publications

Reconfiguration of Multisets with Applications to Bin Packing
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-21 | DOI: 10.1007/s00453-025-01324-w
Jeffrey Kam, Shahin Kamali, Avery Miller, Naomi Nishimura

We use the reconfiguration framework to analyze problems that involve the rearrangement of items among groups. In various applications, a group of items could correspond to the files or jobs assigned to a particular machine, and the goal of rearrangement could be improving efficiency or increasing locality. To cover problems arising in a wide range of application areas, we define the general Repacking problem as the rearrangement of multisets of multisets. We present hardness results for the general case and algorithms for various restricted classes of instances. By limiting the total size of items in each multiset, our results can be viewed as an offline approach to Bin Packing, in which each bin is represented as a multiset. In addition to providing the first results on reconfiguration of multisets, our contributions open up several research avenues: the interplay between reconfiguration and online algorithms and parallel algorithms; the use of the tools of linear programming in reconfiguration; and, in the longer term, a focus on extra resources in reconfiguration. A preliminary version of this paper appeared in the proceedings of the 18th International Conference and Workshops on Algorithms and Computation (WALCOM 2024).
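
To make the repacking step concrete, here is a minimal illustrative sketch (not code from the paper): each bin is a multiset of item sizes, and one reconfiguration step moves a single item between bins subject to a capacity bound. The function names and the capacity parameter are assumptions chosen for the example.

```python
from collections import Counter

def can_move(bins, src, dst, item, capacity):
    """Check whether moving one copy of `item` from bins[src] to bins[dst]
    keeps the destination bin within `capacity` (one elementary step of a
    repacking/reconfiguration sequence, illustrative only)."""
    if bins[src][item] == 0:
        return False  # item not present in the source bin
    return sum(bins[dst].elements()) + item <= capacity

def apply_move(bins, src, dst, item):
    """Return a new configuration with one copy of `item` moved."""
    new_bins = [Counter(b) for b in bins]
    new_bins[src][item] -= 1
    new_bins[dst][item] += 1
    return new_bins

# Two bins of capacity 10; each bin is a multiset of item sizes.
bins = [Counter({4: 2, 2: 1}), Counter({5: 1})]
if can_move(bins, 0, 1, 4, capacity=10):
    bins = apply_move(bins, 0, 1, 4)
print([sorted(b.elements()) for b in bins])  # [[2, 4], [4, 5]]
```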

Algorithmica 87(12): 1933–1996. Citations: 0.
Log-Diameter MST Verification and Sensitivity in MPC
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-20 | DOI: 10.1007/s00453-025-01332-w
Sam Coy, Artur Czumaj, Gopinath Mishra, Anish Mukherjee

We consider two natural variants of the problem of minimum spanning tree (MST) of a graph in the parallel setting: MST verification (verifying if a given tree is an MST) and the sensitivity analysis of an MST (finding the lowest cost replacement edge for each edge of the MST). These two problems have been studied extensively for sequential algorithms and for parallel algorithms in the PRAM model of computation. In this paper, we extend the study to the standard model of Massive Parallel Computation (MPC). It is known that for graphs of diameter $D$, the connectivity problem can be solved in $O(\log D + \log \log n)$ rounds on an MPC with low local memory (each machine can store only $O(n^{\delta})$ words for an arbitrary constant $\delta > 0$) and with linear global memory, that is, with optimal utilization. However, for the related task of finding an MST, we need $\Omega(\log D_{\text{MST}})$ rounds, where $D_{\text{MST}}$ denotes the diameter of the minimum spanning tree. The state of the art upper bound for MST is $O(\log n)$ rounds; the result follows by simulating existing PRAM algorithms. While this bound may be optimal for general graphs, the benchmark of connectivity and the lower bound for MST suggest the target bound of $O(\log D_{\text{MST}})$ rounds, or possibly $O(\log D_{\text{MST}} + \log \log n)$ rounds. As for now, we do not know if this bound is achievable for the MST problem on an MPC with low local memory and linear global memory. In this paper, we show that two natural variants of the MST problem, MST verification and sensitivity analysis of an MST, can be completed in $O(\log D_T)$ rounds on an MPC with low local memory and with linear global memory, that is, with optimal utilization; here $D_T$ is the diameter of the input "candidate MST" $T$. The algorithms asymptotically match our lower bound, conditioned on the 1-vs-2-cycle conjecture.
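
For intuition about what is being verified: a spanning tree is an MST exactly when every non-tree edge is at least as heavy as the heaviest tree edge on the path between its endpoints (the cycle rule). The sketch below is a plain sequential illustration of that check, assuming an edge-list representation; it is not the MPC algorithm of the paper.

```python
from collections import defaultdict, deque

def is_mst(edges, tree_edges):
    """Cycle-rule check: the spanning tree `tree_edges` is an MST of the
    weighted graph `edges` iff no non-tree edge (u, v, w) is lighter than
    the heaviest tree edge on the tree path from u to v."""
    adj = defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def max_on_path(u, v):
        # BFS over the (unique) tree path, tracking the heaviest edge seen.
        seen = {u: 0}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            if x == v:
                return seen[x]
            for y, w in adj[x]:
                if y not in seen:
                    seen[y] = max(seen[x], w)
                    queue.append(y)
        raise ValueError("tree is not spanning/connected")

    tree_set = {frozenset((u, v)) for u, v, _ in tree_edges}
    return all(w >= max_on_path(u, v)
               for u, v, w in edges if frozenset((u, v)) not in tree_set)

edges = [(0, 1, 1), (1, 2, 2), (0, 2, 5), (2, 3, 1)]
tree = [(0, 1, 1), (1, 2, 2), (2, 3, 1)]
print(is_mst(edges, tree))  # True: the non-tree edge (0, 2, 5) is heaviest on its cycle
```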

Algorithmica 87(12): 1899–1932. Citations: 0.
Parameterized Complexity of Path Set Packing
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-19 | DOI: 10.1007/s00453-025-01329-5
N. R. Aravind, Roopam Saxena

In Path Set Packing, the input is an undirected graph $G$, a collection $\mathcal{P}$ of simple paths in $G$, and a positive integer $k$. The problem is to decide whether there exist $k$ edge-disjoint paths in $\mathcal{P}$. We study the parameterized complexity of Path Set Packing with respect to both natural and structural parameters. We show that the problem is W[1]-hard with respect to vertex cover number, and W[1]-hard with respect to pathwidth plus solution size when the input graph is a grid. These results answer an open question raised in Xu and Zhang (in: Wang L, Zhu D (eds) Computing and Combinatorics: 24th International Conference, COCOON 2018, Qing Dao, China, July 2–4, 2018, Proceedings. Lecture Notes in Computer Science, vol 10976, pp 305–315. Springer, 2018, https://doi.org/10.1007/978-3-319-94776-1_26). On the positive side, we present an FPT algorithm parameterized by feedback vertex number plus maximum degree, and an FPT algorithm parameterized by treewidth plus maximum degree plus the maximum length of a path in $\mathcal{P}$. These positive results complement the hardness of Path Set Packing with respect to any subset of the parameters used in the FPT algorithms. We also give a 4-approximation algorithm for the maximum path set packing problem which runs in FPT time when parameterized by feedback edge number.
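
As a concrete reference point for the decision problem itself (independent of the parameterized algorithms in the paper), a brute-force check for k pairwise edge-disjoint paths can be sketched as follows; the representation of paths as edge sets is an assumption for illustration.

```python
from itertools import combinations

def path_set_packing(paths, k):
    """Brute force: does `paths` contain k pairwise edge-disjoint paths?
    Each path is an iterable of undirected edges (u, v)."""
    edge_sets = [frozenset(frozenset(e) for e in p) for p in paths]
    for combo in combinations(edge_sets, k):
        used = set()
        ok = True
        for p in combo:
            if used & p:      # shares an edge with a previously chosen path
                ok = False
                break
            used |= p
        if ok:
            return True
    return False

# Three paths on the vertices 0, 1, 2.
paths = [[(0, 1)], [(1, 2)], [(0, 1), (1, 2)]]
print(path_set_packing(paths, 2))  # True  (the two single-edge paths are disjoint)
print(path_set_packing(paths, 3))  # False (the third path reuses both edges)
```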

Algorithmica 87(12): 1864–1898. Citations: 0.
Comma Selection Outperforms Plus Selection on OneMax with Randomly Planted Optima
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-18 | DOI: 10.1007/s00453-025-01330-y
Joost Jorritsma, Johannes Lengler, Dirk Sudholt

Evolutionary algorithms (EAs) are general-purpose optimisation algorithms that maintain a population (multiset) of candidate solutions and apply variation operators to create new solutions called offspring. A new population is typically formed using one of two strategies: a $(\mu+\lambda)$ EA (plus selection) keeps the best $\mu$ search points out of the union of $\mu$ parents in the old population and $\lambda$ offspring, whereas a $(\mu,\lambda)$ EA (comma selection) discards all parents and only keeps the best $\mu$ out of $\lambda$ offspring. Comma selection may help to escape from local optima, however when and how it is beneficial is subject to an ongoing debate. We propose a new benchmark function to investigate the benefits of comma selection: the well known benchmark function OneMax with randomly planted local optima, generated by frozen noise. We show that comma selection (the $(1,\lambda)$ EA) is faster than plus selection (the $(1+\lambda)$ EA) on this benchmark, in a fixed-target scenario, and for offspring population sizes $\lambda$ for which both algorithms behave differently. For certain parameters, the $(1,\lambda)$ EA finds the target in $\Theta(n \ln n)$ evaluations, with high probability (w.h.p.), while the $(1+\lambda)$ EA w.h.p. requires $\omega(n^2)$ evaluations. We further show that the advantage of comma selection is not arbitrarily large: w.h.p. comma selection outperforms plus selection at most by a factor of $O(n \ln n)$ for most reasonable parameter choices. We develop novel methods for analysing frozen noise and give powerful and general fixed-target results with tail bounds that are of independent interest.
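
For readers unfamiliar with the two selection schemes, the following minimal sketch contrasts a (1,λ) EA with a (1+λ) EA on plain OneMax (without the randomly planted optima studied in the paper); all parameter values are illustrative assumptions.

```python
import random

def onemax(x):
    return sum(x)

def mutate(x, rate):
    # Flip each bit independently with the given probability.
    return [b ^ (random.random() < rate) for b in x]

def evolve(n=50, lam=12, comma=True, generations=3000, seed=0):
    """(1,lambda) EA if comma=True, otherwise (1+lambda) EA, on OneMax."""
    random.seed(seed)
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        offspring = [mutate(parent, 1.0 / n) for _ in range(lam)]
        best = max(offspring, key=onemax)
        if comma:
            parent = best                      # comma: parent is always discarded
        elif onemax(best) >= onemax(parent):
            parent = best                      # plus: keep parent unless beaten
        if onemax(parent) == n:
            break
    return onemax(parent)

# With these parameters both variants typically reach the optimum n = 50.
print(evolve(comma=True), evolve(comma=False))
```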

Algorithmica 87(12): 1804–1863 (open access). Citations: 0.
Decidability of Fully Quantum Nonlocal Games with Noisy Maximally Entangled States
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-13 | DOI: 10.1007/s00453-025-01339-3
Minglong Qin, Penghui Yao

This paper considers the decidability of fully quantum nonlocal games with noisy maximally entangled states. Fully quantum nonlocal games are a generalization of nonlocal games, where both questions and answers are quantum and the referee performs a binary POVM measurement to decide whether they win the game after receiving the quantum answers from the players. The quantum value of a fully quantum nonlocal game is the supremum of the probability that they win the game, where the supremum is taken over all the possible entangled states shared between the players and all the valid quantum operations performed by the players. The seminal work $\text{MIP}^* = \text{RE}$ (Ji et al., MIP* = RE, 2020; Ji et al., Quantum soundness of the classical low individual degree test, 2020) implies that it is undecidable to approximate the quantum value of a fully nonlocal game. This still holds even if the players are only allowed to share (arbitrarily many copies of) maximally entangled states. This paper investigates the case that the shared maximally entangled states are noisy. We prove that there is a computable upper bound on the copies of noisy maximally entangled states for the players to win a fully quantum nonlocal game with a probability arbitrarily close to the quantum value. This implies that it is decidable to approximate the quantum values of these games. Hence, the hardness of approximating the quantum value of a fully quantum nonlocal game is not robust against the noise in the shared states. This paper is built on the framework for the decidability of non-interactive simulations of joint distributions (Ghazi et al., in: 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), Los Alamitos, 2016; De et al., in: Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, Philadelphia, 2018; Ghazi et al., Proceedings of the 33rd Computational Complexity Conference, 2018) and generalizes the analogous result for nonlocal games in Qin and Yao (SIAM J Comput 50(6):1800–1891, 2021). We extend the theory of Fourier analysis to the space of super-operators and prove several key results including an invariance principle and a dimension reduction for super-operators. These results are interesting in their own right and are believed to have further applications.

Algorithmica 87(12): 1732–1803. Citations: 0.
On Equivalence of Parameterized Inapproximability of k-Median, k-Max-Coverage, and 2-CSP
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-13 | DOI: 10.1007/s00453-025-01338-4
Karthik C.S., Euiwoong Lee, Pasin Manurangsi

The Parameterized Inapproximability Hypothesis (PIH) is a central question in the field of parameterized complexity. PIH asserts that given as input a 2-CSP on $k$ variables and alphabet size $n$, it is W[1]-hard parameterized by $k$ to distinguish if the input is perfectly satisfiable or if every assignment to the input violates 1% of the constraints. An important implication of PIH is that it yields the tight parameterized inapproximability of the k-MaxCoverage problem. In the k-MaxCoverage problem, we are given as input a set system, a threshold $\tau > 0$, and a parameter $k$, and the goal is to determine if there exist $k$ sets in the input whose union is at least a $\tau$ fraction of the entire universe. PIH is known to imply that it is W[1]-hard parameterized by $k$ to distinguish if there are $k$ input sets whose union is at least a $\tau$ fraction of the universe or if the union of every $k$ input sets is not much larger than a $\tau \cdot (1-\frac{1}{e})$ fraction of the universe. In this work we present a gap preserving FPT reduction (in the reverse direction) from the k-MaxCoverage problem to the aforementioned 2-CSP problem, thus showing that the assertion that approximating the k-MaxCoverage problem to some constant factor is W[1]-hard implies PIH. In addition, we present a gap preserving FPT reduction from the k-Median problem (in general metrics) to the k-MaxCoverage problem, further highlighting the power of gap preserving FPT reductions over classical gap preserving polynomial time reductions.
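
To fix ideas, the k-MaxCoverage decision problem mentioned above can be stated directly as an exhaustive check; this brute-force sketch is purely illustrative and unrelated to the reductions in the paper.

```python
from itertools import combinations

def k_max_coverage(universe, sets, k, tau):
    """Decide whether some k of the given sets cover at least a tau
    fraction of `universe` (exhaustive search, illustrative only)."""
    universe = set(universe)
    need = tau * len(universe)
    return any(len(set().union(*combo)) >= need
               for combo in combinations(sets, k))

sets = [{1, 2, 3}, {3, 4}, {5}, {1, 5}]
print(k_max_coverage(range(1, 6), sets, k=2, tau=0.8))  # True: e.g. {1,2,3} and {3,4} cover 4 of 5
```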

Algorithmica 87(12): 1711–1731. Citations: 0.
On the Parameterized Complexity of Eulerian Strong Component Arc Deletion
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-07 | DOI: 10.1007/s00453-025-01336-6
Václav Blažej, Satyabrata Jana, M. S. Ramanujan, Peter Strulo

In this paper, we study the Eulerian Strong Component Arc Deletion problem, where the input is a directed multigraph and the goal is to delete the minimum number of arcs to ensure every strongly connected component of the resulting digraph is Eulerian. This problem is a natural extension of the Directed Feedback Arc Set problem and is also known to be motivated by certain scenarios arising in the study of housing markets. The complexity of the problem, when parameterized by solution size (i.e., size of the deletion set), has remained unresolved and has been highlighted in several papers. In this work, we answer this question by ruling out (subject to the usual complexity assumptions) a fixed-parameter algorithm (FPT algorithm) for this parameter and conduct a broad analysis of the problem with respect to other natural parameterizations. We prove both positive and negative results. Among these, we demonstrate that the problem is also hard (W[1]-hard or even para-NP-hard) when parameterized by either treewidth or maximum degree alone. Complementing our lower bounds, we establish that the problem is in XP when parameterized by treewidth and FPT when parameterized either by both treewidth and maximum degree or by both treewidth and solution size. We show that on simple digraphs, these algorithms have near-optimal asymptotic dependence on the treewidth assuming the Exponential Time Hypothesis.
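
The target condition of the problem is easy to verify directly: a strongly connected component is Eulerian exactly when, counting only arcs inside the component, every vertex has equal in- and out-degree. Below is a small verifier sketch using networkx (a dependency assumed here for brevity); it checks the condition but is not the deletion algorithm itself.

```python
import networkx as nx

def every_scc_eulerian(arcs):
    """Check whether every strongly connected component of the multidigraph
    given by `arcs` is Eulerian, i.e. within each component every vertex has
    equal in- and out-degree with respect to arcs of that component."""
    g = nx.MultiDiGraph(arcs)
    for comp in nx.strongly_connected_components(g):
        sub = g.subgraph(comp)  # keeps only arcs with both endpoints in comp
        if any(sub.in_degree(v) != sub.out_degree(v) for v in comp):
            return False
    return True

# A 2-cycle plus two parallel arcs leaving it:
print(every_scc_eulerian([(0, 1), (1, 0), (1, 2), (1, 2)]))  # True: SCCs are {0,1} and {2}
print(every_scc_eulerian([(0, 1), (1, 0), (0, 1)]))          # False: vertex 0 has out-degree 2, in-degree 1
```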

Algorithmica 87(11): 1669–1709 (open access). Citations: 0.
ShockHash: Near Optimal-Space Minimal Perfect Hashing Beyond Brute-Force
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-02 | DOI: 10.1007/s00453-025-01321-z
Hans-Peter Lehmann, Peter Sanders, Stefan Walzer

A minimal perfect hash function (MPHF) maps a set $S$ of $n$ keys to the first $n$ integers without collisions. There is a lower bound of $n \log_2 e - \mathcal{O}(\log n) \approx 1.44n$ bits needed to represent an MPHF. This can be reached by a brute-force algorithm that tries $e^n$ hash function seeds in expectation and stores the first seed that leads to an MPHF. The most space-efficient previous algorithms for constructing MPHFs all use such a brute-force approach as a basic building block. In this paper, we introduce ShockHash – Small, heavily overloaded cuckoo hash tables for minimal perfect hashing. ShockHash uses two hash functions $h_0$ and $h_1$, hoping for the existence of a function $f : S \rightarrow \{0,1\}$ such that $x \mapsto h_{f(x)}(x)$ is an MPHF on $S$. It then uses a 1-bit retrieval data structure to store $f$ using $n + o(n)$ bits. In graph terminology, ShockHash generates $n$-edge random graphs until stumbling on a pseudoforest – where each component contains as many edges as nodes. Using cuckoo hashing, ShockHash then derives an MPHF from the pseudoforest in linear time. We show that ShockHash needs to try only about $(e/2)^n \approx 1.359^n$ seeds in expectation. This reduces the space for storing the seed by roughly $n$ bits (maintaining the asymptotically optimal space consumption) and speeds up construction by almost a factor of $2^n$ compared to brute-force. Bipartite ShockHash reduces the expected construction time again to about $1.166^n$ by maintaining a pool of candidate hash functions and checking all possible pairs. Using ShockHash as a building block within the RecSplit framework we obtain ShockHash-RS, which can be constructed up to 3 orders of magnitude faster than competing approaches. ShockHash-RS can build an MPHF for 10 million keys with 1.489 bits per key in about half an hour. When instead using ShockHash after an efficient $k$-perfect hash function, it achieves space usage similar to the best competitors, while being significantly faster to construct and query.
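
The brute-force building block referred to above is simple to state in code: try seeds until the seeded hash function maps the n keys bijectively onto {0, ..., n-1}. The sketch below shows only that baseline (expected about e^n trials), not ShockHash; the particular seeded hash built from hashlib is an assumption for illustration.

```python
import hashlib
from itertools import count

def h(key, seed, n):
    """Seeded hash of a string key into {0, ..., n-1} (illustrative only)."""
    digest = hashlib.blake2b(key.encode(), salt=seed.to_bytes(8, "little")).digest()
    return int.from_bytes(digest[:8], "little") % n

def brute_force_mphf_seed(keys):
    """Return the first seed for which `h` is a bijection from `keys`
    onto {0, ..., n-1}; the expected number of trials is about e^n."""
    n = len(keys)
    for seed in count():
        if len({h(k, seed, n) for k in keys}) == n:
            return seed

keys = ["apple", "banana", "cherry", "date"]
seed = brute_force_mphf_seed(keys)
print(seed, [h(k, seed, len(keys)) for k in keys])  # a permutation of 0..3
```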

Algorithmica 87(11): 1620–1668 (open access). Citations: 0.
Achieving Tight $O(4^k)$ Runtime Bounds on Jump_k by Proving that Genetic Algorithms Evolve Near-Maximal Population Diversity
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-07-28 | DOI: 10.1007/s00453-025-01323-x
Andre Opris, Johannes Lengler, Dirk Sudholt

The $\text{Jump}_k$ benchmark was the first problem for which crossover was proven to give a speed-up over mutation-only evolutionary algorithms. Jansen and Wegener (Algorithmica 2002) proved an upper bound of $O(\mathrm{poly}(n) + 4^k/p_c)$ for the $(\mu+1)$ Genetic Algorithm ($(\mu+1)$ GA), but only for unrealistically small crossover probabilities $p_c$. To this date, it remains an open problem to prove similar upper bounds for realistic $p_c$; the best known runtime bound, in terms of function evaluations, for $p_c = \Omega(1)$ is $O((n/\chi)^{k-1})$, $\chi$ a positive constant. We provide a novel approach and analyse the evolution of the population diversity, measured as sum of pairwise Hamming distances, for a variant of the $(\mu+1)$ GA on $\text{Jump}_k$. The $(\mu+1)$-$\lambda_c$-GA creates one offspring in each generation either by applying mutation to one parent or by applying crossover $\lambda_c$ times to the same two parents (followed by mutation), to amplify the probability of creating an accepted offspring in generations with crossover. We show that population diversity in the $(\mu+1)$-$\lambda_c$-GA converges to an equilibrium of near-perfect diversity. This yields an improved time bound of $O(\mu n \log(\mu) + 4^k)$ function evaluations for a range of $k$ under the mild assumptions $p_c = O(1/k)$ and $\mu \in \Omega(kn)$. For all constant $k$, the restriction is satisfied for some $p_c = \Omega(1)$ and it implies that the expected runtime for all constant $k$ and an appropriate $\mu = \Theta(kn)$ is bounded by $O(n^2 \log n)$, irrespective of $k$. For larger $k$, the expected time of the $(\mu+1)$-$\lambda_c$-GA is $\Theta(4^k)$, which is tight for a large class of unbiased black-box algorithms and faster than the original $(\mu+1)$ GA by a factor of $\Omega(1/p_c)$. We also show that our analysis can be extended to other unitation functions such as $\text{Jump}_{k,\delta}$ and Hurdle.
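
Two ingredients of the analysis can be written down directly: the Jump_k fitness function and the diversity measure (sum of pairwise Hamming distances). The sketch below is illustrative only and does not implement the (μ+1)-λ_c-GA.

```python
from itertools import combinations

def jump_k(x, k):
    """Classic Jump_k benchmark (Jansen & Wegener): fitness increases with
    the number of ones except in a gap of size k-1 just below the optimum."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

def diversity(population):
    """Population diversity as the sum of pairwise Hamming distances."""
    return sum(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(population, 2))

pop = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 1, 1]]
print(jump_k([1, 1, 1, 0], k=2), jump_k([1, 1, 1, 1], k=2))  # 1 (in the gap) and 6 (optimum)
print(diversity(pop))  # 8
```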

Algorithmica 87(11): 1564–1619 (open access). Citations: 0.
Smoothed Analysis of the 2-Opt Heuristic for the TSP under Gaussian Noise
IF 0.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-07-21 | DOI: 10.1007/s00453-025-01335-7
Marvin Künnemann, Bodo Manthey, Rianne Veenstra

The 2-opt heuristic is a very simple local search heuristic for the traveling salesperson problem. In practice it usually converges quickly to solutions within a few percent of optimality. In contrast to this, its running-time is exponential and its approximation performance is poor in the worst case. Englert, Röglin, and Vöcking (Algorithmica, 2014) provided a smoothed analysis in the so-called one-step model in order to explain the performance of 2-opt on d-dimensional Euclidean instances, both in terms of running-time and in terms of approximation ratio. However, translating their results to the classical model of smoothed analysis, where points are perturbed by Gaussian distributions with standard deviation $\sigma$, yields only weak bounds. We prove bounds that are polynomial in $n$ and $1/\sigma$ for the smoothed running-time with Gaussian perturbations. In addition, our analysis for Euclidean distances is much simpler than the existing smoothed analysis. Furthermore, we prove a smoothed approximation ratio of $O(\log(1/\sigma))$. This bound is almost tight, as we also provide a lower bound of $\Omega(\frac{\log n}{\log \log n})$ for $\sigma = O(1/\sqrt{n})$. Our main technical novelty here is that, different from existing smoothed analyses, we do not separately analyze objective values of the global and local optimum on all inputs (which only allows for a bound of $O(1/\sigma)$), but simultaneously bound them on the same input.
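
The 2-opt heuristic itself is short to state: repeatedly reverse a segment of the tour whenever doing so shortens it. A minimal Euclidean sketch follows (illustrative; it uses uniformly random points rather than the Gaussian perturbation model of the smoothed analysis).

```python
import math
import random

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts):
    """Plain 2-opt local search: reverse tour[i:j] while it improves the tour."""
    tour = list(range(len(pts)))
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(pts, candidate) < tour_length(pts, tour) - 1e-12:
                    tour, improved = candidate, True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(12)]
tour = two_opt(pts)
print(round(tour_length(pts, tour), 3))  # length of a locally optimal tour
```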

Algorithmica 87(11): 1518–1563 (open access). Citations: 0.