Pub Date: 2024-08-13 | DOI: 10.1007/s00453-024-01254-z
Jakob Bossek, Dirk Sudholt
Quality diversity (QD) is a branch of evolutionary computation that has gained increasing interest in recent years. The Map-Elites QD approach defines a feature space, i.e., a partition of the search space, and stores the best solution for each cell of this space. We study a simple QD algorithm in the context of pseudo-Boolean optimisation on the “number of ones” feature space, where the i-th cell stores the best solution amongst those with a number of ones in [(i-1)k, ik-1]. Here k is a granularity parameter, 1 ≤ k ≤ n+1. We give a tight bound on the expected time until all cells are covered, for arbitrary fitness functions and all k, and analyse the expected optimisation time of QD on OneMax and other problems whose structure aligns favourably with the feature space. On combinatorial problems we show that QD efficiently finds a (1-1/e)-approximation when maximising any monotone submodular function under a single uniform cardinality constraint. Defining the feature space as the number of connected components of an edge-weighted graph, we show that QD finds a minimum spanning forest in expected polynomial time. We further consider QD’s performance on classes of transformed functions in which the feature space is not well aligned with the problem. The asymptotic performance is unaffected by transformations on easy functions like OneMax. Applying a worst-case transformation to a deceptive problem increases the expected optimisation time from O(n^2 log n) to exponential. However, QD is still faster than a (1+1) EA by an exponential factor.
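The number-of-ones Map-Elites archive described above can be sketched in a few lines. Everything below (uniform parent selection from occupied cells, standard bit mutation with rate 1/n, OneMax as the fitness function) is an illustrative choice, not the paper's exact algorithm definition:

```python
import random

def qd_onemax(n, k, max_iters=100_000):
    """Map-Elites sketch on the "number of ones" feature space.

    Cell i (0-based) stores the best solution whose number of ones lies in
    [i*k, (i+1)*k - 1]; OneMax is used as the (hypothetical) fitness."""
    archive = {}                    # cell index -> (fitness, bitstring)

    def cell(x):
        return sum(x) // k

    def fitness(x):                 # OneMax: number of ones
        return sum(x)

    x = [random.randint(0, 1) for _ in range(n)]
    archive[cell(x)] = (fitness(x), x)
    for _ in range(max_iters):
        # uniform choice among occupied cells, then standard bit mutation
        _, parent = archive[random.choice(list(archive))]
        child = [b ^ (random.random() < 1 / n) for b in parent]
        c = cell(child)
        if c not in archive or fitness(child) > archive[c][0]:
            archive[c] = (fitness(child), child)
    return archive
```

With granularity k = 1 every Hamming level gets its own cell, while k = n+1 collapses the archive to a single cell holding only the best-so-far solution.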
“Runtime Analysis of Quality Diversity Algorithms” | Algorithmica 86(10): 3252–3283 | Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00453-024-01254-z.pdf
Pub Date: 2024-08-04 | DOI: 10.1007/s00453-024-01260-1
Emilio Di Giacomo, Walter Didimo, Giuseppe Liotta, Fabrizio Montecchiani, Giacomo Ortali
Computing planar orthogonal drawings with the minimum number of bends is one of the most studied topics in Graph Drawing. The problem is known to be NP-hard, even when we want to test the existence of a rectilinear planar drawing, i.e., an orthogonal drawing without bends (Garg and Tamassia in SIAM J Comput 31(2):601–625, 2001). From the parameterized complexity perspective, the problem is fixed-parameter tractable when parameterized by the sum of three parameters: the number b of bends, the number k of vertices of degree at most two, and the treewidth tw of the input graph (Di Giacomo et al. in J Comput Syst Sci 125:129–148, 2022). We improve this last result by showing that the problem remains fixed-parameter tractable when parameterized only by b+k. As a consequence, rectilinear planarity testing lies in FPT parameterized by the number of vertices of degree at most two. We also prove that our choice of parameters is minimal, as deciding if an orthogonal drawing with at most b bends exists is already NP-hard when k is zero (i.e., the problem is para-NP-hard parameterized by k); hence, there is neither an FPT nor an XP algorithm parameterized only by k (unless P = NP). In addition, we prove that the problem is W[1]-hard parameterized by k+tw, complementing a recent result (Jansen et al. in Upward and orthogonal planarity are W[1]-hard parameterized by treewidth. CoRR, abs/2309.01264, 2023; in: Bekos MA, Chimani M (eds) Graph Drawing and Network Visualization, vol 14466, Springer, Cham, pp 203–217, 2023) that shows W[1]-hardness for the parameterization b+tw. As a consequence, we are able to trace a clear parameterized tractability landscape for the bend-minimum orthogonal planarity problem with respect to the three parameters b, k, and tw.
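For intuition, the parameter k of the FPT result is directly computable from the graph. A tiny helper, assuming a hypothetical edge-list representation, that extracts it:

```python
from collections import defaultdict

def low_degree_count(edges):
    """k = number of vertices of degree at most two (the parameter that,
    together with the bend count b, drives the FPT algorithm)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(1 for d in deg.values() if d <= 2)
```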
“On the Parameterized Complexity of Bend-Minimum Orthogonal Planarity” | Algorithmica 86(10): 3231–3251 | Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00453-024-01260-1.pdf
Pub Date: 2024-07-31 | DOI: 10.1007/s00453-024-01256-x
Jessica Enright, Duncan Lee, Kitty Meeks, William Pettersson, John Sylvester
Understanding spatial correlation is vital in many fields including epidemiology and social science. Lee et al. (Stat Comput 31(4):51, 2021. https://doi.org/10.1007/s11222-021-10025-7) recently demonstrated that improved inference for areal unit count data can be achieved by modifying a graph representing spatial correlations; specifically, they delete edges of the planar graph derived from border-sharing between geographic regions in order to maximise a specific objective function. In this paper, we address the computational complexity of the associated graph optimisation problem. We demonstrate that this optimisation problem is NP-hard; we further show intractability for two simpler variants of the problem. We follow these results with two parameterised algorithms that exactly solve the problem. The first is parameterised by both treewidth and maximum degree, while the second is parameterised by the maximum number of edges that can be removed and is restricted to settings where the input graph has maximum degree three. Both algorithms not only solve the decision problem but also enumerate all solutions, with polynomial-time precalculation, delay, and postcalculation in their respective restricted settings. For this problem, efficient enumeration allows the uncertainty in the spatial correlation to be utilised in the modelling. The first enumeration algorithm uses dynamic programming on a tree decomposition of the input graph, and has polynomial-time precalculation and linear delay if both the treewidth and maximum degree are bounded. The second algorithm is restricted to problem instances with maximum degree three, as may arise from triangulations of planar surfaces, but can output all solutions with FPT precalculation time and linear delay when the maximum number of removable edges is taken as the parameter.
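As a point of reference for the parameterisation by the number of removable edges, a brute-force enumerator over all deletion sets of bounded size can be sketched as follows; the objective `score` is a placeholder for the paper's specific objective function, which is not reproduced here:

```python
from itertools import combinations

def enumerate_removals(edges, score, max_removed):
    """Enumerate every way of deleting at most `max_removed` edges and
    return the best objective value together with all optimal deletion
    sets. Exponential-time reference only; the paper's algorithms achieve
    FPT precalculation and linear delay on degree-3 instances."""
    best, solutions = None, []
    for r in range(max_removed + 1):
        for removed in combinations(edges, r):
            s = score([e for e in edges if e not in removed])
            if best is None or s > best:
                best, solutions = s, [set(removed)]
            elif s == best:
                solutions.append(set(removed))
    return best, solutions
```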
“The Complexity of Finding and Enumerating Optimal Subgraphs to Represent Spatial Correlation” | Algorithmica 86(10): 3186–3230 | Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00453-024-01256-x.pdf
Pub Date: 2024-07-29 | DOI: 10.1007/s00453-024-01251-2
William M. Hoza, Edward Pyne, Salil Vadhan
The classic Impagliazzo–Nisan–Wigderson (INW) pseudorandom generator (PRG) (STOC ‘94) for space-bounded computation uses a seed of length O(log n · log(nw/ε) + log d) to fool ordered branching programs of length n, width w, and alphabet size d to within error ε. A series of works have shown that the analysis of the INW generator can be improved for the class of permutation branching programs or the more general regular branching programs, improving the O(log^2 n) dependence on the length n to O(log n) or Õ(log n). However, when also considering the dependence on the other parameters, these analyses still fall short of the optimal PRG seed length O(log(nwd/ε)). In this paper, we prove that any “spectral analysis” of the INW generator requires seed length

Ω( log n · log log(min{n, d}) + log n · log(w/ε) + log d )

to fool ordered permutation branching programs of length n, width w, and alphabet size d to within error ε. By “spectral analysis” we mean an analysis of the INW generator that relies only on the spectral expansion of the graphs used to construct the generator; this encompasses all prior analyses of the INW generator. Our lower bound matches the upper bound of Braverman–Rao–Raz–Yehudayoff (FOCS 2010, SICOMP 2014) for regular branching programs of alphabet size d = 2, except for a gap between their O(log n · log log n) term and our Ω(log n · log log min{n, d}) term. It also matches the upper bounds of Koucký–Nimbhorkar–Pudlák (STOC 2011), De (CCC 2011), and Steinke (ECCC 2012) for constant-width (w = O(1)) permutation branching programs of alphabet size d = 2 to within a constant factor. To fool permutation branching programs in the measure of spectral norm, we prove that any spectral analysis of the INW generator requires a seed of length Ω(log n · log log n + log n · log(1/ε)) when the width is at least polynomial in n (w = n^{Ω(1)}), matching the recent upper bound of Hoza–Pyne–Vadhan (ITCS 2021) to within a constant factor.
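The gap between the spectral lower bound and the optimal seed length can be made concrete with some illustrative arithmetic (hidden constants set to 1 and base-2 logarithms chosen arbitrarily; this is not part of the paper):

```python
import math

def spectral_lower_bound(n, w, d, eps):
    """The paper's lower bound on the seed length of any spectral analysis
    of the INW generator, with all hidden constants set to 1."""
    lg = math.log2
    return lg(n) * lg(lg(min(n, d))) + lg(n) * lg(w / eps) + lg(d)

def optimal_seed_length(n, w, d, eps):
    """The optimal PRG seed length O(log(nwd/eps)), constants again 1."""
    return math.log2(n * w * d / eps)
```

For n = 1024, w = 16, d = 2, ε = 1/16 the spectral bound already exceeds 80 under this normalisation while the optimal seed length is 19, reflecting the extra log n factor multiplying the log(w/ε) term.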
“Limitations of the Impagliazzo–Nisan–Wigderson Pseudorandom Generator Against Permutation Branching Programs” | Algorithmica 86(10): 3153–3185 | Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00453-024-01251-2.pdf
Pub Date: 2024-07-22 | DOI: 10.1007/s00453-024-01253-0
Yiqin Gao, Li Han, Jing Liu, Yves Robert, Frédéric Vivien
As real-time systems are safety critical, guaranteeing a high reliability threshold is as important as meeting all deadlines. Periodic tasks are replicated to mitigate the negative impact of transient faults, which leads to redundancy and high energy consumption. On the other hand, energy saving is widely identified as an increasingly relevant issue in real-time systems. In this paper, we formalize this challenging tri-criteria optimization problem, i.e., minimizing the expected energy consumption while enforcing the reliability threshold and meeting all task deadlines, and propose several mapping and scheduling heuristics to solve it. Specifically, a novel approach is designed to (i) map an arbitrary number of replicas onto processors and (ii) schedule each replica of each task instance on its assigned processor with as little temporal overlap as possible. The platform is composed of processing units with different characteristics, including speed profile, energy cost, and fault rate. The heterogeneity of the computing platform makes the problem more complicated, because different mappings achieve different levels of reliability and consume different amounts of energy. Moreover, scheduling plays an important role in energy saving, as the expected energy consumption is the average over all failure scenarios. Once a task replica succeeds, the other replicas of that task instance can be canceled, which calls for minimizing the overlap between any pair of replicas. Finally, to quantitatively analyze our methods, we derive a theoretical lower bound for the expected energy consumption. Comprehensive experiments are conducted on a large set of execution scenarios and parameters. The comparison results reveal that our strategies perform better than the random baseline under almost all settings, with an average gain in energy consumption of more than 40%, and our best heuristic achieves excellent performance: its energy saving falls on average only 2% short of the lower bound.
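The role of overlap in the expected energy can be seen in a toy two-replica model (an illustrative simplification, not the paper's platform model): if the first replica succeeds, the second is cancelled and is only charged for the time it ran in parallel.

```python
def expected_energy(T, power, p_fail, start2):
    """Expected energy of one task with two identical replicas of length T.
    The second replica starts at time `start2`; if the first succeeds, the
    second is cancelled, having consumed only its overlap with the first."""
    overlap_work = max(0.0, min(T, T - start2))   # time replica 2 ran before replica 1 ended
    p_succ = 1.0 - p_fail
    # replica 1 always runs fully; replica 2 runs fully only if replica 1 fails
    expected_second = p_succ * overlap_work + p_fail * T
    return power * (T + expected_second)
```

Serialising the replicas (start2 = T) minimises expected energy in this toy model at the price of a longer completion time, which illustrates the tension with deadlines that the heuristics must navigate.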
“Minimizing Energy Consumption for Real-Time Tasks on Heterogeneous Platforms Under Deadline and Reliability Constraints” | Algorithmica 86(10): 3079–3114
Pub Date: 2024-07-22 | DOI: 10.1007/s00453-024-01258-9
Carola Doerr, Duri Andrea Janett, Johannes Lengler
In a seminal paper in 2013, Witt showed that the (1+1) Evolutionary Algorithm with standard bit mutation needs time (1+o(1)) n ln n / p_1 to find the optimum of any linear function, as long as the probability p_1 of flipping exactly one bit is Θ(1). In this paper we investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator. This situation is notably different, since the stochastic domination argument used for the lower bound by Witt no longer holds. In particular, starting closer to the optimum is not necessarily an advantage, and OneMax is no longer the easiest function for arbitrary starting positions. Nevertheless, we show that Witt’s result carries over if p_1 is not too small, with different constraints for upper and lower bounds, and if the number of flipped bits has bounded expectation χ. Notably, this includes some of the heavy-tail mutation operators used in fast genetic algorithms, but not all of them. We also give examples showing that algorithms with unbounded χ have qualitatively different trajectories close to the optimum.
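A minimal (1+1) EA with a pluggable unary unbiased mutation operator, in the spirit of the setting above; the termination criterion assumes the optimum value is n, as for OneMax or linear functions with unit weights, and `sample_flips` is a caller-supplied distribution over the number of flipped bits:

```python
import random

def one_plus_one_ea(f, n, sample_flips, max_iters=10**6):
    """(1+1) EA with a unary unbiased mutation operator: `sample_flips()`
    draws how many distinct, uniformly chosen bit positions to flip.
    Returns the number of iterations until f reaches its assumed maximum
    value n."""
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(1, max_iters + 1):
        y = x[:]
        for i in random.sample(range(n), min(n, sample_flips())):
            y[i] ^= 1
        if f(y) >= f(x):    # elitist acceptance: keep the better of parent/child
            x = y
        if f(x) == n:
            return t
    return max_iters
```

Standard bit mutation corresponds to drawing the number of flips from Bin(n, 1/n), and heavy-tailed operators to power-law distributions; p_1 is then the probability that `sample_flips()` returns 1.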
“Tight Runtime Bounds for Static Unary Unbiased Evolutionary Algorithms on Linear Functions” | Algorithmica 86(10): 3115–3152
Pub Date: 2024-07-18 | DOI: 10.1007/s00453-024-01252-1
Spencer Compton, Slobodan Mitrović, Ronitt Rubinfeld
Interval scheduling is a basic algorithmic problem and a classical task in combinatorial optimization. We develop techniques for partitioning and grouping jobs based on their starting/ending times, enabling us to view an instance of interval scheduling on many jobs as a union of multiple interval scheduling instances, each containing only a few jobs. Instantiating these techniques in a dynamic setting produces several new results. For (1+ε)-approximation of job scheduling of n jobs on a single machine, we develop a fully dynamic algorithm with O(log n / ε) update and O(log n) query worst-case time. Our techniques are also applicable in a setting where jobs have weights. We design a fully dynamic deterministic algorithm whose worst-case update and query times are poly(log n, 1/ε). This is the first algorithm that maintains a (1+ε)-approximation of the maximum independent set of a collection of weighted intervals with poly(log n, 1/ε)-time updates/queries. This is an exponential improvement in 1/ε over the running time of an algorithm of Henzinger, Neumann, and Wiese [SoCG, 2020]. Our approach also removes all dependence on the values of the jobs’ starting/ending times and weights.
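For contrast with the dynamic setting, the exact static baseline is the classic earliest-finishing-time greedy, which computes a maximum set of pairwise compatible jobs in O(n log n):

```python
def max_compatible_jobs(jobs):
    """Earliest-finishing-time greedy: an exact maximum set of pairwise
    non-overlapping jobs, treating each job as a half-open interval
    [start, end)."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(jobs, key=lambda j: j[1]):
        if start >= last_end:       # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen
```

The dynamic algorithms above maintain a (1+ε)-approximation of the size of this set while jobs are inserted and deleted, rather than recomputing it from scratch.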
“New Partitioning Techniques and Faster Algorithms for Approximate Interval Scheduling” | Algorithmica 86(9): 2997–3026
Pub Date: 2024-07-18 · DOI: 10.1007/s00453-024-01255-y
David Eppstein
We show that, for planar point sets, the number of non-crossing Hamiltonian paths is polynomially bounded in the number of non-crossing paths, and the number of non-crossing Hamiltonian cycles (polygonalizations) is polynomially bounded in the number of surrounding cycles. As a consequence, we can list the non-crossing Hamiltonian paths or the polygonalizations, in time polynomial in the output size, by filtering the output of simple backtracking algorithms for non-crossing paths or surrounding cycles respectively. We do not assume that the points are in general position. To prove these results we relate the numbers of non-crossing structures to two easily-computed parameters of the point set: the minimum number of points whose removal results in a collinear set, and the number of points interior to the convex hull. These relations also lead to polynomial-time approximation algorithms for the numbers of structures of all four types, accurate to within a constant factor of the logarithm of these numbers.
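Of the two point-set parameters used in the proofs, the number of points interior to the convex hull is indeed easy to compute. A minimal integer-coordinate sketch, assuming exact (integer) arithmetic; the function names are ours:

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in ccw order,
    excluding points that are merely collinear on a hull edge."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

def on_segment(p, a, b):
    """True iff p lies on the closed segment ab."""
    return (cross(a, b, p) == 0
            and min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def interior_count(pts):
    """Number of distinct points strictly inside the convex hull
    boundary (hull vertices and points on hull edges excluded)."""
    hull = convex_hull(pts)
    if len(hull) < 3:
        return 0  # all points collinear: no interior
    edges = list(zip(hull, hull[1:] + hull[:1]))
    return sum(1 for p in set(pts)
               if not any(on_segment(p, a, b) for a, b in edges))
```

Note the `<= 0` pop condition also handles point sets not in general position, matching the abstract's setting.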
“Non-crossing Hamiltonian Paths and Cycles in Output-Polynomial Time” by David Eppstein. Algorithmica 86(9): 3027–3053 (2024). Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00453-024-01255-y.pdf
Pub Date: 2024-07-18 · DOI: 10.1007/s00453-024-01257-w
József Balogh, Felix Christian Clemen, Adrian Dumitrescu
Let X be an n-element point set in the k-dimensional unit cube \([0,1]^k\), where \(k \ge 2\). According to an old result of Bollobás and Meir (Oper Res Lett 11:19–21, 1992), there exists a cycle (tour) \(x_1, x_2, \ldots, x_n\) through the n points such that \(\left(\sum_{i=1}^n |x_i - x_{i+1}|^k\right)^{1/k} \le c_k\), where \(|x-y|\) is the Euclidean distance between x and y, \(c_k\) is an absolute constant that depends only on k, and \(x_{n+1} \equiv x_1\). From the other direction, for every \(k \ge 2\) and \(n \ge 2\), there exist n points in \([0,1]^k\) such that their shortest tour satisfies \(\left(\sum_{i=1}^n |x_i - x_{i+1}|^k\right)^{1/k} = 2^{1/k} \cdot \sqrt{k}\). For the plane, the best constant is \(c_2 = 2\), and this is the only exact value known. Bollobás and Meir showed that one can take \(c_k = 9\left(\frac{2}{3}\right)^{1/k} \cdot \sqrt{k}\) for every \(k \ge 3\) and conjectured that the best constant is \(c_k = 2^{1/k} \cdot \sqrt{k}\) for every \(k \ge 2\). Here we significantly improve the upper bound and show that one can take \(c_k = 3\sqrt{5}\left(\frac{2}{3}\right)^{1/k} \cdot \sqrt{k}\) or \(c_k = 2.91\sqrt{k}\,(1+o_k(1))\). Our bounds are constructive. We also show that \(c_3 \ge 2^{7/6}\), which disproves the conjecture for \(k=3\). Connections to matching problems, power assignment problems, and related problems, including algorithms, are discussed in this context. A slightly revised version of the Bollobás–Meir conjecture is proposed.
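The quantity bounded by \(c_k\) is easy to evaluate numerically. A small sketch (function names ours), checked on the unit square in the plane, where the four corners attain the extremal value \(2^{1/2} \cdot \sqrt{2} = 2 = c_2\):

```python
import itertools
import math

def tour_cost_k(points, k):
    """The Bollobas-Meir quantity (sum_i |x_i - x_{i+1}|^k)^(1/k)
    for a tour visiting `points` in the given cyclic order
    (x_{n+1} is identified with x_1)."""
    n = len(points)
    total = 0.0
    for i in range(n):
        a, b = points[i], points[(i + 1) % n]
        total += math.dist(a, b) ** k  # Euclidean distance, raised to k
    return total ** (1 / k)

def shortest_tour_cost_k(points, k):
    """Brute-force minimum over all cyclic orders (tiny n only):
    fix the first point to avoid counting rotations separately."""
    first, *rest = points
    return min(tour_cost_k([first, *perm], k)
               for perm in itertools.permutations(rest))
```

For the square `[(0,0), (0,1), (1,0), (1,1)]` with `k=2`, the perimeter tour gives \((4 \cdot 1^2)^{1/2} = 2\), and no other order does better.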
“On a Traveling Salesman Problem for Points in the Unit Cube” by József Balogh, Felix Christian Clemen, Adrian Dumitrescu. Algorithmica 86(9): 3054–3078 (2024). Open-access PDF: https://link.springer.com/content/pdf/10.1007/s00453-024-01257-w.pdf
Pub Date: 2024-07-12 · DOI: 10.1007/s00453-024-01250-3
Irvan Jahja, Haifeng Yu
We consider standard T-interval dynamic networks, under the synchronous timing model and the broadcast CONGEST model. In a T-interval dynamic network, the set of nodes is always fixed and there are no node failures. The edges in the network are always undirected, but the set of edges in the topology may change arbitrarily from round to round, as determined by some adversary and subject to the following constraint: for every T consecutive rounds, the topologies in those rounds must contain a common connected spanning subgraph. Let \(H_r\) be the maximum (in terms of number of edges) such subgraph for rounds r through \(r+T-1\). We define the backbone diameter \(d\) of a T-interval dynamic network to be the maximum diameter of all such \(H_r\)’s, for \(r \ge 1\). We use n to denote the number of nodes in the network. Within such a context, we consider a range of fundamental distributed computing problems including Count/Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood. Existing algorithms for these problems all have time complexity of \(\Omega(n)\) rounds, even for \(T = \infty\) and even when d is as small as O(1). This paper presents a novel approach/framework, based on the idea of massively parallel aggregation. Following this approach, we develop a novel deterministic Count algorithm with \(O(d^3 \log^2 n)\) complexity, for T-interval dynamic networks with \(T \ge c \cdot d^2 \log^2 n\). Here c is a (sufficiently large) constant independent of d, n, and T. To our knowledge, our algorithm is the very first such algorithm whose complexity does not contain a \(\Theta(n)\) term. This paper further develops novel algorithms for solving Max/Median/Sum/LeaderElect/Consensus/ConfirmedFlood, while incurring \(O(d^3\,\mathrm{polylog}(n))\) complexity. Again, for all these problems, our algorithms are the first ones whose time complexity does not contain a \(\Theta(n)\) term.
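For intuition on the \(\Omega(n)\) barrier: even simple broadcast flooding needs as many rounds as the network diameter, which can be \(\Theta(n)\) on a path. A toy synchronous-flooding simulation (ours, not the paper's algorithm; the encoding of per-round topologies is an assumption):

```python
import itertools

def flood_rounds(adj_per_round, source):
    """Simulate synchronous broadcast flooding of one token in a
    dynamic network: each round, every informed node informs all of
    its current neighbours. `adj_per_round` yields one adjacency dict
    {node: set_of_neighbours} per round, over the fixed node set.
    Returns the number of rounds until every node is informed."""
    informed = {source}
    rounds = 0
    for adj in adj_per_round:
        if len(informed) == len(adj):
            break  # everyone informed
        informed |= {v for u in informed for v in adj[u]}
        rounds += 1
    return rounds

# A static path on 5 nodes: flooding from one end takes
# diameter = n - 1 = 4 rounds, i.e. Theta(n) on paths.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
```

On a topology whose backbone diameter d is small (e.g. a clique), flooding finishes in O(d) rounds; the hard part the paper addresses is aggregating values (Count, Sum, ...) in poly(d, log n) rounds despite adversarial topology changes.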
“Sublinear Algorithms in T-Interval Dynamic Networks” by Irvan Jahja, Haifeng Yu. Algorithmica 86(9): 2959–2996 (2024). DOI: 10.1007/s00453-024-01250-3.