Pub Date: 2024-01-10. DOI: 10.1007/s00453-023-01193-1
Philip Bille, Inge Li Gørtz, Tord Stordalen
We consider the predecessor problem on the ultra-wide word RAM model of computation, which extends the word RAM model with ultrawords consisting of $w^2$ bits (TAMC, 2015). The model supports arithmetic and Boolean operations on ultrawords, in addition to scattered memory operations that access or modify $w$ (potentially non-contiguous) memory addresses simultaneously. The ultra-wide word RAM model captures (and idealizes) modern vector processor architectures. Our main result is a simple, linear-space data structure that supports predecessor in constant time and updates in amortized, expected constant time. This improves the space of the previous constant-time solution, which uses space in the order of the size of the universe. Our result holds even in a weaker model where ultrawords consist of $w^{1+\epsilon}$ bits for any $\epsilon > 0$. It is based on a new implementation of the classic x-fast trie data structure of Willard (Inform Process Lett 17(2):81–84, https://doi.org/10.1016/0020-0190(83)90075-3, 1983) combined with a new dictionary data structure that supports fast parallel lookups.
Predecessor on the Ultra-Wide Word RAM. Algorithmica 86(5): 1578–1599. Open access: https://link.springer.com/content/pdf/10.1007/s00453-023-01193-1.pdf
Pub Date: 2024-01-09. DOI: 10.1007/s00453-023-01201-4
Dylan Hyatt-Denesik, Mirmahdi Rahgoshay, Mohammad R. Salavatipour
In this paper we study the classical problem of throughput maximization. In this problem we have a collection $J$ of $n$ jobs, each having a release time $r_j$, deadline $d_j$, and processing time $p_j$. They have to be scheduled non-preemptively on $m$ identical parallel machines. The goal is to find a schedule which maximizes the number of jobs scheduled entirely within their $[r_j, d_j]$ window. This problem has been studied extensively (even for the case of $m=1$). Several special cases of the problem remain open. Bar-Noy et al. (Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, May 1–4, 1999, Atlanta, Georgia, USA, pp. 622–631. ACM, 1999, https://doi.org/10.1145/301250.301420) presented an algorithm with ratio $1-1/(1+1/m)^m$ for $m$ machines, which approaches $1-1/e$ as $m$ increases. For $m=1$, Chuzhoy et al. (42nd Annual Symposium on Foundations of Computer Science (FOCS) 2001, 14–17 October 2001, Las Vegas, Nevada, USA, pp. 348–356. IEEE Computer Society, 2001) presented an algorithm with approximation ratio $1-\frac{1}{e}-\varepsilon$ (for any $\varepsilon > 0$). Recently Im et al. (SIAM J Discrete Math 34(3):1649–1669, 2020) presented an algorithm with ratio $1-1/e+\varepsilon_0$ for some absolute constant $\varepsilon_0 > 0$ for any fixed $m$. They also presented an algorithm with ratio $1-O(\sqrt{\log m/m})-\varepsilon$ for general $m$, which approaches 1 as $m$ grows. The approximability of the problem for $m=O(1)$ remains a major open question. Even for the case of $m=1$ and $c=O(1)$ distinct processing times the problem is open (Sgall in: Algorithms - ESA 2012 - 20th Annual European Symposium, Ljubljana, Slovenia, September 10–12, 2012. Proceedings, pp 2–11, 2012). In this paper we study the case of $m=O(1)$ and show that if there are $c$ distinct processing times, i.e. the $p_j$'s come from a set of size $c$, then there is a randomized $(1-\varepsilon)$-approximation that runs in time $O(n^{mc^7\varepsilon^{-6}}\log T)$, where $T$ is the largest deadline. Therefore, for constant $m$ and constant $c$ this yields a PTAS. Our algorithm is based on proving structural properties for a near-optimum solution that allows one to use dynamic programming with pruning.
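For intuition, a brute-force baseline for the single-machine case ($m=1$) can be written directly: some processing order, with each job started as early as possible and skipped if it would miss its deadline, achieves the optimal throughput, so trying all permutations is exact (and exponential). This sketch is only a correctness reference, not the paper's algorithm; names are illustrative.

```python
from itertools import permutations

def max_throughput(jobs):
    """Exact maximum throughput on a single machine by brute force.
    jobs: list of (r_j, d_j, p_j) triples; a job counts only if it runs
    non-preemptively and finishes within its [r_j, d_j] window."""
    best = 0
    for order in permutations(range(len(jobs))):
        t = 0      # time the machine becomes free
        done = 0   # jobs completed within their window under this order
        for j in order:
            r, d, p = jobs[j]
            start = max(t, r)
            if start + p <= d:   # fits: run it; otherwise skip this job
                t = start + p
                done += 1
        best = max(best, done)
    return best
```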
Approximations for Throughput Maximization. Algorithmica 86(5): 1545–1577.
Pub Date: 2024-01-09. DOI: 10.1007/s00453-023-01199-9
Ajinkya Gaikwad, Soumen Maity
In this paper, we study the Harmless Set problem from a parameterized complexity perspective. Given a graph $G = (V,E)$, a threshold function $t : V \rightarrow \mathbb{N}$ and an integer $k$, the goal is to find a subset of vertices $S \subseteq V$ of size at least $k$ such that every vertex $v \in V$ has fewer than $t(v)$ neighbours in $S$. On the positive side, we obtain fixed-parameter algorithms for the problem when parameterized by the neighbourhood diversity, the twin-cover number and the vertex integrity of the input graph. We complement two of these results from the negative side. On dense graphs, we show that the problem is W[1]-hard parameterized by the cluster vertex deletion number (a natural generalization of the twin-cover number). We also show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters such as the feedback vertex set number, pathwidth, and treedepth (a natural generalization of the vertex integrity). We thereby resolve an open question stated by Bazgan and Chopin (Discrete Optim 14(C):170–182, 2014) concerning the complexity of Harmless Set parameterized by the treewidth of the input graph. We also show that the special case of Harmless Set where each vertex has its threshold set to half of its degree (the so-called Majority Harmless Set problem) is W[1]-hard when parameterized by the treewidth of the input graph. Finally, given a graph $G$ and an irredundant $c$-expression of $G$, we prove that Harmless Set can be solved in XP time when parameterized by clique-width.
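The problem definition is easy to check against a small brute-force sketch (exponential, for intuition only; not one of the paper's algorithms, and all names are illustrative):

```python
from itertools import combinations

def largest_harmless_set(adj, t):
    """Largest S such that every vertex v has fewer than t[v] neighbours in S,
    found by exhaustive search over subsets in decreasing size.
    adj: vertex -> set of neighbours; t: vertex -> threshold."""
    verts = list(adj)
    for size in range(len(verts), -1, -1):
        for cand in combinations(verts, size):
            S = set(cand)
            if all(len(adj[v] & S) < t[v] for v in verts):
                return S  # first hit is a maximum harmless set
    return set()
```

A Harmless Set instance $(G, t, k)$ is then a yes-instance iff the returned set has size at least $k$.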
On Structural Parameterizations of the Harmless Set Problem. Algorithmica 86(5): 1475–1511.
Pub Date: 2024-01-09. DOI: 10.1007/s00453-023-01200-5
Sergio Cabello, David Gajser
For a set $\mathcal{Q}$ of points in the plane and a real number $\delta \ge 0$, let $\mathbb{G}_\delta(\mathcal{Q})$ be the graph defined on $\mathcal{Q}$ by connecting each pair of points at distance at most $\delta$. We consider the connectivity of $\mathbb{G}_\delta(\mathcal{Q})$ in the best scenario when the location of a few of the points is uncertain, but we know for each uncertain point a line segment that contains it. More precisely, we consider the following optimization problem: given a set $\mathcal{P}$ of $n-k$ points in the plane and a set $\mathcal{S}$ of $k$ line segments in the plane, find the minimum $\delta \ge 0$ with the property that we can select one point $p_s \in s$ for each segment $s \in \mathcal{S}$ and the corresponding graph $\mathbb{G}_\delta(\mathcal{P} \cup \{p_s \mid s \in \mathcal{S}\})$ is connected. It is known that the problem is NP-hard. We provide an algorithm to exactly compute an optimal solution in $\mathcal{O}(f(k)\, n \log n)$ time, for a computable function $f(\cdot)$. This implies that the problem is FPT when parameterized by $k$. The best previous algorithm uses $\mathcal{O}((k!)^k k^{k+1} \cdot n^{2k})$ time and computes the solution up to fixed precision.
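For a fixed choice of one point per segment, the optimal $\delta$ is simply the bottleneck (largest) edge of a Euclidean minimum spanning tree of the resulting point set; the hard part the paper addresses is optimizing over the continuum of choices on the segments. A sketch of the fixed-placement evaluation, using Prim's algorithm in $\mathcal{O}(n^2)$:

```python
import math

def min_delta_connected(points):
    """Smallest delta for which connecting every pair of points at distance
    <= delta yields a connected graph: the largest edge on a Euclidean
    minimum spanning tree, computed with Prim's algorithm."""
    n = len(points)
    if n <= 1:
        return 0.0
    in_tree = [False] * n
    dist = [math.inf] * n  # distance from each point to the growing tree
    dist[0] = 0.0
    bottleneck = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=dist.__getitem__)
        in_tree[u] = True
        bottleneck = max(bottleneck, dist[u])  # edge used to attach u
        for v in range(n):
            if not in_tree[v]:
                dist[v] = min(dist[v], math.dist(points[u], points[v]))
    return bottleneck
```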
Connectivity with Uncertainty Regions Given as Line Segments. Algorithmica 86(5): 1512–1544. Open access: https://link.springer.com/content/pdf/10.1007/s00453-023-01200-5.pdf
Pub Date: 2023-12-29. DOI: 10.1007/s00453-023-01196-y
Esther Galby, Dániel Marx, Philipp Schepper, Roohani Sharma, Prafullkumar Tale
The leafage of a chordal graph $G$ is the minimum integer $\ell$ such that $G$ can be realized as an intersection graph of subtrees of a tree with $\ell$ leaves. We consider structural parameterization by the leafage of classical domination and cut problems on chordal graphs. Fomin, Golovach, and Raymond [ESA 2018, Algorithmica 2020] proved, among other things, that Dominating Set on chordal graphs admits an algorithm running in time $2^{\mathcal{O}(\ell^2)} \cdot n^{\mathcal{O}(1)}$. We present a conceptually much simpler algorithm that runs in time $2^{\mathcal{O}(\ell)} \cdot n^{\mathcal{O}(1)}$. We extend our approach to obtain similar results for Connected Dominating Set and Steiner Tree. We then consider the two classical cut problems MultiCut with Undeletable Terminals and Multiway Cut with Undeletable Terminals. We prove that the former is W[1]-hard when parameterized by the leafage and complement this result by presenting a simple $n^{\mathcal{O}(\ell)}$-time algorithm. To our surprise, we find that Multiway Cut with Undeletable Terminals on chordal graphs can, in contrast, be solved in $n^{\mathcal{O}(1)}$ time.
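As a baseline for the problem being parameterized, a brute-force Dominating Set solver on an arbitrary graph (exponential in $n$, in contrast to the leafage-parameterized algorithms above) can be sketched as follows; names are illustrative:

```python
from itertools import combinations

def min_dominating_set(adj):
    """Smallest S such that every vertex is in S or has a neighbour in S,
    by exhaustive search over subsets in increasing size.
    adj: vertex -> set of neighbours."""
    verts = list(adj)
    for size in range(len(verts) + 1):
        for cand in combinations(verts, size):
            S = set(cand)
            if all(v in S or adj[v] & S for v in verts):
                return S  # first hit is a minimum dominating set
    return set(verts)
```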
Domination and Cut Problems on Chordal Graphs with Bounded Leafage. Algorithmica 86(5): 1428–1474. Open access: https://link.springer.com/content/pdf/10.1007/s00453-023-01196-y.pdf
Pub Date: 2023-12-23. DOI: 10.1007/s00453-023-01198-w
Mingyang Gong, Zhi-Zhong Chen, Kuniteru Hayashi
In offline scheduling models, jobs are given with their exact processing times. In their online counterparts, jobs arrive in sequence together with their processing times, and the scheduler makes irrevocable decisions on how to execute each of them upon arrival. We consider a semi-online variant with an equally rich application background, called scheduling with testing, where the exact processing time of a job is revealed only after a required testing operation is finished; otherwise the job has to be executed for a given, possibly over-estimated, length of time. For multiprocessor scheduling with testing to minimize the total job completion time, we present the first approximation algorithms with constant competitive ratios for various settings, where $\varphi = (1+\sqrt{5})/2 < 1.6181$ denotes the golden ratio and $m$ the number of machines: a $2\varphi$-competitive algorithm for the non-preemptive general testing case; a $(0.0382 + 2.7925(1 - \frac{1}{2m}))$-competitive randomized algorithm when $m \ge 37$, and a 2.7925-competitive one otherwise; a $(3.5 - \frac{3}{2m})$-competitive algorithm allowing job preemption when $m \ge 3$, and a 3-competitive one otherwise; and a $(\varphi + \frac{\varphi+1}{2}(1 - \frac{1}{m}))$-competitive algorithm for the non-preemptive uniform testing case when $m \ge 5$, and a $(\varphi+1)$-competitive one otherwise. Our results improve upon the three previous best approximation algorithms for single-machine scheduling with testing, respectively.
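To make the model concrete, here is a toy simulation, not any of the paper's algorithms: each job carries a given upper bound $u_j$ and a hidden true processing time $p_j$, testing costs one time unit and reveals $p_j$, and an untested job must run for its full upper bound. The threshold rule (test iff $u_j \ge \varphi$) and the shortest-known-time-first list scheduling are illustrative assumptions only.

```python
import heapq

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio appearing in the ratios above

def total_completion_time(jobs, m, threshold=PHI):
    """Toy simulation (illustrative rule, not the paper's algorithm).
    jobs: list of (u_j, p_j); a tested job occupies 1 + p_j machine time,
    an untested one occupies u_j. Jobs run shortest-known-time-first,
    each on the machine that becomes free earliest."""
    runs = sorted((1 + p) if u >= threshold else u for u, p in jobs)
    loads = [0.0] * m
    heapq.heapify(loads)
    total = 0.0
    for r in runs:
        done = heapq.heappop(loads) + r  # least-loaded machine runs the job
        total += done                    # its completion time
        heapq.heappush(loads, done)
    return total
```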
Approximation Algorithms for Multiprocessor Scheduling with Testing to Minimize the Total Job Completion Time. Algorithmica 86(5): 1400–1427.
Pub Date: 2023-12-21. DOI: 10.1007/s00453-023-01194-0
Julien Courtiel, Paul Dorbec, Romain Lecoq
In this paper, we consider the problem of finding a regression in a version control system (VCS), such as git. The set of versions is modelled by a directed acyclic graph (DAG) where vertices represent versions of the software, and arcs are the changes between different versions. We assume that somewhere in the DAG, a bug was introduced, which persists in all of its subsequent versions. It is possible to query a vertex to check whether the corresponding version carries the bug. Given a DAG and a bugged vertex, the Regression Search Problem consists in finding the first vertex containing the bug in a minimum number of queries in the worst-case scenario. This problem is known to be NP-complete. We study the algorithm used in git to address this problem, known as git bisect. We prove that in a general setting, git bisect can use an exponentially larger number of queries than an optimal algorithm. We also consider the restriction where all vertices have indegree at most 2 (i.e. where merges are made between at most two branches at a time in the VCS), and prove that in this case, git bisect is a $\frac{1}{\log_2(3/2)}$-approximation algorithm, and that this bound is tight. We also provide a better approximation algorithm for this case. Finally, we give an alternative proof of the NP-completeness of the Regression Search Problem, via a variation with bounded indegree.
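The query strategy can be sketched as follows: maintain the set of suspects (versions that could still be the first bad one) and repeatedly test the vertex whose ancestor set splits the suspects most evenly, in the spirit of git bisect. This is a simplified stand-in for git's actual heuristic; identifiers are illustrative.

```python
def first_bad(parents, tip, is_bad):
    """Regression search on a DAG. parents: vertex -> list of parent
    vertices; tip is a known-bad vertex; is_bad(v) performs one query.
    Returns the first bad vertex (the bug persists in all descendants)."""
    def ancestors(v):  # v together with everything reachable via parents
        seen, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for w in parents[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    suspects = ancestors(tip)
    while len(suspects) > 1:
        # query the vertex giving the most balanced worst-case split
        v = max(suspects,
                key=lambda u: min(len(ancestors(u) & suspects),
                                  len(suspects - ancestors(u))))
        if is_bad(v):
            suspects &= ancestors(v)   # first bad vertex is an ancestor of v
        else:
            suspects -= ancestors(v)   # v and all its ancestors are good
    return suspects.pop()
```

On a chain of $n$ versions this reduces to ordinary binary search with about $\log_2 n$ queries; the paper's analysis concerns how much worse the split-based rule can do on general DAGs.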
Theoretical Analysis of Git Bisect. Algorithmica 86(5): 1365–1399.
Pub Date: 2023-12-19. DOI: 10.1007/s00453-023-01195-z
Yuefang Lian, Donglei Du, Xiao Wang, Dachuan Xu, Yang Zhou
Stochastic optimization has experienced significant growth in recent decades, with the increasing prevalence of variance reduction techniques in stochastic optimization algorithms to enhance computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing-return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. First, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a $(1-e^{-1})\text{OPT}-\varepsilon$ approximation after $\mathcal{O}(\varepsilon^{-1})$ iterations and $\mathcal{O}(\varepsilon^{-2})$ stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank–Wolfe (SPIDER-FW) algorithm that guarantees a $\frac{1}{4}(1-\min_{x \in \mathcal{C}} \Vert x \Vert_\infty)\text{OPT}-\varepsilon$ approximation with $\mathcal{O}(\varepsilon^{-1})$ iterations and $\mathcal{O}(\varepsilon^{-2})$ stochastic gradient estimates. To address the practical challenge associated with a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to Hybrid SPIDER-FW and Hybrid SPIDER-CG algorithms, which achieve the same approximation guarantees as the SPIDER-FW and SPIDER-CG algorithms with only $\mathcal{O}(1)$ samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods.
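For orientation, here is the deterministic continuous-greedy skeleton that SPIDER-CG builds on, with exact gradients standing in for the variance-reduced stochastic estimator. The toy objective (probabilistic coverage $F(x) = 1 - \prod_i (1 - p_i x_i)$, a monotone DR-submodular function) and the cardinality-style polytope $\{x \in [0,1]^n : \sum_i x_i \le k\}$ are illustrative assumptions, not taken from the paper.

```python
def continuous_greedy(p, k, T=200):
    """Continuous greedy for the monotone DR-submodular toy objective
    F(x) = 1 - prod_i (1 - p_i * x_i) over {x in [0,1]^n : sum(x) <= k}.
    Takes T steps of size 1/T in the direction returned by the linear
    maximization oracle; exact gradients replace SPIDER's estimator."""
    n = len(p)
    x = [0.0] * n
    for _ in range(T):
        terms = [1.0 - p[i] * x[i] for i in range(n)]
        grad = []
        for i in range(n):  # dF/dx_i = p_i * prod_{j != i} (1 - p_j x_j)
            others = 1.0
            for j in range(n):
                if j != i:
                    others *= terms[j]
            grad.append(p[i] * others)
        # linear maximization oracle over the polytope: the k largest gradients
        top = sorted(range(n), key=grad.__getitem__, reverse=True)[:k]
        for i in top:
            x[i] += 1.0 / T
    return x

def coverage_value(p, x):
    prod = 1.0
    for pi, xi in zip(p, x):
        prod *= 1.0 - pi * xi
    return 1.0 - prod
```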
{"title":"Stochastic Variance Reduction for DR-Submodular Maximization","authors":"Yuefang Lian, Donglei Du, Xiao Wang, Dachuan Xu, Yang Zhou","doi":"10.1007/s00453-023-01195-z","DOIUrl":"10.1007/s00453-023-01195-z","url":null,"abstract":"<div><p>Stochastic optimization has experienced significant growth in recent decades, with the increasing prevalence of variance reduction techniques in stochastic optimization algorithms to enhance computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing return (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. Firstly, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm for the monotone case that guarantees a <span>((1-e^{-1})text {OPT}-varepsilon )</span> approximation after <span>(mathcal {O}(varepsilon ^{-1}))</span> iterations and <span>(mathcal {O}(varepsilon ^{-2}))</span> stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank–Wolfe (SPIDER-FW) algorithm that guarantees a <span>(frac{1}{4}(1-min _{xin mathcal {C}}{Vert xVert _{infty }})text {OPT}-varepsilon )</span> approximation with <span>(mathcal {O}(varepsilon ^{-1}))</span> iterations and <span>(mathcal {O}(varepsilon ^{-2}))</span> stochastic gradient estimates. To address the practical challenge associated with a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (Hybrid SPIDER-CG) algorithm, which achieves the same approximation guarantee as SPIDER-FW (SPIDER-CG) algorithm with only <span>(mathcal {O}(1))</span> samples per iteration. 
Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 5","pages":"1335 - 1364"},"PeriodicalIF":0.9,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-023-01195-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138821631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
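To make the monotone case concrete, here is a minimal sketch (not the paper's implementation) of a SPIDER-style continuous greedy loop: the variance-reduced estimator reuses the previous gradient estimate and corrects it with gradient differences evaluated on shared samples, while a linear maximization oracle supplies the update direction. The toy objective, noise model, constraint set, and all names (`spider_cg`, `lmo`, `noisy_grad`) are illustrative assumptions.

```python
import numpy as np

def lmo(g, budget):
    """Linear maximization oracle over {v in [0,1]^n : sum(v) <= budget}:
    put 1 on the `budget` coordinates with the largest positive entries."""
    v = np.zeros_like(g)
    top = np.argsort(-g)[:budget]
    v[top] = (g[top] > 0).astype(float)
    return v

def spider_cg(stoch_grad, n, budget, T=100, batch=4, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    # large initial batch for the first gradient estimate
    v = np.mean([stoch_grad(x, rng.standard_normal(n))
                 for _ in range(8 * batch)], axis=0)
    for _ in range(T):
        x_new = x + lmo(v, budget) / T  # continuous-greedy step of size 1/T
        # SPIDER recursion: correct v with gradient differences on *shared* samples
        zs = [rng.standard_normal(n) for _ in range(batch)]
        v = v + np.mean([stoch_grad(x_new, z) - stoch_grad(x, z) for z in zs],
                        axis=0)
        x = x_new
    return x

# toy monotone DR-submodular objective F(x) = sum(log(1 + x_i)); its gradient
# 1/(1+x) is observed through additive Gaussian noise z
noisy_grad = lambda x, z: 1.0 / (1.0 + x) + 0.05 * z
x = spider_cg(noisy_grad, n=10, budget=3, T=100)
```

Because each of the `T` steps adds at most `budget/T` total mass, the iterate stays inside the constraint set, which is the standard continuous-greedy invariant.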
Pub Date : 2023-12-15 DOI: 10.1007/s00453-023-01197-x
Mingyu Xiao, Sen Huang, Xiaoyu Chen
The maximum independent set problem is one of the most important problems in graph algorithms and has been extensively studied in the line of research on the worst-case analysis of exact algorithms for NP-hard problems. In the weighted version, each vertex in the graph is associated with a weight and the goal is to find an independent set of maximum total vertex weight. Many reduction rules for the unweighted version have been developed that can be used to effectively reduce the input instance without losing optimality. However, reduction rules for the weighted version do not seem to have been systematically studied. In this paper, we design a series of reduction rules for the maximum weighted independent set problem and a fast exact algorithm based on them. Using the measure-and-conquer technique to analyze the algorithm, we show that it runs in \(O^*(1.1443^{(0.624x-0.872)n'})\) time and polynomial space, where \(n'\) is the number of vertices of degree at least 2 and \(x\) is the average degree of these vertices in the graph. When the average degree is small, our running-time bound beats previous results.
{"title":"Maximum Weighted Independent Set: Effective Reductions and Fast Algorithms on Sparse Graphs","authors":"Mingyu Xiao, Sen Huang, Xiaoyu Chen","doi":"10.1007/s00453-023-01197-x","DOIUrl":"10.1007/s00453-023-01197-x","url":null,"abstract":"<div><p>The maximum independent set problem is one of the most important problems in graph algorithms and has been extensively studied in the line of research on the worst-case analysis of exact algorithms for NP-hard problems. In the weighted version, each vertex in the graph is associated with a weight and we are going to find an independent set of maximum total vertex weight. Many reduction rules for the unweighted version have been developed that can be used to effectively reduce the input instance without losing optimality. However, it seems that reduction rules for the weighted version have not been systematically studied. In this paper, we design a series of reduction rules for the maximum weighted independent set problem and also design a fast exact algorithm based on the reduction rules. By using the measure-and-conquer technique to analyze the algorithm, we show that the algorithm runs in <span>(O^*(1.1443^{(0.624x-0.872)n'}))</span> time and polynomial space, where <span>(n')</span> is the number of vertices of degree at least 2 and <i>x</i> is the average degree of these vertices in the graph. When the average degree is small, our running-time bound beats previous results. 
For example, on graphs with the minimum degree at least 2 and average degree at most 3.68, our running time bound is better than that of previous polynomial-space algorithms for graphs with maximum degree at most 4.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 5","pages":"1293 - 1334"},"PeriodicalIF":0.9,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138682012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
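As an illustration of the kind of weight-aware reductions involved, the sketch below implements two standard rules on a tiny dictionary-of-sets graph: an isolated vertex is always taken, and a degree-1 vertex either dominates its only neighbor (when at least as heavy) or is folded into it. These particular rules and all names (`reduce_low_degree`, `mwis_brute`) are simplified assumptions for illustration, not the paper's rule set.

```python
from itertools import combinations

def mwis_brute(adj, w):
    """Exact max-weight independent set by enumeration (tiny graphs only)."""
    verts = list(w)
    best = 0
    for r in range(len(verts) + 1):
        for S in combinations(verts, r):
            S = set(S)
            if all(adj[u].isdisjoint(S) for u in S):  # no edge inside S
                best = max(best, sum(w[u] for u in S))
    return best

def reduce_low_degree(adj, w):
    """Degree-0/degree-1 reductions for weighted independent set.
    Returns (adj', w', offset) with offset + MWIS(adj', w') == MWIS(adj, w)."""
    adj = {u: set(ns) for u, ns in adj.items()}
    w = dict(w)
    offset = 0
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if not adj[v]:                    # isolated vertex: always take it
                offset += w.pop(v)
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:
                (u,) = adj[v]
                if w[v] >= w[u]:              # v dominates its only neighbor u
                    offset += w[v]
                    for y in adj[u] - {v}:
                        adj[y].discard(u)
                    del adj[u], adj[v]
                    del w[u], w[v]
                else:                         # fold: v is taken exactly when u is not
                    offset += w[v]
                    w[u] -= w[v]
                    adj[u].discard(v)
                    del adj[v], w[v]
                changed = True
    return adj, w, offset

# a weighted path 0-1-2-3; the optimum takes {0, 3} with total weight 8
adj0 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
w0 = {0: 3, 1: 1, 2: 2, 3: 5}
adj_r, w_r, off = reduce_low_degree(adj0, w0)
```

On this path the rules alone solve the instance (the graph reduces to nothing and the offset equals the optimum); in general the residual graph is what the branching algorithm would then attack.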
Pub Date : 2023-12-14 DOI: 10.1007/s00453-023-01188-y
Miriam Münch, Ignaz Rutter, Peter Stumpf
A natural generalization of the recognition problem for a geometric graph class is the problem of extending a representation of a subgraph to a representation of the whole graph. A related problem is to find representations for multiple input graphs that coincide on subgraphs shared by the input graphs. A common restriction is the sunflower case where the shared graph is the same for each pair of input graphs. These problems translate to the setting of comparability graphs where the representations correspond to transitive orientations of their edges. We use modular decompositions to improve the runtime for the orientation extension problem and the sunflower orientation problem to linear time. We apply these results to improve the runtime for the partial representation problem and the sunflower case of the simultaneous representation problem for permutation graphs to linear time. We also give the first efficient algorithms for these problems on circular permutation graphs.
{"title":"Partial and Simultaneous Transitive Orientations via Modular Decompositions","authors":"Miriam Münch, Ignaz Rutter, Peter Stumpf","doi":"10.1007/s00453-023-01188-y","DOIUrl":"10.1007/s00453-023-01188-y","url":null,"abstract":"<div><p>A natural generalization of the recognition problem for a geometric graph class is the problem of extending a representation of a subgraph to a representation of the whole graph. A related problem is to find representations for multiple input graphs that coincide on subgraphs shared by the input graphs. A common restriction is the sunflower case where the shared graph is the same for each pair of input graphs. These problems translate to the setting of comparability graphs where the representations correspond to transitive orientations of their edges. We use modular decompositions to improve the runtime for the orientation extension problem and the sunflower orientation problem to linear time. We apply these results to improve the runtime for the partial representation problem and the sunflower case of the simultaneous representation problem for permutation graphs to linear time. We also give the first efficient algorithms for these problems on circular permutation graphs.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"86 4","pages":"1263 - 1292"},"PeriodicalIF":0.9,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-023-01188-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138629596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
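A small sketch of the link between permutation graphs and transitive orientations mentioned in the abstract: the edges of a permutation graph are the inversions of a permutation, and orienting each edge from the smaller index to the larger yields a transitive orientation. The function names are hypothetical; this illustrates the representation only, not the paper's linear-time algorithms.

```python
def inversion_orientation(pi):
    """Permutation graph of pi: vertices 0..n-1, edge {i, j} (i < j) iff
    pi[i] > pi[j] (an inversion); orient every edge from i to j."""
    n = len(pi)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j]}

def is_transitive(arcs):
    """Check that a->b and b->c together imply a->c."""
    return all((a, c) in arcs
               for (a, b) in arcs for (b2, c) in arcs if b2 == b)

arcs = inversion_orientation([3, 1, 4, 0, 2])
```

Transitivity holds because `i < j < k` with `pi[i] > pi[j] > pi[k]` forces the inversion `(i, k)` as well; a partial-representation or simultaneous variant additionally has to respect prescribed arcs, which is where the modular-decomposition machinery comes in.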