Analysis of the (1+1) EA on LeadingOnes with Constraints
Pub Date : 2025-02-19 DOI: 10.1007/s00453-025-01298-9 (Algorithmica 87(5), 661–689)
Tobias Friedrich, Timo Kötzing, Aneta Neumann, Frank Neumann, Aishwarya Radhakrishnan
Understanding how evolutionary algorithms perform on constrained problems has gained increasing attention in recent years. In this paper, we study how evolutionary algorithms optimize constrained versions of the classical LeadingOnes problem. We first provide a runtime analysis for the classical (1+1) EA on the LeadingOnes problem with a deterministic cardinality constraint, giving \(\Theta(n(n-B)\log(B) + nB)\) as the tight bound. Our results show that the behaviour of the algorithm is highly dependent on the bound B of the uniform constraint. Afterwards, we consider the problem in the context of stochastic constraints and provide insights, using theoretical and experimental studies, on how the \((\mu+1)\) EA is able to deal with these constraints in a sampling-based setting.
{"title":"Analysis of the (1+1) EA on LeadingOnes with Constraints","authors":"Tobias Friedrich, Timo Kötzing, Aneta Neumann, Frank Neumann, Aishwarya Radhakrishnan","doi":"10.1007/s00453-025-01298-9","DOIUrl":"10.1007/s00453-025-01298-9","url":null,"abstract":"<div><p>Understanding how evolutionary algorithms perform on constrained problems has gained increasing attention in recent years. In this paper, we study how evolutionary algorithms optimize constrained versions of the classical LeadingOnes problem. We first provide a run time analysis for the classical (1+1) EA on the LeadingOnes problem with a deterministic cardinality constraint, giving <span>(Theta (n (n-B)log (B) + nB))</span> as the tight bound. Our results show that the behaviour of the algorithm is highly dependent on the constraint bound of the uniform constraint. Afterwards, we consider the problem in the context of stochastic constraints and provide insights using theoretical and experimental studies on how the (<span>(mu )</span>+1) EA is able to deal with these constraints in a sampling-based setting.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 5","pages":"661 - 689"},"PeriodicalIF":0.9,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-025-01298-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143919145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tight Bounds for Chordal/Interval Vertex Deletion Parameterized by Treewidth
Pub Date : 2025-01-29 DOI: 10.1007/s00453-025-01293-0 (Algorithmica 87(5), 621–660)
Michał Włodarczyk
In Chordal/Interval Vertex Deletion we ask how many vertices one needs to remove from a graph to make it chordal (respectively: interval). We study these problems under the parameterization by treewidth \(\textbf{tw}\) of the input graph G. On the one hand, we present an algorithm for Chordal Vertex Deletion with running time \(2^{\mathcal{O}(\textbf{tw})} \cdot |V(G)|\), improving upon the running time \(2^{\mathcal{O}(\textbf{tw}^2)} \cdot |V(G)|^{\mathcal{O}(1)}\) by Jansen, de Kroon, and Włodarczyk (STOC’21). When a tree decomposition of width \(\textbf{tw}\) is given, the base of the exponent equals \(2^{\omega-1} \cdot 3 + 1\). Our algorithm is based on a novel link between chordal graphs and graphic matroids, which allows us to employ the framework of representative families. On the other hand, we prove that Interval Vertex Deletion cannot be solved in time \(2^{o(\textbf{tw} \log \textbf{tw})} \cdot |V(G)|^{\mathcal{O}(1)}\) assuming the Exponential Time Hypothesis.
{"title":"Tight Bounds for Chordal/Interval Vertex Deletion Parameterized by Treewidth","authors":"Michał Włodarczyk","doi":"10.1007/s00453-025-01293-0","DOIUrl":"10.1007/s00453-025-01293-0","url":null,"abstract":"<div><p>In Chordal/Interval Vertex Deletion we ask how many vertices one needs to remove from a graph to make it chordal (respectively: interval). We study these problems under the parameterization by treewidth <span>(textbf{tw})</span> of the input graph <i>G</i>. On the one hand, we present an algorithm for Chordal Vertex Deletion with running time <span>(2^{mathcal {O}(textbf{tw})} cdot |V(G)|)</span>, improving upon the running time <span>(2^{mathcal {O}(textbf{tw}^2)} cdot |V(G)|^{mathcal {O}(1)})</span> by Jansen, de Kroon, and Włodarczyk (STOC’21). When a tree decomposition of width <span>(textbf{tw})</span> is given, then the base of the exponent equals <span>(2^{omega -1}cdot 3 + 1)</span>. Our algorithm is based on a novel link between chordal graphs and graphic matroids, which allows us to employ the framework of representative families. On the other hand, we prove that Interval Vertex Deletion cannot be solved in time <span>(2^{o(textbf{tw}log textbf{tw})} cdot |V(G)|^{mathcal {O}(1)})</span> assuming the Exponential Time Hypothesis.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 5","pages":"621 - 660"},"PeriodicalIF":0.9,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143919082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reforming an Envy-Free Matching
Pub Date : 2025-01-27 DOI: 10.1007/s00453-025-01294-z (Algorithmica 87(4), 594–620)
Takehiro Ito, Yuni Iwamasa, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Yuta Nozaki, Yoshio Okamoto, Kenta Ozeki
We consider the problem of reforming an envy-free matching when each agent has a strict preference over items and is assigned a single item. Given an envy-free matching, we consider an operation to exchange the item of an agent with an unassigned item preferred by the agent that results in another envy-free matching. We repeat this operation as long as we can. We prove that the resulting envy-free matching is uniquely determined up to the choice of an initial envy-free matching, and can be found in polynomial time. We call the resulting matching a reformist envy-free matching, and study a shortest sequence to obtain the reformist envy-free matching from an initial envy-free matching. We prove that a shortest sequence is computationally hard to obtain. We also give polynomial-time algorithms when each agent accepts at most three items or each item is accepted by at most two agents. Inapproximability and fixed-parameter (in)tractability are also discussed.
{"title":"Reforming an Envy-Free Matching","authors":"Takehiro Ito, Yuni Iwamasa, Naonori Kakimura, Naoyuki Kamiyama, Yusuke Kobayashi, Yuta Nozaki, Yoshio Okamoto, Kenta Ozeki","doi":"10.1007/s00453-025-01294-z","DOIUrl":"10.1007/s00453-025-01294-z","url":null,"abstract":"<div><p>We consider the problem of reforming an envy-free matching when each agent has a strict preference over items and is assigned a single item. Given an envy-free matching, we consider an operation to exchange the item of an agent with an unassigned item preferred by the agent that results in another envy-free matching. We repeat this operation as long as we can. We prove that the resulting envy-free matching is uniquely determined up to the choice of an initial envy-free matching, and can be found in polynomial time. We call the resulting matching a reformist envy-free matching, and study a shortest sequence to obtain the reformist envy-free matching from an initial envy-free matching. We prove that a shortest sequence is computationally hard to obtain. We also give polynomial-time algorithms when each agent accepts at most three items or each item is accepted by at most two agents. Inapproximability and fixed-parameter (in)tractability are also discussed.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 4","pages":"594 - 620"},"PeriodicalIF":0.9,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143668425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guarding Polyominoes Under k-Hop Visibility
Pub Date : 2025-01-11 DOI: 10.1007/s00453-024-01292-7 (Algorithmica 87(4), 572–593)
Omrit Filtser, Erik Krohn, Bengt J. Nilsson, Christian Rieck, Christiane Schmidt
We study the Art Gallery Problem under k-hop visibility in polyominoes. In this visibility model, two unit squares of a polyomino can see each other if and only if the shortest path between the respective vertices in the dual graph of the polyomino has length at most k. In this paper, we show that the VC dimension of this problem is 3 in simple polyominoes, and 4 in polyominoes with holes. Furthermore, we provide a reduction from Planar Monotone 3Sat, thereby showing that the problem is NP-complete even in thin polyominoes (i.e., polyominoes that do not contain a \(2 \times 2\) block of cells). Complementarily, we present a linear-time 4-approximation algorithm for simple 2-thin polyominoes (which do not contain a \(3 \times 3\) block of cells) for all \(k \in \mathbb{N}\).
{"title":"Guarding Polyominoes Under k-Hop Visibility","authors":"Omrit Filtser, Erik Krohn, Bengt J. Nilsson, Christian Rieck, Christiane Schmidt","doi":"10.1007/s00453-024-01292-7","DOIUrl":"10.1007/s00453-024-01292-7","url":null,"abstract":"<div><p>We study the <span>Art Gallery Problem</span> under <i>k</i>-hop visibility in polyominoes. In this visibility model, two unit squares of a polyomino can see each other if and only if the shortest path between the respective vertices in the dual graph of the polyomino has length at most <i>k</i>. In this paper, we show that the VC dimension of this problem is 3 in simple polyominoes, and 4 in polyominoes with holes. Furthermore, we provide a reduction from <span>Planar Monotone 3Sat</span>, thereby showing that the problem is <span>NP</span>-complete even in thin polyominoes (i.e., polyominoes that do not a contain a <span>(2times 2)</span> block of cells). Complementarily, we present a linear-time 4-approximation algorithm for simple 2-thin polyominoes (which do not contain a <span>(3times 3)</span> block of cells) for all <span>(kin {mathbb {N}})</span>.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 4","pages":"572 - 593"},"PeriodicalIF":0.9,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01292-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143668301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fixed Parameter Multi-Objective Evolutionary Algorithms for the W-Separator Problem
Pub Date : 2025-01-08 DOI: 10.1007/s00453-024-01290-9 (Algorithmica 87(4), 537–571)
Samuel Baguley, Tobias Friedrich, Aneta Neumann, Frank Neumann, Marcus Pappik, Ziena Zeif
Parameterized analysis provides powerful mechanisms for obtaining fine-grained insights into different types of algorithms. In this work, we combine this field with evolutionary algorithms and provide parameterized complexity analysis of evolutionary multi-objective algorithms for the W-separator problem, which is a natural generalization of the vertex cover problem. The goal is to remove the minimum number of vertices such that each connected component in the resulting graph has at most W vertices. We provide different multi-objective formulations involving two or three objectives that provably lead to fixed-parameter evolutionary algorithms with respect to the value of an optimal solution OPT and W. Of particular interest are kernelizations and the reducible structures used for them. We show that in expectation the algorithms make incremental progress in finding such structures and beyond. The current best known kernelization of the W-separator problem uses linear programming methods and requires non-trivial post-processing steps to extract the reducible structures. We provide additional structural features to show that evolutionary algorithms with appropriate objectives are also capable of extracting them. Our results show that evolutionary algorithms with different objectives guide the search and admit fixed-parameter runtimes to solve or approximate (even arbitrarily closely) the W-separator problem.
{"title":"Fixed Parameter Multi-Objective Evolutionary Algorithms for the W-Separator Problem","authors":"Samuel Baguley, Tobias Friedrich, Aneta Neumann, Frank Neumann, Marcus Pappik, Ziena Zeif","doi":"10.1007/s00453-024-01290-9","DOIUrl":"10.1007/s00453-024-01290-9","url":null,"abstract":"<div><p>Parameterized analysis provides powerful mechanisms for obtaining fine-grained insights into different types of algorithms. In this work, we combine this field with evolutionary algorithms and provide parameterized complexity analysis of evolutionary multi-objective algorithms for the <i>W</i>-separator problem, which is a natural generalization of the vertex cover problem. The goal is to remove the minimum number of vertices such that each connected component in the resulting graph has at most <i>W</i> vertices. We provide different multi-objective formulations involving two or three objectives that provably lead to fixed-parameter evolutionary algorithms with respect to the value of an optimal solution <i>OPT</i> and <i>W</i>. Of particular interest are kernelizations and the reducible structures used for them. We show that in expectation the algorithms make incremental progress in finding such structures and beyond. The current best known kernelization of the <i>W</i>-separator uses linear programming methods and requires non-trivial post-processing steps to extract the reducible structures. We provide additional structural features to show that evolutionary algorithms with appropriate objectives are also capable of extracting them. Our results show that evolutionary algorithms with different objectives guide the search and admit fixed parameterized runtimes to solve or approximate (even arbitrarily close) the <i>W</i>-separator problem.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 4","pages":"537 - 571"},"PeriodicalIF":0.9,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01290-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143668295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Complexity Framework for Forbidden Subgraphs I: The Framework
Pub Date : 2025-01-05 DOI: 10.1007/s00453-024-01289-2 (Algorithmica 87(3), 429–464)
Matthew Johnson, Barnaby Martin, Jelle J. Oostveen, Sukanya Pandey, Daniël Paulusma, Siani Smith, Erik Jan van Leeuwen
For a set of graphs \(\mathcal{H}\), a graph G is \(\mathcal{H}\)-subgraph-free if G does not contain any graph from \(\mathcal{H}\) as a subgraph. We propose general and easy-to-state conditions on graph problems that explain a large set of results for \(\mathcal{H}\)-subgraph-free graphs. Namely, a graph problem must be efficiently solvable on graphs of bounded treewidth, computationally hard on subcubic graphs, and computational hardness must be preserved under edge subdivision of subcubic graphs. Our meta-classification says that if a graph problem \(\Pi\) satisfies all three conditions, then for every finite set \(\mathcal{H}\), it is “efficiently solvable” on \(\mathcal{H}\)-subgraph-free graphs if \(\mathcal{H}\) contains a disjoint union of one or more paths and subdivided claws, and \(\Pi\) is “computationally hard” otherwise. We apply our meta-classification on many well-known partitioning, covering and packing problems, network design problems and width parameter problems to obtain a dichotomy between polynomial-time solvability and NP-completeness. For distance-metric problems, we obtain a dichotomy between almost-linear-time solvability and having no subquadratic-time algorithm (conditioned on some hardness hypotheses). Apart from capturing a large number of explicitly and implicitly known results in the literature, we also prove a number of new results. Moreover, we perform an extensive comparison between the subgraph framework and the existing frameworks for the minor and topological minor relations, and pose several new open problems and research directions.
{"title":"Complexity Framework for Forbidden Subgraphs I: The Framework","authors":"Matthew Johnson, Barnaby Martin, Jelle J. Oostveen, Sukanya Pandey, Daniël Paulusma, Siani Smith, Erik Jan van Leeuwen","doi":"10.1007/s00453-024-01289-2","DOIUrl":"10.1007/s00453-024-01289-2","url":null,"abstract":"<div><p>For a set of graphs <span>({mathcal {H}})</span>, a graph <i>G</i> is <span>({mathcal {H}})</span>-subgraph-free if <i>G</i> does not contain any graph from <span>({{{mathcal {H}}}})</span> as a subgraph. We propose general and easy-to-state conditions on graph problems that explain a large set of results for <span>({mathcal {H}})</span>-subgraph-free graphs. Namely, a graph problem must be efficiently solvable on graphs of bounded treewidth, computationally hard on subcubic graphs, and computational hardness must be preserved under edge subdivision of subcubic graphs. Our meta-classification says that if a graph problem <span>(Pi )</span> satisfies all three conditions, then for every finite set <span>({{{mathcal {H}}}})</span>, it is “efficiently solvable” on <span>({{{mathcal {H}}}})</span>-subgraph-free graphs if <span>({mathcal {H}})</span> contains a disjoint union of one or more paths and subdivided claws, and <span>(Pi )</span> is “computationally hard” otherwise. We apply our <i>meta-classification</i> on many well-known partitioning, covering and packing problems, network design problems and width parameter problems to obtain a dichotomy between polynomial-time solvability and <span>NP</span>-completeness. For distance-metric problems, we obtain a dichotomy between almost-linear-time solvability and having no subquadratic-time algorithm (conditioned on some hardness hypotheses). Apart from capturing a large number of explicitly and implicitly known results in the literature, we also prove a number of new results. Moreover, we perform an extensive comparison between the subgraph framework and the existing frameworks for the minor and topological minor relations, and pose several new open problems and research directions.\u0000</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 3","pages":"429 - 464"},"PeriodicalIF":0.9,"publicationDate":"2025-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01289-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143446414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FREIGHT: Fast Streaming Hypergraph Partitioning
Pub Date : 2025-01-03 DOI: 10.1007/s00453-024-01291-8 (Algorithmica 87(3), 405–428)
Kamal Eyubov, Marcelo Fonseca Faraj, Christian Schulz
Partitioning the vertices of a (hyper)graph into k roughly balanced blocks such that few (hyper)edges run between blocks is a key problem for large-scale distributed processing. A current trend for partitioning huge (hyper)graphs with low computational resources is streaming algorithms. In this work, we propose FREIGHT: a Fast stREamInG Hypergraph parTitioning algorithm, an adaptation of the widely known graph-based algorithm Fennel. By using an efficient data structure, we make the overall running time of FREIGHT linearly dependent on the pin count of the hypergraph and the memory consumption linearly dependent on the numbers of nets and blocks. The results of our extensive experimentation showcase the promising performance of FREIGHT as a highly efficient and effective solution for streaming hypergraph partitioning. Our algorithm demonstrates competitive running time compared to the Hashing algorithm, with a geometric mean runtime within a factor of four of it. Significantly, our findings highlight the superiority of FREIGHT over all existing (buffered) streaming algorithms, and even over the in-memory algorithm HYPE, with respect to both cut-net and connectivity measures. This indicates that our proposed algorithm is a promising hypergraph partitioning tool for the challenge posed by large-scale and dynamic data processing.
{"title":"FREIGHT: Fast Streaming Hypergraph Partitioning","authors":"Kamal Eyubov, Marcelo Fonseca Faraj, Christian Schulz","doi":"10.1007/s00453-024-01291-8","DOIUrl":"10.1007/s00453-024-01291-8","url":null,"abstract":"<div><p>Partitioning the vertices of a (hyper)graph into <i>k</i> roughly balanced blocks such that few (hyper)edges run between blocks is a key problem for large-scale distributed processing. A current trend for partitioning huge (hyper)graphs using low computational resources are streaming algorithms. In this work, we propose FREIGHT: a Fast stREamInG Hypergraph parTitioning algorithm which is an adaptation of the widely-known graph-based algorithm Fennel. By using an efficient data structure, we make the overall running of FREIGHT linearly dependent on the pin-count of the hypergraph and the memory consumption linearly dependent on the numbers of nets and blocks. The results of our extensive experimentation showcase the promising performance of FREIGHT as a highly efficient and effective solution for streaming hypergraph partitioning. Our algorithm demonstrates competitive running time with the Hashing algorithm, with a geometric mean runtime within a factor of four compared to the Hashing algorithm. Significantly, our findings highlight the superiority of FREIGHT over all existing (buffered) streaming algorithms and even the in-memory algorithm HYPE, with respect to both cut-net and connectivity measures. This indicates that our proposed algorithm is a promising hypergraph partitioning tool to tackle the challenge posed by large-scale and dynamic data processing.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 3","pages":"405 - 428"},"PeriodicalIF":0.9,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01291-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143446391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Algorithm for Power Dominating Set
Pub Date : 2024-12-23 DOI: 10.1007/s00453-024-01283-8 (Algorithmica 87(3), 344–376)
Thomas Bläsius, Max Göttlicher
The problem Power Dominating Set (PDS) is motivated by the placement of phasor measurement units to monitor electrical networks. It asks for a minimum set of vertices in a graph that observes all remaining vertices by exhaustively applying two observation rules. Our contribution is twofold. First, we determine the parameterized complexity of PDS by proving it is W[P]-complete when parameterized with respect to the solution size. We note that it was only known to be W[2]-hard before. Our second and main contribution is a new algorithm for PDS that efficiently solves practical instances. Our algorithm consists of two complementary parts. The first is a set of reduction rules for PDS that can also be used in conjunction with previously existing algorithms. The second is an algorithm for solving the remaining kernel based on the implicit hitting set approach. Our evaluation on a set of power grid instances from the literature shows that our solver outperforms previous state-of-the-art solvers for PDS by more than one order of magnitude on average. Furthermore, our algorithm can solve previously unsolved instances of continental scale within a few minutes.
{"title":"An Efficient Algorithm for Power Dominating Set","authors":"Thomas Bläsius, Max Göttlicher","doi":"10.1007/s00453-024-01283-8","DOIUrl":"10.1007/s00453-024-01283-8","url":null,"abstract":"<div><p>The problem <span>Power Dominating Set</span> (<span>PDS</span>) is motivated by the placement of phasor measurement units to monitor electrical networks. It asks for a minimum set of vertices in a graph that observes all remaining vertices by exhaustively applying two observation rules. Our contribution is twofold. First, we determine the parameterized complexity of <span>PDS</span> by proving it is <i>W</i>[<i>P</i>]-complete when parameterized with respect to the solution size. We note that it was only known to be <i>W</i>[2]-hard before. Our second and main contribution is a new algorithm for <span>PDS</span> that efficiently solves practical instances. Our algorithm consists of two complementary parts. The first is a set of reduction rules for <span>PDS</span> that can also be used in conjunction with previously existing algorithms. The second is an algorithm for solving the remaining kernel based on the implicit hitting set approach. Our evaluation on a set of power grid instances from the literature shows that our solver outperforms previous state-of-the-art solvers for <span>PDS</span> by more than one order of magnitude on average. Furthermore, our algorithm can solve previously unsolved instances of continental scale within a few minutes.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 3","pages":"344 - 376"},"PeriodicalIF":0.9,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00453-024-01283-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143446557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shared Versus Private Randomness in Distributed Interactive Proofs
Pub Date : 2024-12-23 DOI: 10.1007/s00453-024-01288-3 (Algorithmica 87(3), 377–404)
Pedro Montealegre, Diego Ramírez-Romero, Ivan Rapaport
In distributed interactive proofs, the nodes of a graph G interact with a powerful but untrustable prover who tries to convince them, in a small number of rounds and through short messages, that G satisfies some property. This series of rounds is followed by a phase of distributed verification, which may be either deterministic or randomized, where nodes exchange messages with their neighbors. The nature of this last verification round defines the two types of interactive protocols. We say that the protocol is of Arthur–Merlin type if the verification round is deterministic. We say that the protocol is of Merlin–Arthur type if, in the verification round, the nodes are allowed to use a fresh set of random bits. In the original model introduced by Kol, Oshman, and Saxena [PODC 2018], the randomness was private in the sense that each node had only access to an individual source of random coins. Crescenzi, Fraigniaud, and Paz [DISC 2019] initiated the study of the impact of shared randomness (the situation where the coin tosses are visible to all nodes) in the distributed interactive model. In this work, we continue that research line by showing that the impact of the two forms of randomness is very different depending on whether we are considering Arthur–Merlin protocols or Merlin–Arthur protocols. While private randomness gives more power to the first type of protocols, shared randomness provides more power to the second. We also show that the gap in certificate size between distributed interactive proofs and distributed verification protocols without any randomness is at most exponential.
{"title":"Shared Versus Private Randomness in Distributed Interactive Proofs","authors":"Pedro Montealegre, Diego Ramírez-Romero, Ivan Rapaport","doi":"10.1007/s00453-024-01288-3","DOIUrl":"10.1007/s00453-024-01288-3","url":null,"abstract":"<div><p>In distributed interactive proofs, the nodes of a graph G interact with a powerful but untrustable prover who tries to convince them, in a small number of rounds and through short messages, that G satisfies some property. This series of rounds is followed by a phase of distributed verification, which may be either deterministic or randomized, where nodes exchange messages with their neighbors. The nature of this last verification round defines the two types of interactive protocols. We say that the protocol is of Arthur–Merlin type if the verification round is deterministic. We say that the protocol is of Merlin–Arthur type if, in the verification round, the nodes are allowed to use a fresh set of random bits. In the original model introduced by Kol, Oshman, and Saxena [PODC 2018], the randomness was private in the sense that each node had only access to an individual source of random coins. Crescenzi, Fraigniaud, and Paz [DISC 2019] initiated the study of the impact of shared randomness (the situation where the coin tosses are visible to all nodes) in the distributed interactive model. In this work, we continue that research line by showing that the impact of the two forms of randomness is very different depending on whether we are considering Arthur–Merlin protocols or Merlin–Arthur protocols. While private randomness gives more power to the first type of protocols, shared randomness provides more power to the second. We also show that there exists at most an exponential gap between the certificate size in distributed interactive proofs with respect to distributed verification protocols without any randomness.</p></div>","PeriodicalId":50824,"journal":{"name":"Algorithmica","volume":"87 3","pages":"377 - 404"},"PeriodicalIF":0.9,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143446558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}