Carlos Alegria, Susanna Caroppo, Giordano Da Lozzo, Marco D'Elia, Giuseppe Di Battista, Fabrizio Frati, Fabrizio Grosso, Maurizio Patrignani
We study upward pointset embeddings (UPSEs) of planar $st$-graphs. Let $G$ be a planar $st$-graph and let $S \subset \mathbb{R}^2$ be a pointset with $|S| = |V(G)|$. An UPSE of $G$ on $S$ is an upward planar straight-line drawing of $G$ that maps the vertices of $G$ to the points of $S$. We consider both the problem of testing the existence of an UPSE of $G$ on $S$ (UPSE Testing) and the problem of enumerating all UPSEs of $G$ on $S$. We prove that UPSE Testing is NP-complete even for $st$-graphs that consist of a set of directed $st$-paths sharing only $s$ and $t$. On the other hand, for $n$-vertex planar $st$-graphs whose maximum $st$-cutset has size $k$, we prove that UPSE Testing can be solved in $O(n^{4k})$ time with $O(n^{3k})$ space, and that all UPSEs of $G$ on $S$ can be enumerated with $O(n)$ worst-case delay, using $O(k n^{4k} \log n)$ space, after $O(k n^{4k} \log n)$ set-up time. Moreover, for an $n$-vertex $st$-graph whose underlying graph is a cycle, we provide a necessary and sufficient condition for the existence of an UPSE on a given pointset, which can be tested in $O(n \log n)$ time. Related to this result, we give an algorithm that, for a set $S$ of $n$ points, enumerates all the non-crossing monotone Hamiltonian cycles on $S$ with $O(n)$ worst-case delay, using $O(n^2)$ space, after $O(n^2)$ set-up time.
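To make the definition concrete, here is a minimal brute-force sketch of UPSE Testing: it tries every vertex-to-point bijection and checks that every edge strictly increases in $y$ (upward) and that no two independent edges cross (planar). It assumes points in general position, runs in exponential time unlike the algorithms above, and all names are illustrative.

```python
from itertools import permutations

def segments_cross(p, q, r, s):
    # proper crossing test for segments pq and rs via orientation signs
    # (assumes general position: no three collinear points)
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    d1, d2 = orient(p, q, r), orient(p, q, s)
    d3, d4 = orient(r, s, p), orient(r, s, q)
    return d1 != d2 and d3 != d4 and 0 not in (d1, d2, d3, d4)

def has_upse(vertices, edges, points):
    # try every bijection from vertices to points
    for perm in permutations(points):
        pos = dict(zip(vertices, perm))
        # upward condition: every directed edge strictly increases in y
        if any(pos[u][1] >= pos[v][1] for u, v in edges):
            continue
        # planarity condition: no two independent edges cross
        ok = True
        for i in range(len(edges)):
            for j in range(i + 1, len(edges)):
                a, b = edges[i], edges[j]
                if set(a) & set(b):  # edges sharing an endpoint cannot properly cross
                    continue
                if segments_cross(pos[a[0]], pos[a[1]], pos[b[0]], pos[b[1]]):
                    ok = False
        if ok:
            return True
    return False
```

For instance, two $st$-paths sharing only $s$ and $t$ admit an UPSE on a diamond-shaped pointset, while a single 4-vertex path has none on a pointset with only three distinct $y$-values.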
Upward Pointset Embeddings of Planar st-Graphs. arXiv:2408.17369, arXiv - CS - Discrete Mathematics, 2024-08-30.
We consider the \emph{correlated knapsack orienteering} (CSKO) problem: we are given a travel budget $B$, a processing-time budget $W$, and a finite metric space $(V,d)$ with root $\rho \in V$, where each vertex is associated with a job with possibly correlated random size and random reward that become known only when the job completes. Random variables are independent across different vertices. The goal is to compute a $\rho$-rooted path of length at most $B$, in a possibly adaptive fashion, that maximizes the reward collected from jobs that are processed by time $W$. To our knowledge, CSKO has not been considered before, though prior work has considered the uncorrelated problem, \emph{stochastic knapsack orienteering}, and \emph{correlated orienteering}, which features only one budget constraint on the \emph{sum} of travel-time and processing-times. We show that the \emph{adaptivity gap of CSKO is not a constant, and is at least $\Omega\bigl(\max\{\sqrt{\log B},\sqrt{\log\log W}\}\bigr)$}. Complementing this, we devise \emph{non-adaptive} algorithms that obtain: (a) an $O(\log\log W)$-approximation in quasi-polytime; and (b) an $O(\log W)$-approximation in polytime. We obtain similar guarantees for CSKO with cancellations, wherein a job can be cancelled before its completion time, forgoing its reward. We also consider the special case of CSKO wherein job sizes are weighted Bernoulli distributions, and more generally where the distributions are supported on at most two points (2-CSKO). Although weighted Bernoulli distributions suffice to yield an $\Omega(\sqrt{\log\log B})$ adaptivity-gap lower bound for (uncorrelated) \emph{stochastic orienteering}, we show that they are easy instances for CSKO. We develop non-adaptive algorithms that achieve an $O(1)$-approximation in polytime for weighted Bernoulli distributions, and in $(n+\log B)^{O(\log W)}$ time for the more general case of 2-CSKO.
Approximation Algorithms for Correlated Knapsack Orienteering. David Aleman Espinosa, Chaitanya Swamy. arXiv:2408.16566, 2024-08-29.
Given a set of N propositions, if every pair is mutually exclusive, then the set of all propositions is N-way jointly mutually exclusive. This paper provides a new general counterexample to the converse. We prove that for any set of N propositional variables, there exist N propositions such that their N-way conjunction is zero, yet all k-way component conjunctions, for k < N, are non-zero. The consequence is that N-way joint mutual exclusion does not imply any pairwise mutual exclusion. A similar result holds for sets, since propositional calculus and set theory are models of two-element Boolean algebra.
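One simple family witnessing this separation (an illustrative construction, not necessarily the paper's) takes, over variables $x_0,\dots,x_{n-2}$, the propositions $P_i = \lnot x_i$ for $i < n-1$ together with $P_{n-1} = x_0 \lor \dots \lor x_{n-2}$: the full conjunction is unsatisfiable, yet every proper sub-conjunction is satisfiable. A brute-force check over all truth assignments:

```python
from itertools import product, combinations

def make_propositions(n):
    # P_i = NOT x_i for i < n-1, and P_{n-1} = x_0 OR ... OR x_{n-2}
    props = [(lambda a, i=i: not a[i]) for i in range(n - 1)]
    props.append(lambda a: any(a))
    return props

def satisfiable(props, n_vars):
    # exhaustive search over all 2^n_vars truth assignments
    return any(all(p(a) for p in props)
               for a in product([False, True], repeat=n_vars))

def check(n):
    props = make_propositions(n)
    # the N-way conjunction must be identically false ...
    if satisfiable(props, n - 1):
        return False
    # ... yet every proper sub-conjunction must be satisfiable
    for k in range(1, n):
        for sub in combinations(props, k):
            if not satisfiable(list(sub), n - 1):
                return False
    return True
```

Every proper subset either omits the disjunction (satisfied by the all-false assignment) or omits some $\lnot x_j$ (satisfied by setting only $x_j$ true).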
N-Way Joint Mutual Exclusion Does Not Imply Any Pairwise Mutual Exclusion for Propositions. Roy S. Freedman. arXiv:2409.03784, 2024-08-29.
We revisit the classical problem of channel allocation for Wi-Fi access points (APs). Using mechanisms such as the CSMA/CA protocol, Wi-Fi access points that are in conflict within the same channel are still able to communicate with terminals. In graph-theoretical terms, this means that it is not mandatory for the channel allocation to correspond to a proper coloring of the conflict graph. However, recent studies suggest that the structure -- rather than the number -- of conflicts plays a crucial role in the performance of each AP. More precisely, the graph induced by each channel must satisfy the so-called $1$-extendability property, which requires each vertex to be contained in an independent set of maximum cardinality. In this paper we introduce the 1-extendable chromatic number, which is the minimum size of a partition of the vertex set of a graph such that each part induces a 1-extendable graph. We study this parameter and the related optimization problem from different perspectives: algorithms and complexity, structure, and extremal properties. We first show how to compute this number using modular decompositions of graphs, and analyze the running time with respect to the modular width of the input graph. We also focus on the special case of cographs, and prove that the 1-extendable chromatic number can be computed in quasi-polynomial time in this class. Concerning extremal results, we show that the 1-extendable chromatic number of a graph with $n$ vertices is at most $2\sqrt{n}$, whereas the classical chromatic number can be as large as $n$. We are also able to construct graphs whose 1-extendable chromatic number is at least logarithmic in the number of vertices.
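The $1$-extendability property itself is easy to check by exhaustive search on small graphs: enumerate all maximum independent sets and verify that every vertex lies in at least one. A minimal sketch (illustrative names, exponential time):

```python
from itertools import combinations

def max_independent_sets(n, edges):
    # enumerate all maximum-cardinality independent sets of a graph
    # on vertices 0..n-1, by trying sizes from n downward
    for k in range(n, 0, -1):
        sets = []
        for cand in combinations(range(n), k):
            cs = set(cand)
            if all(not (u in cs and v in cs) for u, v in edges):
                sets.append(cs)
        if sets:
            return sets
    return [set()]

def is_1_extendable(n, edges):
    # every vertex must belong to some maximum independent set
    mis = max_independent_sets(n, edges)
    return all(any(v in s for s in mis) for v in range(n))
```

For example, the 4-cycle is 1-extendable (its maximum independent sets $\{0,2\}$ and $\{1,3\}$ cover all vertices), while the 3-vertex path is not (its unique maximum independent set misses the middle vertex).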
Channel allocation revisited through 1-extendability of graphs. Anthony Busson, Malory Marin, Rémi Watrigant. arXiv:2408.14633, 2024-08-26.
Matthias Bentert, Fedor V. Fomin, Fanny Hauser, Saket Saurabh
In Two-Sets Cut-Uncut, we are given an undirected graph $G=(V,E)$ and two terminal sets $S$ and $T$. The task is to find a minimum cut $C$ in $G$ (if one exists) separating $S$ from $T$ under the following ``uncut'' condition: in the graph $(V,E \setminus C)$, the terminals in each terminal set remain in the same connected component. In spite of the superficial similarity to the classic problem Minimum $s$-$t$-Cut, Two-Sets Cut-Uncut is computationally challenging. In particular, even deciding whether such a cut of any size exists is already NP-complete. We initiate a systematic study of Two-Sets Cut-Uncut within the context of parameterized complexity. By leveraging known relations between many well-studied graph parameters, we characterize the structural properties of input graphs that allow for polynomial kernels, fixed-parameter tractability (FPT), and slicewise polynomial algorithms (XP). Our main contribution is the near-complete establishment of the complexity of these algorithmic properties within the described hierarchy of graph parameters. On a technical level, our main results are fixed-parameter tractability for the (vertex-deletion) distance to cographs and an OR-cross-composition excluding polynomial kernels for the vertex cover number of the input graph (under the standard complexity assumption that NP is not contained in coNP/poly).
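To illustrate the problem definition, here is a brute-force sketch for tiny instances (illustrative names, exponential time): it tries edge subsets in increasing size, keeping a subset only if each terminal set stays in one component and the two sets end up in different components.

```python
from itertools import combinations

def components(n, edges):
    # label-propagation union: comp[v] is the component label of v
    comp = list(range(n))
    for u, v in edges:
        cu, cv = comp[u], comp[v]
        if cu != cv:
            comp = [cu if c == cv else c for c in comp]
    return comp

def min_cut_uncut(n, edges, S, T):
    # smallest cut C separating S from T such that each terminal set
    # remains inside a single connected component of (V, E \ C)
    for k in range(len(edges) + 1):
        for cut in combinations(edges, k):
            comp = components(n, [e for e in edges if e not in cut])
            cs = {comp[v] for v in S}
            ct = {comp[v] for v in T}
            if len(cs) == 1 and len(ct) == 1 and cs != ct:
                return k
    return None  # no such cut exists
```

On the 4-cycle 0-1-3-2-0 with $S=\{0,1\}$ and $T=\{2,3\}$, no single edge removal disconnects the cycle, so the minimum cut respecting the uncut condition has size 2.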
The Parameterized Complexity Landscape of Two-Sets Cut-Uncut. arXiv:2408.13543, 2024-08-24.
We study the fundamental scheduling problem $1\mid r_j\mid \sum w_j U_j$: schedule a set of $n$ jobs with weights, processing times, release dates, and due dates on a single machine, such that each job starts after its release date, and maximize the weighted number of jobs that complete execution before their due date. Problem $1\mid r_j\mid \sum w_j U_j$ generalizes both Knapsack and Partition, and the simplified setting without release dates was studied by Hermelin et al. [Annals of Operations Research, 2021] from a parameterized complexity viewpoint. Our main contribution is a thorough complexity analysis of $1\mid r_j\mid \sum w_j U_j$ in terms of four key problem parameters: the number $p_\#$ of processing times, the number $w_\#$ of weights, the number $d_\#$ of due dates, and the number $r_\#$ of release dates of the jobs. $1\mid r_j\mid \sum w_j U_j$ is known to be weakly para-NP-hard even if $w_\#+d_\#+r_\#$ is constant, and Heeger and Hermelin [ESA, 2024] recently showed (weak) W[1]-hardness parameterized by $p_\#$ or $w_\#$ even if $r_\#$ is constant. Algorithmically, we show that $1\mid r_j\mid \sum w_j U_j$ is fixed-parameter tractable parameterized by $p_\#$ combined with any two of the remaining three parameters $w_\#$, $d_\#$, and $r_\#$. We further provide pseudo-polynomial XP-time algorithms for the parameters $r_\#$ and $d_\#$. To complement these algorithms, we show that $1\mid r_j\mid \sum w_j U_j$ is (strongly) W[1]-hard when parameterized by $d_\#+r_\#$ even if $w_\#$ is constant. Our results provide a nearly complete picture of the complexity of $1\mid r_j\mid \sum w_j U_j$ for $p_\#$, $w_\#$, $d_\#$, and $r_\#$ as parameters, and extend those of Hermelin et al. [Annals of Operations Research, 2021] for the problem $1\mid\mid \sum w_j U_j$ without release dates.
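For intuition about the objective, a tiny brute-force solver for $1\mid r_j\mid \sum w_j U_j$ (illustrative names, exponential time): a candidate on-time set is feasible iff some ordering of it respects all release and due dates, since late jobs can always be appended at the end.

```python
from itertools import combinations, permutations

def max_on_time_weight(jobs):
    # jobs: list of (processing_time, weight, release_date, due_date) tuples
    n, best = len(jobs), 0
    for k in range(1, n + 1):
        for sub in combinations(range(n), k):
            # feasibility: some order of the chosen set meets every due date
            for order in permutations(sub):
                t, ok = 0, True
                for j in order:
                    p, w, r, d = jobs[j]
                    t = max(t, r) + p  # wait for release, then process
                    if t > d:
                        ok = False
                        break
                if ok:
                    best = max(best, sum(jobs[j][1] for j in sub))
                    break
    return best
```

For instance, two unit jobs with release dates 0 and 1 and due dates 1 and 2 can both finish on time, whereas two length-2 jobs both due at time 2 cannot.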
Single-Machine Scheduling to Minimize the Number of Tardy Jobs with Release Dates. Matthias Kaul, Matthias Mnich, Hendrik Molter. arXiv:2408.12967, 2024-08-23.
In this paper, we present the first constant-approximation algorithm for the \emph{budgeted sweep coverage} problem (BSC). BSC involves designing routes for a number of mobile sensors (a.k.a. robots) to periodically collect as much information as possible from points of interest (PoIs). To approach this problem, we propose to first examine the \emph{multi-orienteering problem} (MOP). The MOP aims to find a set of $m$ vertex-disjoint paths that cover as many vertices as possible while adhering to a budget constraint $B$. We develop a constant-approximation algorithm for MOP and utilize it to achieve a constant approximation for BSC. Our findings open new possibilities for optimizing mobile sensor deployments and related combinatorial optimization tasks.
A Constant-Approximation Algorithm for Budgeted Sweep Coverage with Mobile Sensors. Wei Liang, Shaojie Tang, Zhao Zhang. arXiv:2408.12468, 2024-08-22.
Mingyang Gong, Zhi-Zhong Chen, Guohui Lin, Lusheng Wang
This paper studies $MPC^{5+}_v$, the problem of covering as many vertices as possible in a given graph $G=(V,E)$ by vertex-disjoint $5^+$-paths (i.e., paths each with at least five vertices). $MPC^{5+}_v$ is NP-hard and admits an existing local-search-based approximation algorithm that achieves a ratio of $\frac{19}{7}\approx 2.714$ and runs in $O(|V|^6)$ time. In this paper, we present a new approximation algorithm for $MPC^{5+}_v$ that achieves a ratio of $2.511$ and runs in $O(|V|^{2.5}|E|^2)$ time. Unlike the previous algorithm, the new algorithm is based on maximum matching, maximum path-cycle cover, and recursion.
Approximately covering vertices by order-$5$ or longer paths. arXiv:2408.11225, 2024-08-20.
The tolerance of an element of a combinatorial optimization problem, with respect to a given optimal solution, is the maximum change, i.e., decrease or increase, of its cost such that this solution remains optimal. The bottleneck path problem, given an edge-capacitated graph, a source, and a target, is to find the $\max$-$\min$ value of edge capacities over paths between the source and the target. For this problem and a network with $n$ vertices and $m$ edges, the algorithm of Ramaswamy, Orlin, and Chakravarty is known to compute all tolerances in $O(m+n\log n)$ time. In this paper, for any instance of the problem with pairwise distinct edge capacities given in advance, we present a constant-time algorithm for computing both tolerances of an arbitrary edge after $O\big(m\,\alpha(m,n)\big)$ preprocessing time, where $\alpha(\cdot,\cdot)$ is the inverse Ackermann function. For given $k$ source-target pairs, our solution yields an $O\big((\alpha(m,n)+k)m\big)$-time algorithm to find the tolerances of all edges with respect to optimal paths between the sources and targets, while the known algorithm takes $O\big(k(m+n\log n)\big)$ time to find them.
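The underlying $\max$-$\min$ objective can be computed with a standard widest-path variant of Dijkstra's algorithm (this sketch illustrates only the bottleneck value, not the paper's tolerance data structure; tolerances could be brute-forced by perturbing one edge capacity at a time and recomputing):

```python
import heapq

def max_min_bottleneck(n, edges, s, t):
    # widest-path Dijkstra: maximize the minimum capacity along a path
    # edges: list of (u, v, capacity) for an undirected graph on 0..n-1
    adj = [[] for _ in range(n)]
    for u, v, c in edges:
        adj[u].append((v, c))
        adj[v].append((u, c))
    width = [float('-inf')] * n
    width[s] = float('inf')
    pq = [(-width[s], s)]  # max-heap via negated keys
    while pq:
        w, u = heapq.heappop(pq)
        w = -w
        if w < width[u]:
            continue  # stale entry
        for v, c in adj[u]:
            nw = min(w, c)  # bottleneck along the extended path
            if nw > width[v]:
                width[v] = nw
                heapq.heappush(pq, (-nw, v))
    return width[t]
```

On a triangle with capacities 5, 3, and 2, the two-edge path with bottleneck 3 beats the direct edge of capacity 2.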
Efficient Online Sensitivity Analysis For The Injective Bottleneck Path Problem. Kirill V. Kaymakov, Dmitry S. Malyshev. arXiv:2408.09443, 2024-08-18.
Amey Bhangale, Mark Braverman, Subhash Khot, Yang P. Liu, Dor Minzer
In a $3$-$\mathsf{XOR}$ game $\mathcal{G}$, the verifier samples a challenge $(x,y,z)\sim \mu$, where $\mu$ is a probability distribution over $\Sigma\times\Gamma\times\Phi$, and a map $t\colon \Sigma\times\Gamma\times\Phi\to\mathcal{A}$ for a finite Abelian group $\mathcal{A}$ defines a constraint. The verifier sends the questions $x$, $y$, and $z$ to the players Alice, Bob, and Charlie, respectively, receives answers $f(x)$, $g(y)$, and $h(z)$ that are elements of $\mathcal{A}$, and accepts if $f(x)+g(y)+h(z) = t(x,y,z)$. The value, $\mathsf{val}(\mathcal{G})$, of the game is defined to be the maximum probability that the verifier accepts over all players' strategies. We show that if $\mathcal{G}$ is a $3$-$\mathsf{XOR}$ game with value strictly less than $1$, whose underlying distribution over questions $\mu$ does not admit Abelian embeddings into $(\mathbb{Z},+)$, then the value of the $n$-fold repetition of $\mathcal{G}$ is exponentially decaying. That is, there exists $c=c(\mathcal{G})>0$ such that $\mathsf{val}(\mathcal{G}^{\otimes n})\leq 2^{-cn}$. This extends a previous result of [Braverman-Khot-Minzer, FOCS 2023] showing exponential decay for the GHZ game. Our proof combines tools from additive combinatorics and tools from discrete Fourier analysis.
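The definition of $\mathsf{val}(\mathcal{G})$ can be illustrated on the GHZ game mentioned above, a $3$-$\mathsf{XOR}$ game over $\mathcal{A}=\mathbb{Z}_2$: the challenge is uniform over even-parity triples and the verifier accepts iff $f(x)\oplus g(y)\oplus h(z) = x\lor y\lor z$. Since each player's question is one bit, a brute force over all $4^3=64$ deterministic strategy triples recovers the classical value $3/4$ (randomized strategies cannot do better, being mixtures of deterministic ones):

```python
from itertools import product

# even-parity challenges of the GHZ game
CHALLENGES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def game_value():
    # a deterministic one-bit strategy f is encoded as the pair (f(0), f(1))
    best = 0.0
    funcs = list(product([0, 1], repeat=2))
    for f, g, h in product(funcs, repeat=3):
        wins = sum((f[x] ^ g[y] ^ h[z]) == (x | y | z)
                   for x, y, z in CHALLENGES)
        best = max(best, wins / len(CHALLENGES))
    return best  # classical value of the GHZ game: 0.75
```

Winning all four challenges is impossible: XOR-ing the four constraints makes every answer bit appear twice on the left, yet the right-hand sides sum to 1.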
Parallel Repetition for $3$-Player XOR Games. arXiv:2408.09352, 2024-08-18.