Given a (multi)graph $G$ which contains a bipartite subgraph with $\rho$ edges, what is the largest triangle-free subgraph of $G$ that can be found efficiently? We present an SDP-based algorithm that finds one with at least $0.8823\rho$ edges, thus improving on the subgraph with $0.878\rho$ edges obtained by the classic Max-Cut algorithm of Goemans and Williamson. On the other hand, by a reduction from Håstad's 3-bit PCP we show that it is NP-hard to find a triangle-free subgraph with $(25/26 + \epsilon)\rho \approx (0.961 + \epsilon)\rho$ edges. As an application, we classify the Maximum Promise Constraint Satisfaction Problem MaxPCSP($G$, $H$) for all bipartite $G$: given an input (multi)graph $X$ which admits a $G$-colouring satisfying $\rho$ edges, find an $H$-colouring of $X$ that satisfies $\rho$ edges. This problem is solvable in polynomial time, apart from trivial cases, if $H$ contains a triangle, and is NP-hard otherwise.
{"title":"Maximum Bipartite vs. Triangle-Free Subgraph","authors":"Tamio-Vesa Nakajima, Stanislav Živný","doi":"arxiv-2406.20069","DOIUrl":"https://doi.org/arxiv-2406.20069","url":null,"abstract":"Given a (multi)graph $G$ which contains a bipartite subgraph with $rho$\u0000edges, what is the largest triangle-free subgraph of $G$ that can be found\u0000efficiently? We present an SDP-based algorithm that finds one with at least\u0000$0.8823 rho$ edges, thus improving on the subgraph with $0.878 rho$ edges\u0000obtained by the classic Max-Cut algorithm of Goemans and Williamson. On the\u0000other hand, by a reduction from Hastad's 3-bit PCP we show that it is NP-hard\u0000to find a triangle-free subgraph with $(25 / 26 + epsilon) rho approx (0.961\u0000+ epsilon) rho$ edges. As an application, we classify the Maximum Promise Constraint Satisfaction\u0000Problem MaxPCSP($G$,$H$) for all bipartite $G$: Given an input (multi)graph $X$\u0000which admits a $G$-colouring satisfying $rho$ edges, find an $H$-colouring of\u0000$X$ that satisfies $rho$ edges. This problem is solvable in polynomial time,\u0000apart from trivial cases, if $H$ contains a triangle, and is NP-hard otherwise.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"141 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141525983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper defines and explores solutions to the problem of \emph{inversion of a finite sequence} over the binary field: that of finding a prefix element of the sequence which conforms to a \emph{Recurrence Relation} (RR) rule defined by a polynomial and satisfied by the sequence. The minimum number of variables (order) in a polynomial of a fixed degree defining RRs is termed the \emph{Polynomial Complexity} of the sequence at that degree, while the minimum number of variables of such polynomials at a fixed degree which also yield a unique prefix of the sequence and maximum rank of the matrix of evaluations of their monomials is called the \emph{Polynomial Complexity of Inversion} at the chosen degree. Solutions of this problem yield solutions to the problem of \emph{local inversion} of a map $F : \mathbb{F}_2^n \rightarrow \mathbb{F}_2^n$ at a point $y \in \mathbb{F}_2^n$, that of solving for $x \in \mathbb{F}_2^n$ in the equation $y = F(x)$. Local inversion of maps has important applications which give value to this theory. In previous work it was shown that the minimal-order \emph{Linear Recurrence Relation} (LRR) satisfied by the sequence, whose order is known as the \emph{Linear Complexity} (LC) of the sequence, gives a unique solution to the inversion when the sequence is part of a periodic sequence. This paper extends that theory to solving the inversion problem by considering \emph{non-linear recurrence relations} defined by polynomials of a fixed degree $>1$ and satisfied by the sequence. The minimal order of polynomials satisfied by a sequence is well known as the non-linear complexity (defining a feedback shift register of smallest order which determines the sequence by RRs) and is called the \emph{Maximal Order Complexity} (MOC) of the sequence. However, unlike the LC, there is no unique polynomial recurrence relation at any degree.
{"title":"Polynomial Complexity of Inversion of sequences and Local Inversion of Maps","authors":"Virendra Sule","doi":"arxiv-2406.19610","DOIUrl":"https://doi.org/arxiv-2406.19610","url":null,"abstract":"This Paper defines and explores solution to the problem of emph{Inversion of\u0000a finite Sequence} over the binary field, that of finding a prefix element of\u0000the sequence which confirms with a emph{Recurrence Relation} (RR) rule defined\u0000by a polynomial and satisfied by the sequence. The minimum number of variables\u0000(order) in a polynomial of a fixed degree defining RRs is termed as the\u0000emph{Polynomial Complexity} of the sequence at that degree, while the minimum\u0000number of variables of such polynomials at a fixed degree which also result in\u0000a unique prefix to the sequence and maximum rank of the matrix of evaluation of\u0000its monomials, is called emph{Polynomial Complexity of Inversion} at the\u0000chosen degree. Solutions of this problems discovers solutions to the problem of\u0000emph{Local Inversion} of a map $F:ftwo^nrightarrowftwo^n$ at a point $y$ in\u0000$ftwo^n$, that of solving for $x$ in $ftwo^n$ from the equation $y=F(x)$.\u0000Local inversion of maps has important applications which provide value to this\u0000theory. In previous work it was shown that minimal order emph{Linear\u0000Recurrence Relations} (LRR) satisfied by the sequence known as the emph{Linear\u0000Complexity} (LC) of the sequence, gives a unique solution to the inversion when\u0000the sequence is a part of a periodic sequence. This paper explores extension of\u0000this theory for solving the inversion problem by considering emph{Non-linear\u0000Recurrence Relations} defined by a polynomials of a fixed degree $>1$ and\u0000satisfied by the sequence. The minimal order of polynomials satisfied by a\u0000sequence is well known as non-linear complexity (defining a Feedback Shift\u0000Register of smallest order which determines the sequences by RRs) and called as\u0000emph{Maximal Order Complexity} (MOC) of the sequence. However unlike the LC\u0000there is no unique polynomial recurrence relation at any degree.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141525986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We connect the mixing behaviour of random walks over a graph to the power of the local-consistency algorithm for the solution of the corresponding constraint satisfaction problem (CSP). We extend this connection to arbitrary CSPs and their promise variants. In this way, we establish a linear-level (and, thus, optimal) lower bound against the local-consistency algorithm applied to the class of aperiodic promise CSPs. The proof is based on a combination of the probabilistic method for random Erdős-Rényi hypergraphs and a structural result on the number of fibers (i.e., long chains of hyperedges) in sparse hypergraphs of large girth. As a corollary, we completely classify the power of local consistency for the approximate graph homomorphism problem by establishing that, in the nontrivial cases, the problem has linear width.
{"title":"The periodic structure of local consistency","authors":"Lorenzo Ciardo, Stanislav Živný","doi":"arxiv-2406.19685","DOIUrl":"https://doi.org/arxiv-2406.19685","url":null,"abstract":"We connect the mixing behaviour of random walks over a graph to the power of\u0000the local-consistency algorithm for the solution of the corresponding\u0000constraint satisfaction problem (CSP). We extend this connection to arbitrary\u0000CSPs and their promise variant. In this way, we establish a linear-level (and,\u0000thus, optimal) lower bound against the local-consistency algorithm applied to\u0000the class of aperiodic promise CSPs. The proof is based on a combination of the\u0000probabilistic method for random ErdH{o}s-R'enyi hypergraphs and a structural\u0000result on the number of fibers (i.e., long chains of hyperedges) in sparse\u0000hypergraphs of large girth. As a corollary, we completely classify the power of\u0000local consistency for the approximate graph homomorphism problem by\u0000establishing that, in the nontrivial cases, the problem has linear width.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141525984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present several advancements in search-type problems for fleets of mobile agents operating in two dimensions under the wireless model. Potential hidden target locations are equidistant from a central point, forming either a disk (infinitely many possible locations) or a regular polygon (finitely many possible locations). Building on the foundational disk evacuation problem, the disk priority evacuation problem with $k$ Servants, and the disk $w$-weighted search problem, we make improvements on several fronts. First, we establish new upper and lower bounds for the $n$-gon priority evacuation problem with $1$ Servant for $n \leq 13$, and for $n_k$-gons with $k = 2, 3, 4$ Servants, where $n_2 \leq 11$, $n_3 \leq 9$, and $n_4 \leq 10$, offering tight or nearly tight bounds. The only previously known results were a tight upper bound for $k=1$ and $n=6$ and lower bounds for $k=1$ and $n \leq 9$. Second, our work improves the best known lower bound for the disk priority evacuation problem with $k=1$ Servant from $4.46798$ to $4.64666$, and for $k=2$ Servants from $3.6307$ to $3.65332$. Third, we improve the best known lower bounds for the disk $w$-weighted group search problem, significantly reducing the gap between the best upper and lower bounds for the $w$ values where the gap was largest. These improvements are based on nearly tight upper and lower bounds for the $11$-gon and $12$-gon $w$-weighted evacuation problems, whereas previous analyses were limited to lower bounds and to $7$-gons.
{"title":"Multi-Agent Search-Type Problems on Polygons","authors":"Konstantinos Georgiou, Caleb Jones, Jesse Lucier","doi":"arxiv-2406.19495","DOIUrl":"https://doi.org/arxiv-2406.19495","url":null,"abstract":"We present several advancements in search-type problems for fleets of mobile\u0000agents operating in two dimensions under the wireless model. Potential hidden\u0000target locations are equidistant from a central point, forming either a disk\u0000(infinite possible locations) or regular polygons (finite possible locations).\u0000Building on the foundational disk evacuation problem, the disk priority\u0000evacuation problem with $k$ Servants, and the disk $w$-weighted search problem,\u0000we make improvements on several fronts. First we establish new upper and lower\u0000bounds for the $n$-gon priority evacuation problem with $1$ Servant for $n leq\u000013$, and for $n_k$-gons with $k=2, 3, 4$ Servants, where $n_2 leq 11$, $n_3\u0000leq 9$, and $n_4 leq 10$, offering tight or nearly tight bounds. The only\u0000previous results known were a tight upper bound for $k=1$ and $n=6$ and lower\u0000bounds for $k=1$ and $n leq 9$. Second, our work improves the best lower bound\u0000known for the disk priority evacuation problem with $k=1$ Servant from\u0000$4.46798$ to $4.64666$ and for $k=2$ Servants from $3.6307$ to $3.65332$.\u0000Third, we improve the best lower bounds known for the disk $w$-weighted group\u0000search problem, significantly reducing the gap between the best upper and lower\u0000bounds for $w$ values where the gap was largest. These improvements are based\u0000on nearly tight upper and lower bounds for the $11$-gon and $12$-gon\u0000$w$-weighted evacuation problems, while previous analyses were limited only to\u0000lower bounds and only to $7$-gons.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"210 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141525981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Staff scheduling is a well-known problem in operations research that finds applications at hospitals, airports, supermarkets, and many other organisations. Its goal is to assign shifts to staff members so that a certain objective function, e.g. revenue, is maximized, while various constraints of the staff members and of the organization are satisfied. Typically, staff scheduling problems have hard constraints on the minimum number of employees that must be available at specific points in time; often several such constraints, guaranteeing the availability of a specific number of employees in different roles, need to be considered. Staff scheduling for demand-responsive services, such as ride-pooling and ride-hailing services, differs in a key way: there are often no hard constraints on the minimum number of employees needed at fixed points in time. Rather, the number of employees working at different points in time should vary with the demand at those points in time. Having too few employees at a point in time results in lost revenue, while having too many employees at a point in time means not having enough employees at other points in time, since the total personnel-hours are limited. The objective is to maximize the total reward generated over a planning horizon, given a monotonic relationship between the number of shifts active at a point in time and the instantaneous reward generated at that point in time. This key difference makes it difficult to use existing staff scheduling algorithms for planning shifts in demand-responsive services. In this article, we present a novel approach for modelling and solving staff scheduling problems for demand-responsive services that optimizes for the relevant reward function.
{"title":"Staff Scheduling for Demand-Responsive Services","authors":"Debsankha Manik, Rico Raber","doi":"arxiv-2406.19053","DOIUrl":"https://doi.org/arxiv-2406.19053","url":null,"abstract":"Staff scheduling is a well-known problem in operations research and finds its\u0000application at hospitals, airports, supermarkets, and many others. Its goal is\u0000to assign shifts to staff members such that a certain objective function, e.g.\u0000revenue, is maximized. Meanwhile, various constraints of the staff members and\u0000the organization need to be satisfied. Typically in staff scheduling problems,\u0000there are hard constraints on the minimum number of employees that should be\u0000available at specific points of time. Often multiple hard constraints\u0000guaranteeing the availability of specific number of employees with different\u0000roles need to be considered. Staff scheduling for demand-responsive services,\u0000such as, e.g., ride-pooling and ride-hailing services, differs in a key way\u0000from this: There are often no hard constraints on the minimum number of\u0000employees needed at fixed points in time. Rather, the number of employees\u0000working at different points in time should vary according to the demand at\u0000those points in time. Having too few employees at a point in time results in\u0000lost revenue, while having too many employees at a point in time results in not\u0000having enough employees at other points in time, since the total\u0000personnel-hours are limited. The objective is to maximize the total reward\u0000generated over a planning horizon, given a monotonic relationship between the\u0000number of shifts active at a point in time and the instantaneous reward\u0000generated at that point in time. This key difference makes it difficult to use\u0000existing staff scheduling algorithms for planning shifts in demand-responsive\u0000services. In this article, we present a novel approach for modelling and\u0000solving staff scheduling problems for demand-responsive services that optimizes\u0000for the relevant reward function.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider \emph{weighted group search on a disk}, a search-type problem involving two mobile agents with unit speed. The two agents start collocated, and their goal is to reach a (hidden) target at an unknown location at a known distance of exactly $1$ (i.e., the search domain is the unit disk). The agents operate in the so-called \emph{wireless} model, which gives them instantaneous knowledge of each other's findings. The termination cost of the agents' trajectories is the worst-case \emph{arithmetic weighted average}, quantified by a parameter $w$, of the times it takes each agent to reach the target, hence the name of the problem. Our work follows a long line of research in search and evacuation, but, importantly, it is a variation and an extension of two well-studied problems, respectively. The known variant is the one in which the search domain is the line, for which an optimal solution is known. Our problem is also an extension of the so-called \emph{priority evacuation} problem, which we obtain by setting the weight parameter $w$ to $0$; for the latter problem the best known upper/lower bound gap is significant. Our contributions for weighted group search on a disk are threefold. \textit{First}, we derive upper bounds for the entire spectrum of weighted averages $w$. Our algorithms are obtained as adaptations of known techniques; however, the analysis is much more technical. \textit{Second}, our main contribution is the derivation of lower bounds for all weighted averages. This follows from a \emph{novel framework} for proving lower bounds for combinatorial search problems, based on linear programming and inspired by metric embedding relaxations. \textit{Third}, we apply our framework to the priority evacuation problem, improving the previously best known lower bound from $4.38962$ to $4.56798$, thus reducing the upper/lower bound gap from $0.42892$ to $0.25056$.
{"title":"Weighted Group Search on the Disk & Improved Lower Bounds for Priority Evacuation","authors":"Konstantinos Georgiou, Xin Wang","doi":"arxiv-2406.19490","DOIUrl":"https://doi.org/arxiv-2406.19490","url":null,"abstract":"We consider emph{weighted group search on a disk}, which is a search-type\u0000problem involving 2 mobile agents with unit-speed. The two agents start\u0000collocated and their goal is to reach a (hidden) target at an unknown location\u0000and a known distance of exactly 1 (i.e., the search domain is the unit disk).\u0000The agents operate in the so-called emph{wireless} model that allows them\u0000instantaneous knowledge of each others findings. The termination cost of\u0000agents' trajectories is the worst-case emph{arithmetic weighted average},\u0000which we quantify by parameter $w$, of the times it takes each agent to reach\u0000the target, hence the name of the problem. Our work follows a long line of\u0000research in search and evacuation, but quite importantly it is a variation and\u0000extension of two well-studied problems, respectively. The known variant is the\u0000one in which the search domain is the line, and for which an optimal solution\u0000is known. Our problem is also the extension of the so-called emph{priority\u0000evacuation}, which we obtain by setting the weight parameter $w$ to $0$. For\u0000the latter problem the best upper/lower bound gap known is significant. Our\u0000contributions for weighted group search on a disk are threefold.\u0000textit{First}, we derive upper bounds for the entire spectrum of weighted\u0000averages $w$. Our algorithms are obtained as a adaptations of known techniques,\u0000however the analysis is much more technical. textit{Second}, our main\u0000contribution is the derivation of lower bounds for all weighted averages. This\u0000follows from a emph{novel framework} for proving lower bounds for\u0000combinatorial search problems based on linear programming and inspired by\u0000metric embedding relaxations. textit{Third}, we apply our framework to the\u0000priority evacuation problem, improving the previously best lower bound known\u0000from $4.38962$ to $4.56798$, thus reducing the upper/lower bound gap from\u0000$0.42892$ to $0.25056$.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"263 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141525982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crossing Number is a celebrated problem in graph drawing. It has been known to be NP-complete since the 1980s, and fairly involved techniques were already required to show its fixed-parameter tractability when parameterized by the vertex cover number. In this paper we prove that computing the crossing number exactly is NP-hard even for graphs of path-width $12$ (and, as a result, even of tree-width $9$). Thus, while tree-width and path-width have been very successful tools in many graph algorithm scenarios, our result shows that general crossing number computations cannot (unless P $=$ NP) be successfully tackled using graph decompositions of bounded width, which had been a 'tantalizing open problem' [S. Cabello, Hardness of Approximation for Crossing Number, 2013] until now.
{"title":"Crossing Number is NP-hard for Constant Path-width (and Tree-width)","authors":"Petr Hliněný, Liana Khazaliya","doi":"arxiv-2406.18933","DOIUrl":"https://doi.org/arxiv-2406.18933","url":null,"abstract":"Crossing Number is a celebrated problem in graph drawing. It is known to be\u0000NP-complete since 1980s, and fairly involved techniques were already required\u0000to show its fixed-parameter tractability when parameterized by the vertex cover\u0000number. In this paper we prove that computing exactly the crossing number is\u0000NP-hard even for graphs of path-width 12 (and as a result, even of tree-width\u00009). Thus, while tree-width and path-width have been very successful tools in\u0000many graph algorithm scenarios, our result shows that general crossing number\u0000computations unlikely (under P!=NP) could be successfully tackled using bounded\u0000width of graph decompositions, which has been a 'tantalizing open problem' [S.\u0000Cabello, Hardness of Approximation for Crossing Number, 2013] till now.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"2013 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141526048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combinatorial Optimization (CO) plays a crucial role in addressing various significant problems, among them the challenging Maximum Independent Set (MIS) problem. In light of recent advancements in deep learning methods, efforts have been directed towards leveraging data-driven learning approaches, typically rooted in supervised learning and reinforcement learning, to tackle the NP-hard MIS problem. However, these approaches rely on labeled datasets, exhibit weak generalization, and often depend on problem-specific heuristics. Recently, ReLU-based dataless neural networks were introduced to address combinatorial optimization problems. This paper introduces a novel dataless quadratic neural network formulation, featuring a continuous quadratic relaxation for the MIS problem. Notably, our method eliminates the need for training data by treating the given MIS instance as a trainable entity. More specifically, the graph structure and constraints of the MIS instance are used to define the structure and parameters of the neural network such that training it on a fixed input provides a solution to the problem, thereby setting it apart from traditional supervised or reinforcement learning approaches. By employing a gradient-based optimization algorithm like ADAM and leveraging an efficient off-the-shelf GPU parallel implementation, our straightforward yet effective approach demonstrates competitive or superior performance compared to state-of-the-art learning-based methods. Another significant advantage of our approach is that, unlike exact and heuristic solvers, the running time of our method scales only with the number of nodes in the graph, not the number of edges.
{"title":"Dataless Quadratic Neural Networks for the Maximum Independent Set Problem","authors":"Ismail Alkhouri, Cedric Le Denmat, Yingjie Li, Cunxi Yu, Jia Liu, Rongrong Wang, Alvaro Velasquez","doi":"arxiv-2406.19532","DOIUrl":"https://doi.org/arxiv-2406.19532","url":null,"abstract":"Combinatorial Optimization (CO) plays a crucial role in addressing various\u0000significant problems, among them the challenging Maximum Independent Set (MIS)\u0000problem. In light of recent advancements in deep learning methods, efforts have\u0000been directed towards leveraging data-driven learning approaches, typically\u0000rooted in supervised learning and reinforcement learning, to tackle the NP-hard\u0000MIS problem. However, these approaches rely on labeled datasets, exhibit weak\u0000generalization, and often depend on problem-specific heuristics. Recently,\u0000ReLU-based dataless neural networks were introduced to address combinatorial\u0000optimization problems. This paper introduces a novel dataless quadratic neural\u0000network formulation, featuring a continuous quadratic relaxation for the MIS\u0000problem. Notably, our method eliminates the need for training data by treating\u0000the given MIS instance as a trainable entity. More specifically, the graph\u0000structure and constraints of the MIS instance are used to define the structure\u0000and parameters of the neural network such that training it on a fixed input\u0000provides a solution to the problem, thereby setting it apart from traditional\u0000supervised or reinforcement learning approaches. By employing a gradient-based\u0000optimization algorithm like ADAM and leveraging an efficient off-the-shelf GPU\u0000parallel implementation, our straightforward yet effective approach\u0000demonstrates competitive or superior performance compared to state-of-the-art\u0000learning-based methods. Another significant advantage of our approach is that,\u0000unlike exact and heuristic solvers, the running time of our method scales only\u0000with the number of nodes in the graph, not the number of edges.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a comprehensive framework that unifies several research areas within the context of vertex-weighted bipartite graphs, providing deeper insights and improved solutions. The fundamental solution concept for each problem involves refinement, where vertex weights on one side are distributed among incident edges. The primary objective is to identify a refinement pair with specific optimality conditions that can be verified locally. This framework connects existing and new problems that are traditionally studied in different contexts. We explore three main problems: (1) density-friendly hypergraph decomposition, (2) universally closest distribution refinements problem, and (3) symmetric Fisher Market equilibrium. Our framework presents a symmetric view of density-friendly hypergraph decomposition, wherein hyperedges and nodes play symmetric roles. This symmetric decomposition serves as a tool for deriving precise characterizations of optimal solutions for other problems and enables the application of algorithms from one problem to another.
{"title":"Symmetric Splendor: Unraveling Universally Closest Refinements and Fisher Market Equilibrium through Density-Friendly Decomposition","authors":"T-H. Hubert Chan, Quan Xue","doi":"arxiv-2406.17964","DOIUrl":"https://doi.org/arxiv-2406.17964","url":null,"abstract":"We present a comprehensive framework that unifies several research areas\u0000within the context of vertex-weighted bipartite graphs, providing deeper\u0000insights and improved solutions. The fundamental solution concept for each\u0000problem involves refinement, where vertex weights on one side are distributed\u0000among incident edges. The primary objective is to identify a refinement pair\u0000with specific optimality conditions that can be verified locally. This\u0000framework connects existing and new problems that are traditionally studied in\u0000different contexts. We explore three main problems: (1) density-friendly hypergraph\u0000decomposition, (2) universally closest distribution refinements problem, and\u0000(3) symmetric Fisher Market equilibrium. Our framework presents a symmetric view of density-friendly hypergraph\u0000decomposition, wherein hyperedges and nodes play symmetric roles. This\u0000symmetric decomposition serves as a tool for deriving precise characterizations\u0000of optimal solutions for other problems and enables the application of\u0000algorithms from one problem to another.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"134 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141507060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the complexity of the recognition problem for two families of combinatorial structures. A graph $G=(V,E)$ is said to be an intersection graph of lines in space if every $v \in V$ can be mapped to a straight line $\ell(v)$ in $\mathbb{R}^3$ so that $vw$ is an edge in $E$ if and only if $\ell(v)$ and $\ell(w)$ intersect. A partially ordered set $(X,\prec)$ is said to be a circle order, or a 2-space-time order, if every $x \in X$ can be mapped to a closed circular disk $C(x)$ so that $y \prec x$ if and only if $C(y)$ is contained in $C(x)$. We prove that the recognition problems for intersection graphs of lines and for circle orders are both $\exists\mathbb{R}$-complete, hence polynomial-time equivalent to deciding whether a system of polynomial equalities and inequalities has a solution over the reals. The second result addresses an open problem posed by Brightwell and Luczak.
{"title":"The Complexity of Intersection Graphs of Lines in Space and Circle Orders","authors":"Jean Cardinal","doi":"arxiv-2406.17504","DOIUrl":"https://doi.org/arxiv-2406.17504","url":null,"abstract":"We consider the complexity of the recognition problem for two families of\u0000combinatorial structures. A graph $G=(V,E)$ is said to be an intersection graph\u0000of lines in space if every $vin V$ can be mapped to a straight line $ell (v)$\u0000in $mathbb{R}^3$ so that $vw$ is an edge in $E$ if and only if $ell(v)$ and\u0000$ell(w)$ intersect. A partially ordered set $(X,prec)$ is said to be a circle\u0000order, or a 2-space-time order, if every $xin X$ can be mapped to a closed\u0000circular disk $C(x)$ so that $yprec x$ if and only if $C(y)$ is contained in\u0000$C(x)$. We prove that the recognition problems for intersection graphs of lines\u0000and circle orders are both $existsmathbb{R}$-complete, hence polynomial-time\u0000equivalent to deciding whether a system of polynomial equalities and\u0000inequalities has a solution over the reals. The second result addresses an open\u0000problem posed by Brightwell and Luczak.","PeriodicalId":501216,"journal":{"name":"arXiv - CS - Discrete Mathematics","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141526044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}