Dataless Quadratic Neural Networks for the Maximum Independent Set Problem
Ismail Alkhouri, Cedric Le Denmat, Yingjie Li, Cunxi Yu, Jia Liu, Rongrong Wang, Alvaro Velasquez
arXiv - CS - Discrete Mathematics, 2024-06-27 (Journal Article)
DOI: https://doi.org/arxiv-2406.19532
Citations: 0
Abstract
Combinatorial Optimization (CO) plays a crucial role in addressing various
significant problems, among them the challenging Maximum Independent Set (MIS)
problem. In light of recent advancements in deep learning methods, efforts have
been directed towards leveraging data-driven learning approaches, typically
rooted in supervised learning and reinforcement learning, to tackle the NP-hard
MIS problem. However, these approaches rely on labeled datasets, exhibit weak
generalization, and often depend on problem-specific heuristics. Recently,
ReLU-based dataless neural networks were introduced to address combinatorial
optimization problems. This paper introduces a novel dataless quadratic neural
network formulation, featuring a continuous quadratic relaxation for the MIS
problem. Notably, our method eliminates the need for training data by treating
the given MIS instance as a trainable entity. More specifically, the graph
structure and constraints of the MIS instance are used to define the structure
and parameters of the neural network such that training it on a fixed input
provides a solution to the problem, thereby setting it apart from traditional
supervised or reinforcement learning approaches. By employing a gradient-based
optimization algorithm such as Adam and leveraging an efficient off-the-shelf GPU
parallel implementation, our straightforward yet effective approach
demonstrates competitive or superior performance compared to state-of-the-art
learning-based methods. Another significant advantage of our approach is that,
unlike exact and heuristic solvers, the running time of our method scales only
with the number of nodes in the graph, not the number of edges.
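The core idea described above can be sketched in a few lines. The snippet below is a hedged illustration, not the paper's implementation: it assumes the quadratic relaxation takes the common penalty form f(x) = -Σᵢ xᵢ + γ · Σ₍ᵢ,ⱼ₎∈E xᵢxⱼ over the box [0,1]ⁿ (with penalty weight γ > 1 so that rounding a local minimizer yields an independent set), and it uses plain projected gradient descent in place of Adam. The function name, hyperparameters, and rounding threshold are illustrative choices.

```python
import random

def mis_quadratic_relaxation(n, edges, gamma=2.0, lr=0.05, steps=2000, seed=0):
    """Sketch of a dataless quadratic relaxation for MIS (assumed form):
    minimize f(x) = -sum_i x_i + gamma * sum_{(i,j) in E} x_i * x_j
    over x in [0,1]^n. The graph itself fixes the objective's parameters;
    there is no training data. Projected gradient descent stands in for Adam.
    """
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Fixed random initialization near the center of the box.
    x = [rng.uniform(0.4, 0.6) for _ in range(n)]
    for _ in range(steps):
        # Partial derivative: df/dx_i = -1 + gamma * sum of neighbor values.
        grad = [-1.0 + gamma * sum(x[j] for j in adj[i]) for i in range(n)]
        # Gradient step, then project back onto the box [0, 1].
        x = [min(1.0, max(0.0, xi - lr * gi)) for xi, gi in zip(x, grad)]
    # Round the relaxed solution to a vertex set.
    return [i for i in range(n) if x[i] > 0.5]

# Path graph 0-1-2-3-4: a maximum independent set is {0, 2, 4}.
sel = mis_quadratic_relaxation(5, [(0, 1), (1, 2), (2, 3), (3, 4)])
```

Because each coordinate's update depends only on its current neighbor values, the whole gradient is computed in one vectorized pass in a GPU implementation, which is the kind of off-the-shelf parallelism the abstract alludes to.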