Generating a Graph Colouring Heuristic with Deep Q-Learning and Graph Neural Networks

George Watkins, G. Montana, Juergen Branke
{"title":"Generating a Graph Colouring Heuristic with Deep Q-Learning and Graph Neural Networks","authors":"George Watkins, G. Montana, Juergen Branke","doi":"10.48550/arXiv.2304.04051","DOIUrl":null,"url":null,"abstract":"The graph colouring problem consists of assigning labels, or colours, to the vertices of a graph such that no two adjacent vertices share the same colour. In this work we investigate whether deep reinforcement learning can be used to discover a competitive construction heuristic for graph colouring. Our proposed approach, ReLCol, uses deep Q-learning together with a graph neural network for feature extraction, and employs a novel way of parameterising the graph that results in improved performance. Using standard benchmark graphs with varied topologies, we empirically evaluate the benefits and limitations of the heuristic learned by ReLCol relative to existing construction algorithms, and demonstrate that reinforcement learning is a promising direction for further research on the graph colouring problem.","PeriodicalId":430111,"journal":{"name":"Learning and Intelligent Optimization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Learning and Intelligent Optimization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2304.04051","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The graph colouring problem consists of assigning labels, or colours, to the vertices of a graph such that no two adjacent vertices share the same colour. In this work we investigate whether deep reinforcement learning can be used to discover a competitive construction heuristic for graph colouring. Our proposed approach, ReLCol, uses deep Q-learning together with a graph neural network for feature extraction, and employs a novel way of parameterising the graph that results in improved performance. Using standard benchmark graphs with varied topologies, we empirically evaluate the benefits and limitations of the heuristic learned by ReLCol relative to existing construction algorithms, and demonstrate that reinforcement learning is a promising direction for further research on the graph colouring problem.
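The paper itself does not include code here; purely as a rough illustration of what a construction heuristic for graph colouring looks like, the sketch below implements a simple greedy scheme. It is not the learned ReLCol policy: ReLCol selects vertices with deep Q-learning and a graph neural network, whereas this sketch visits vertices in a fixed, arbitrary order and gives each the smallest colour not used by its already-coloured neighbours.

```python
# Minimal sketch of a greedy construction heuristic for graph colouring.
# NOT the ReLCol policy from the paper: ReLCol learns which vertex to colour
# next via deep Q-learning with a GNN; here the visiting order is fixed.

def greedy_colouring(adjacency: dict[int, set[int]]) -> dict[int, int]:
    """Assign each vertex the smallest colour unused by its coloured neighbours."""
    colours: dict[int, int] = {}
    for v in adjacency:                      # fixed visiting order (arbitrary choice)
        used = {colours[u] for u in adjacency[v] if u in colours}
        c = 0
        while c in used:                     # smallest non-conflicting colour
            c += 1
        colours[v] = c
    return colours

if __name__ == "__main__":
    # A 5-cycle needs 3 colours; the greedy scheme finds a valid colouring.
    cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
    colouring = greedy_colouring(cycle5)
    assert all(colouring[u] != colouring[v] for u in cycle5 for v in cycle5[u])
    print(colouring, "colours used:", len(set(colouring.values())))
```

The quality of such a heuristic depends heavily on the vertex ordering; the point of the paper is to learn that ordering rather than fix it in advance.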