MARCO: A Memory-Augmented Reinforcement Framework for Combinatorial Optimization

Andoni I. Garmendia, Quentin Cappart, Josu Ceberio, Alexander Mendiburu
DOI: arxiv-2408.02207 · arXiv - CS - Neural and Evolutionary Computing · Published: 2024-08-05
Citations: 0

Abstract

Neural Combinatorial Optimization (NCO) is an emerging domain where deep learning techniques are employed to address combinatorial optimization problems as a standalone solver. Despite their potential, existing NCO methods often suffer from inefficient search space exploration, frequently leading to local optima entrapment or redundant exploration of previously visited states. This paper introduces a versatile framework, referred to as Memory-Augmented Reinforcement for Combinatorial Optimization (MARCO), that can be used to enhance both constructive and improvement methods in NCO through an innovative memory module. MARCO stores data collected throughout the optimization trajectory and retrieves contextually relevant information at each state. This way, the search is guided by two competing criteria: making the best decision in terms of the quality of the solution and avoiding revisiting already explored solutions. This approach promotes a more efficient use of the available optimization budget. Moreover, thanks to the parallel nature of NCO models, several search threads can run simultaneously, all sharing the same memory module, enabling an efficient collaborative exploration. Empirical evaluations, carried out on the maximum cut, maximum independent set and travelling salesman problems, reveal that the memory module effectively increases exploration, enabling the model to discover diverse, higher-quality solutions. MARCO achieves good performance at a low computational cost, establishing a promising new direction in the field of NCO.
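The core idea of the abstract, a shared memory that stores visited solutions and penalizes revisiting them, can be illustrated with a minimal sketch. The class and function names below (`SolutionMemory`, `penalized_score`, the weighting `beta`, and Hamming similarity as the retrieval metric) are illustrative assumptions, not the paper's actual architecture, which couples the memory to a learned policy.

```python
class SolutionMemory:
    """Toy shared memory of visited solutions (bit vectors).

    Several search threads could append to the same instance, so that
    each thread is steered away from regions any thread has explored.
    """

    def __init__(self):
        self.visited = []  # tuples of 0/1 entries

    def store(self, solution):
        """Record a visited solution."""
        self.visited.append(tuple(solution))

    def max_similarity(self, solution):
        """Fraction of bits matching the closest stored solution.

        Returns 1.0 for an exact revisit, 0.0 when memory is empty.
        Hamming similarity is a stand-in for the paper's retrieval step.
        """
        if not self.visited:
            return 0.0
        sol = tuple(solution)
        return max(
            sum(a == b for a, b in zip(sol, v)) / len(sol)
            for v in self.visited
        )


def penalized_score(objective_value, solution, memory, beta=0.5):
    """Combine the two competing criteria from the abstract:
    solution quality minus a novelty penalty. `beta` is a made-up
    trade-off weight, not a value from the paper."""
    return objective_value - beta * memory.max_similarity(solution)
```

With this scoring rule, a candidate identical to a stored solution is ranked below an equally good but novel one, for example:

```python
mem = SolutionMemory()
mem.store([1, 0, 1, 1])
seen = penalized_score(10.0, [1, 0, 1, 1], mem)   # exact revisit
new = penalized_score(10.0, [0, 1, 0, 0], mem)    # unexplored solution
assert new > seen
```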