Solving hard bi-objective knapsack problems using deep reinforcement learning
Hadi Charkhgard, Hanieh Rastegar Moghaddam, Ali Eshragh, Sasan Mahmoudinazlou, Kimia Keshanian
Discrete Optimization, Volume 55, Article 100879, February 2025
DOI: 10.1016/j.disopt.2025.100879
https://www.sciencedirect.com/science/article/pii/S1572528625000027
Citations: 0
Abstract
We study a class of bi-objective integer programs known as bi-objective knapsack problems (BOKPs). Our research focuses on the development of innovative exact and approximate solution methods for BOKPs by synergizing algorithmic concepts from two distinct domains: multi-objective integer programming and (deep) reinforcement learning. While novel reinforcement learning techniques have been applied successfully to single-objective integer programming in recent years, a corresponding body of work is yet to be explored in the field of multi-objective integer programming. This study is an effort to bridge this existing gap in the literature. Through a computational study, we demonstrate that although it is feasible to develop exact reinforcement learning-based methods for solving BOKPs, they come with significant computational costs. Consequently, we recommend an alternative research direction: approximating the entire nondominated frontier using deep reinforcement learning-based methods. We introduce two such methods, which extend classical methods from the multi-objective integer programming literature, and illustrate their ability to rapidly produce high-quality approximations.
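To make the central object of the abstract concrete, the sketch below enumerates the nondominated frontier of a tiny bi-objective knapsack instance by brute force. The item data are hypothetical, and exhaustive enumeration is only a didactic baseline — not the reinforcement learning-based methods the paper proposes, which are designed precisely because exact approaches scale poorly.

```python
from itertools import combinations

# Hypothetical BOKP instance (not from the paper): each item has two
# profit values and one weight; both objectives are maximized subject
# to a single capacity constraint.
profits1 = [4, 3, 5, 2]
profits2 = [2, 5, 1, 4]
weights  = [3, 4, 2, 3]
capacity = 7

def nondominated_frontier(p1, p2, w, cap):
    """Enumerate all feasible item subsets and keep the objective
    vectors that no other feasible vector weakly dominates."""
    n = len(w)
    points = set()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(w[i] for i in subset) <= cap:
                points.add((sum(p1[i] for i in subset),
                            sum(p2[i] for i in subset)))
    # A point is nondominated if no distinct point is >= in both objectives.
    frontier = [p for p in points
                if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                           for q in points)]
    return sorted(frontier)

print(nondominated_frontier(profits1, profits2, weights, capacity))
# → [(5, 9), (7, 7), (8, 6), (9, 3)]
```

Enumeration costs O(2^n) feasible subsets, which illustrates why the authors turn to deep reinforcement learning to approximate the frontier on hard instances rather than compute it exactly.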
About the journal
Discrete Optimization publishes research papers on the mathematical, computational and applied aspects of all areas of integer programming and combinatorial optimization. In addition to reports on mathematical results pertinent to discrete optimization, the journal welcomes submissions on algorithmic developments, computational experiments, and novel applications (in particular, large-scale and real-time applications). The journal also publishes clearly labelled surveys, reviews, short notes, and open problems. Manuscripts submitted for possible publication to Discrete Optimization should report on original research, should not have been previously published, and should not be under consideration for publication by any other journal.