Deep Controlled Learning for Inventory Control

European Journal of Operational Research · Impact Factor 6.0 · CAS Region 2 (Management) · JCR Q1 (Operations Research & Management Science) · Published: 2025-07-01 (online 2025-01-31) · DOI: 10.1016/j.ejor.2025.01.026
Tarkan Temizöz, Christina Imdahl, Remco Dijkman, Douniel Lamghari-Idrissi, Willem van Jaarsveld
{"title":"Deep Controlled Learning for Inventory Control","authors":"Tarkan Temizöz ,&nbsp;Christina Imdahl ,&nbsp;Remco Dijkman ,&nbsp;Douniel Lamghari-Idrissi ,&nbsp;Willem van Jaarsveld","doi":"10.1016/j.ejor.2025.01.026","DOIUrl":null,"url":null,"abstract":"<div><div>The application of Deep Reinforcement Learning (DRL) to inventory management is an emerging field. However, traditional DRL algorithms, originally developed for diverse domains such as game-playing and robotics, may not be well-suited for the specific challenges posed by inventory management. Consequently, these algorithms often fail to outperform established heuristics; for instance, no existing DRL approach consistently surpasses the capped base-stock policy in lost sales inventory control. This highlights a critical gap in the practical application of DRL to inventory management: the highly stochastic nature of inventory problems requires tailored solutions. In response, we propose Deep Controlled Learning (DCL), a new DRL algorithm designed for highly stochastic problems. DCL is based on approximate policy iteration and incorporates an efficient simulation mechanism, combining Sequential Halving with Common Random Numbers. Our numerical studies demonstrate that DCL consistently outperforms state-of-the-art heuristics and DRL algorithms across various inventory settings, including lost sales, perishable inventory systems, and inventory systems with random lead times. DCL achieves lower average costs in all test cases while maintaining an optimality gap of no more than 0.2%. Remarkably, this performance is achieved using the same hyperparameter set across all experiments, underscoring the robustness and generalizability of our approach. These findings contribute to the ongoing exploration of tailored DRL algorithms for inventory management, providing a foundation for further research and practical application in this area.</div></div>","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"324 1","pages":"Pages 104-117"},"PeriodicalIF":6.0000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Operational Research","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0377221725000463","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/31 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"OPERATIONS RESEARCH & MANAGEMENT SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

The application of Deep Reinforcement Learning (DRL) to inventory management is an emerging field. However, traditional DRL algorithms, originally developed for diverse domains such as game-playing and robotics, may not be well-suited for the specific challenges posed by inventory management. Consequently, these algorithms often fail to outperform established heuristics; for instance, no existing DRL approach consistently surpasses the capped base-stock policy in lost sales inventory control. This highlights a critical gap in the practical application of DRL to inventory management: the highly stochastic nature of inventory problems requires tailored solutions. In response, we propose Deep Controlled Learning (DCL), a new DRL algorithm designed for highly stochastic problems. DCL is based on approximate policy iteration and incorporates an efficient simulation mechanism, combining Sequential Halving with Common Random Numbers. Our numerical studies demonstrate that DCL consistently outperforms state-of-the-art heuristics and DRL algorithms across various inventory settings, including lost sales, perishable inventory systems, and inventory systems with random lead times. DCL achieves lower average costs in all test cases while maintaining an optimality gap of no more than 0.2%. Remarkably, this performance is achieved using the same hyperparameter set across all experiments, underscoring the robustness and generalizability of our approach. These findings contribute to the ongoing exploration of tailored DRL algorithms for inventory management, providing a foundation for further research and practical application in this area.
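The abstract names the two ingredients of DCL's simulation mechanism: Sequential Halving to allocate simulation budget across candidate actions, and Common Random Numbers (CRN) to compare those candidates on shared noise. The Python sketch below illustrates the general idea only; the lost-sales rollout in `simulate_cost`, its cost parameters, and the budget-splitting rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_cost(action, state, rng, horizon=50):
    """One simulated cost replication for ordering `action` units in `state`.

    Illustrative lost-sales dynamics (Poisson demand, zero lead time, a fixed
    placeholder reorder rule); this stands in for the paper's rollout simulator.
    """
    inventory = state + action
    total_cost = 0.0
    for _ in range(horizon):
        demand = rng.poisson(5.0)
        lost = max(demand - inventory, 0)            # unmet demand is lost
        inventory = max(inventory - demand, 0)
        total_cost += 1.0 * inventory + 4.0 * lost   # holding + lost-sales penalty
        inventory += 5                               # placeholder base-policy reorder
    return total_cost

def sequential_halving_crn(state, actions, total_budget=1024, seed=0):
    """Select the lowest-estimated-cost action via Sequential Halving,
    evaluating all surviving actions on Common Random Numbers."""
    survivors = list(actions)
    estimates = {a: [] for a in survivors}
    n_rounds = max(1, int(np.ceil(np.log2(len(survivors)))))
    replication = 0
    for _ in range(n_rounds):
        reps = max(1, total_budget // (len(survivors) * n_rounds))
        for _ in range(reps):
            replication += 1
            for a in survivors:
                # CRN: the same seed (hence the same demand path) for every action,
                # so candidates are compared on identical noise.
                rng = np.random.default_rng(seed + replication)
                estimates[a].append(simulate_cost(a, state, rng))
        survivors.sort(key=lambda a: np.mean(estimates[a]))
        survivors = survivors[: max(1, len(survivors) // 2)]  # keep the better half
    return survivors[0]

best_order = sequential_halving_crn(state=3, actions=range(0, 11))
print("selected order quantity:", best_order)
```

Because every surviving action is evaluated on the same simulated demand path within a replication, the between-action noise cancels in the comparison, which is what lets Sequential Halving discard the worse half of the candidates with relatively few replications.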
Source journal: European Journal of Operational Research (Operations Research & Management Science)
CiteScore: 11.90
Self-citation rate: 9.40%
Articles published: 786
Average review time: 8.2 months
About the journal: The European Journal of Operational Research (EJOR) publishes high quality, original papers that contribute to the methodology of operational research (OR) and to the practice of decision making.
Latest articles in this journal:
- Recent developments in location-routing problems
- Super-efficiency in piecewise Cobb-Douglas technology with flexible endogenous direction
- Increasing competitiveness by imbalanced groups: The example of the 48-team FIFA World Cup
- A hybrid multi-layered ensemble model based on heterogeneous information network for small and medium-sized enterprise default prediction
- A global Malmquist productivity index of athletics performance in Olympic games