Tackling Decision Processes with Non-Cumulative Objectives using Reinforcement Learning
Maximilian Nägele, Jan Olle, Thomas Fösel, Remmy Zen, Florian Marquardt
arXiv:2405.13609, published 2024-05-22 (arXiv - QuantFin - Computational Finance)
Abstract
Markov decision processes (MDPs) are used to model a wide variety of
applications, ranging from game playing and robotics to finance. Their optimal
policy typically maximizes the expected sum of rewards given at each step of
the decision process. However, a large class of problems does not fit
straightforwardly into this framework: non-cumulative Markov decision processes
(NCMDPs), in which the quantity to be maximized is not the expected sum of
rewards but the expected value of an arbitrary function of the rewards.
Example functions include the maximum of the rewards and their mean divided by
their standard deviation.
In this work, we introduce a general mapping of NCMDPs to standard MDPs. This
allows all techniques developed to find optimal policies for MDPs, such as
reinforcement learning or dynamic programming, to be directly applied to the
larger class of NCMDPs. Focusing on reinforcement learning, we show
applications in a diverse set of tasks, including classical control, portfolio
optimization in finance, and discrete optimization problems. Using our
approach, we improve both final performance and training time compared to
relying on standard MDPs.
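
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how one instance of an NCMDP, maximizing the maximum of the rewards along a trajectory, can be rewritten as a standard MDP: the state is augmented with the running maximum, and the per-step reward is redefined as the increase of that running maximum, so the new rewards telescope and sum to the overall maximum. The environment interface (reset/step) and the nonnegative-reward assumption behind the default initial maximum are illustrative assumptions, not details taken from the paper.

```python
class MaxObjectiveWrapper:
    """Wraps an environment so that the cumulative reward of the wrapped
    environment equals the maximum single-step reward of the original one.

    Assumes a gym-like interface: reset() -> obs, step(a) -> (obs, r, done, info).
    """

    def __init__(self, env, initial_max=0.0):
        # initial_max should be a lower bound on the rewards; the default 0.0
        # assumes nonnegative rewards (an illustrative assumption).
        self.env = env
        self.initial_max = initial_max
        self.running_max = initial_max

    def reset(self):
        self.running_max = self.initial_max
        obs = self.env.reset()
        # Augment the observation with the running maximum so that the
        # wrapped process remains Markovian.
        return (obs, self.running_max)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        new_max = max(self.running_max, reward)
        # Telescoping reward: summing these increments over an episode yields
        # max_t r_t, so a standard RL algorithm maximizing the expected sum
        # now maximizes the expected maximum reward.
        shaped_reward = new_max - self.running_max
        self.running_max = new_max
        return (obs, self.running_max), shaped_reward, done, info
```

Other non-cumulative objectives, such as the mean of the rewards divided by their standard deviation mentioned above, would analogously require augmenting the state with different running statistics (for example, the step count, running sum, and running sum of squares).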