{"title":"QLBS: Q-Learner in the Black-Scholes(-Merton) Worlds","authors":"Igor Halperin","doi":"10.3905/jod.2020.1.108","DOIUrl":null,"url":null,"abstract":"This article presents a discrete-time option pricing model that is rooted in reinforcement learning (RL), and more specifically in the famous Q-Learning method of RL. We construct a risk-adjusted Markov Decision Process for a discrete-time version of the classical Black-Scholes-Merton (BSM) model, where the option price is an optimal Q-function, while the optimal hedge is a second argument of this optimal Q-function, so that both the price and hedge are parts of the same formula. Pricing is done by learning to dynamically optimize risk-adjusted returns for an option replicating portfolio, as in Markowitz portfolio theory. Using Q-Learning and related methods, once created in a parametric setting, the model can go model-free and learn to price and hedge an option directly from data, without an explicit model of the world. This suggests that RL may provide efficient data-driven and model-free methods for the optimal pricing and hedging of options. Once we depart from the academic continuous-time limit, and vice versa, option pricing methods developed in Mathematical Finance may be viewed as special cases of model-based reinforcement learning. Further, due to the simplicity and tractability of our model, which only needs basic linear algebra (plus Monte Carlo simulation, if we work with synthetic data), and its close relationship to the original BSM model, we suggest that our model could be used in the benchmarking of different RL algorithms for financial trading applications. TOPICS: Derivatives, options Key Findings • Reinforcement learning (RL) is the most natural way for pricing and hedging of options that relies directly on data and not on a specific model of asset pricing. • The discrete-time RL approach to option pricing generalizes classical continuous-time methods; enables tracking mis-hedging risk, which disappears in the formal continuous-time limit; and provides a consistent framework for using options for both hedging and speculation. • A simple quadratic reward function, which presents a minimal extension of the classical Black-Scholes framework when combined with the Q-learning method of RL, gives rise to a particularly simple computational scheme where option pricing and hedging are semianalytical, as they amount to multiple uses of a conventional least-squares regression.","PeriodicalId":501089,"journal":{"name":"The Journal of Derivatives","volume":"95 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Journal of Derivatives","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3905/jod.2020.1.108","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This article presents a discrete-time option pricing model that is rooted in reinforcement learning (RL), and more specifically in the famous Q-Learning method of RL. We construct a risk-adjusted Markov Decision Process for a discrete-time version of the classical Black-Scholes-Merton (BSM) model, where the option price is an optimal Q-function, while the optimal hedge is a second argument of this optimal Q-function, so that both the price and hedge are parts of the same formula. Pricing is done by learning to dynamically optimize risk-adjusted returns for an option-replicating portfolio, as in Markowitz portfolio theory. Using Q-Learning and related methods, the model, once created in a parametric setting, can go model-free and learn to price and hedge an option directly from data, without an explicit model of the world. This suggests that RL may provide efficient data-driven and model-free methods for the optimal pricing and hedging of options once we depart from the academic continuous-time limit; conversely, option pricing methods developed in Mathematical Finance may be viewed as special cases of model-based reinforcement learning. Further, due to the simplicity and tractability of our model, which needs only basic linear algebra (plus Monte Carlo simulation, if we work with synthetic data), and its close relationship to the original BSM model, we suggest that it could be used for benchmarking different RL algorithms for financial trading applications.

TOPICS: Derivatives, options

Key Findings
• Reinforcement learning (RL) is the most natural way of pricing and hedging options that relies directly on data rather than on a specific model of asset pricing.
• The discrete-time RL approach to option pricing generalizes classical continuous-time methods; enables tracking of mis-hedging risk, which disappears in the formal continuous-time limit; and provides a consistent framework for using options for both hedging and speculation.
• A simple quadratic reward function, which presents a minimal extension of the classical Black-Scholes framework when combined with the Q-learning method of RL, gives rise to a particularly simple computational scheme in which option pricing and hedging are semianalytical, as they amount to multiple uses of a conventional least-squares regression (a minimal sketch of this flavor of computation follows below).
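
To make the last key finding concrete, the sketch below illustrates a backward recursion over Monte Carlo paths in which each time step amounts to a conventional least-squares regression. It is not the paper's exact QLBS scheme: it takes the zero risk-aversion (pure pricing) limit, drops the Q-learning machinery, and uses an illustrative polynomial feature basis in the stock price; all parameter values and names (features, n_paths, and so on) are assumptions made for the example.

```python
# Minimal sketch (assumed parameters): price a European put in a discrete-time
# BSM world by rolling a replicating portfolio backward over Monte Carlo paths,
# with a cross-sectional least-squares regression at every step.
import numpy as np
from math import log, sqrt, exp, erf

rng = np.random.default_rng(0)

# Illustrative BSM world and contract parameters
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_steps, n_paths = 24, 20_000
dt = T / n_steps
gamma = np.exp(-r * dt)            # one-step discount factor

# Simulate geometric Brownian motion paths, S[t, path]
z = rng.standard_normal((n_steps, n_paths))
log_ret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.vstack([np.zeros(n_paths), np.cumsum(log_ret, axis=0)]))

def features(s):
    """Polynomial basis in the normalized stock price, used to make the
    regression coefficients state-dependent. An illustrative choice."""
    x = s / S0
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

# Terminal condition: the replicating portfolio equals the put payoff
Pi = np.maximum(K - S[-1], 0.0)

# Backward recursion: a least-squares hedge at each step, then one-step rollback
for t in range(n_steps - 1, -1, -1):
    dS = gamma * S[t + 1] - S[t]          # discounted one-step price change
    X = features(S[t])
    # Quadratic-hedging choice a_t(S_t) ~ E_t[dS * Pi_{t+1}] / E_t[dS^2],
    # with both conditional expectations fitted by least squares on the basis
    num = X @ np.linalg.lstsq(X, gamma * Pi * dS, rcond=None)[0]
    den = np.maximum(X @ np.linalg.lstsq(X, dS**2, rcond=None)[0], 1e-8)
    a_t = num / den
    # Self-financing rollback of the replicating portfolio (zero risk-aversion)
    Pi = gamma * Pi - a_t * dS

mc_price = Pi.mean()

# Closed-form BSM put price for comparison
norm_cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bsm_put = K * exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)

print(f"Backward-recursion price: {mc_price:.3f}   BSM closed form: {bsm_put:.3f}")
```

Under these assumptions the regression-based rollback reproduces the closed-form BSM put price up to Monte Carlo error; the full QLBS model goes further by keeping the risk-adjustment term, so that the discrete-time mis-hedging risk mentioned in the second key finding is priced rather than ignored.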