Deep reinforcement learning with positional context for intraday trading
Sven Goluža, Tomislav Kovačević, Tessa Bauman, Zvonko Kostanjčar
arXiv:2406.08013 (arXiv - QuantFin - Trading and Market Microstructure), 2024-06-12
Abstract
Deep reinforcement learning (DRL) is a well-suited approach to financial decision-making, where an agent makes decisions based on its trading strategy developed from market observations. Existing DRL intraday trading strategies mainly use price-based features to construct the state space and neglect contextual information related to the position of the strategy, an important aspect given the sequential nature of intraday trading. In this study, we propose a novel DRL model for intraday trading that introduces positional features, which encapsulate this contextual information, into its sparse state space. The model is evaluated over an extended period of almost a decade and across various assets, including commodities and foreign exchange securities, with transaction costs taken into account. The results show notable performance in terms of profitability and risk-adjusted metrics. Feature importance results show that each feature incorporating contextual information contributes to the overall performance of the model. Additionally, by exploring the agent's intraday trading activity, we unveil patterns that substantiate the effectiveness of the proposed model.
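To make the idea of positional context concrete, below is a minimal sketch of how price-based market features might be augmented with positional features before being passed to a DRL agent. The abstract does not specify which positional features the authors use, so the ones shown here (current position, unrealized return since entry, normalized holding time, and normalized time to session close) are illustrative assumptions, not the paper's exact state definition.

```python
# Hypothetical sketch: augmenting a price-based state with positional-context
# features for an intraday DRL agent. The specific features below are assumed
# for illustration; they are not taken from the paper itself.
import numpy as np


def build_state(price_features: np.ndarray,
                position: int,          # -1 short, 0 flat, +1 long (assumed encoding)
                entry_price: float,
                current_price: float,
                steps_held: int,
                steps_left: int,
                session_length: int) -> np.ndarray:
    """Concatenate market observations with positional-context features."""
    unrealized_return = 0.0
    if position != 0 and entry_price > 0:
        unrealized_return = position * (current_price - entry_price) / entry_price
    positional = np.array([
        float(position),                 # current exposure of the strategy
        unrealized_return,               # open-position return since entry
        steps_held / session_length,     # normalized holding time
        steps_left / session_length,     # normalized time to session close
    ])
    return np.concatenate([price_features, positional])


# Example usage with dummy price-based features (e.g., recent returns).
state = build_state(price_features=np.array([0.001, -0.002, 0.0005]),
                    position=1, entry_price=100.0, current_price=100.4,
                    steps_held=12, steps_left=48, session_length=60)
print(state.shape)  # (7,)
```

The point of such a construction is that the agent can condition its actions not only on market dynamics but also on where it currently stands within a trade and within the trading session, which is the kind of contextual information the abstract argues is missing from purely price-based state spaces.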