Deep reinforcement learning with positional context for intraday trading

Sven Goluža, Tomislav Kovačević, Tessa Bauman, Zvonko Kostanjčar
DOI: arxiv-2406.08013 (https://doi.org/arxiv-2406.08013)
Journal: arXiv - QuantFin - Trading and Market Microstructure
Publication date: 2024-06-12
Citation count: 0

Abstract

Deep reinforcement learning (DRL) is a well-suited approach to financial decision-making, where an agent makes decisions based on a trading strategy developed from market observations. Existing DRL intraday trading strategies mainly use price-based features to construct the state space. They neglect the contextual information related to the strategy's position, which is an important aspect given the sequential nature of intraday trading. In this study, we propose a novel DRL model for intraday trading that introduces positional features encapsulating this contextual information into its sparse state space. The model is evaluated over an extended period of almost a decade and across various assets, including commodities and foreign exchange securities, taking transaction costs into account. The results show notable performance in terms of profitability and risk-adjusted metrics. The feature importance results show that each feature incorporating contextual information contributes to the overall performance of the model. Additionally, through an exploration of the agent's intraday trading activity, we unveil patterns that substantiate the effectiveness of our proposed model.
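To illustrate the idea of augmenting a price-based state space with positional context, the following is a minimal sketch. The specific features (lookback log returns, current position, unrealized return, normalized holding time) are assumptions for illustration; the abstract does not specify the paper's exact feature set.

```python
import numpy as np

def build_state(prices, position, entry_price, bars_held, window=5):
    """Sketch of a state vector combining price-based features with
    positional context. Hypothetical feature set, not the paper's."""
    prices = np.asarray(prices, dtype=float)
    # Price-based features: log returns over the lookback window.
    returns = np.diff(np.log(prices[-(window + 1):]))
    # Positional features: current position (-1/0/+1), unrealized
    # return of the open trade, and holding time normalized by the
    # number of minute bars in a US trading day (390).
    unrealized = 0.0
    if position != 0:
        unrealized = position * (prices[-1] / entry_price - 1.0)
    positional = np.array([position, unrealized, bars_held / 390.0])
    return np.concatenate([returns, positional])

# Example: long one unit entered at 101, held for 20 bars.
state = build_state([100, 101, 100.5, 102, 101.5, 103],
                    position=1, entry_price=101.0, bars_held=20)
```

Without the last three entries, two states with identical recent prices but different open positions would be indistinguishable to the agent, which is exactly the gap the positional features close.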