Synthetic Data Augmentation for Deep Reinforcement Learning in Financial Trading

Chunli Liu, Carmine Ventre, M. Polukarov
{"title":"Synthetic Data Augmentation for Deep Reinforcement Learning in Financial Trading","authors":"Chunli Liu, Carmine Ventre, M. Polukarov","doi":"10.1145/3533271.3561704","DOIUrl":null,"url":null,"abstract":"Despite the eye-catching advances in the area, deploying Deep Reinforcement Learning (DRL) in financial markets remains a challenging task. Model-based techniques often fall short due to epistemic uncertainty, whereas model-free approaches require large amount of data that is often unavailable. Motivated by the recent research on the generation of realistic synthetic financial data, we explore the possibility of using augmented synthetic datasets for training DRL agents without direct access to the real financial data. With our novel approach, termed synthetic data augmented reinforcement learning for trading (SDARL4T), we test whether the performance of DRL for financial trading can be enhanced, by attending to both profitability and generalization abilities. We show that DRL agents trained with SDARL4T make a profit which is comparable, and often much larger, than that obtained by the agents trained on real data, while guaranteeing similar robustness. These results support the adoption of our framework in real-world uses of DRL for trading.","PeriodicalId":134888,"journal":{"name":"Proceedings of the Third ACM International Conference on AI in Finance","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third ACM International Conference on AI in Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3533271.3561704","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Despite the eye-catching advances in the area, deploying Deep Reinforcement Learning (DRL) in financial markets remains a challenging task. Model-based techniques often fall short due to epistemic uncertainty, whereas model-free approaches require large amounts of data that are often unavailable. Motivated by recent research on the generation of realistic synthetic financial data, we explore the possibility of using augmented synthetic datasets for training DRL agents without direct access to real financial data. With our novel approach, termed synthetic data augmented reinforcement learning for trading (SDARL4T), we test whether the performance of DRL for financial trading can be enhanced, attending to both profitability and generalization ability. We show that DRL agents trained with SDARL4T make a profit that is comparable to, and often much larger than, that obtained by agents trained on real data, while guaranteeing similar robustness. These results support the adoption of our framework in real-world uses of DRL for trading.
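
To make the core idea concrete, the sketch below trains a trading agent only on synthetic price paths and then evaluates it on a held-out series, mirroring the "no direct access to real data" setting described above. The abstract does not specify the paper's data generator or DRL architecture, so this is a minimal stand-in under stated assumptions: a geometric-Brownian-motion generator and a tabular Q-learning trader replace the realistic synthetic-data model and deep agent, and all function names (synthetic_paths, train_on_synthetic, evaluate) are hypothetical.

```python
# Minimal sketch of the train-on-synthetic / evaluate-out-of-sample idea.
# The GBM generator and tabular Q-learning trader are illustrative stand-ins,
# not the generator or DRL agent used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_paths(n_paths, n_steps, s0=100.0, mu=0.0002, sigma=0.01):
    """Generate synthetic price paths with geometric Brownian motion."""
    shocks = rng.normal(mu - 0.5 * sigma**2, sigma, size=(n_paths, n_steps))
    return s0 * np.exp(np.cumsum(shocks, axis=1))

def discretize(ret, n_bins=7, clip=0.02):
    """Map a one-step return to a small discrete state index."""
    edges = np.linspace(-clip, clip, n_bins - 1)
    return int(np.digitize(ret, edges))

ACTIONS = np.array([-1, 0, 1])  # short, flat, long

def train_on_synthetic(n_paths=200, n_steps=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Q-learning on synthetic data only; returns the learned Q-table."""
    q = np.zeros((7, len(ACTIONS)))
    for path in synthetic_paths(n_paths, n_steps):
        rets = np.diff(path) / path[:-1]
        for t in range(1, len(rets)):
            s = discretize(rets[t - 1])
            a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(q[s].argmax())
            reward = ACTIONS[a] * rets[t]  # one-step P&L of the chosen position
            s_next = discretize(rets[t])
            q[s, a] += alpha * (reward + gamma * q[s_next].max() - q[s, a])
    return q

def evaluate(q, prices):
    """Run the greedy policy on an out-of-sample price series; return total P&L."""
    rets = np.diff(prices) / prices[:-1]
    pnl = 0.0
    for t in range(1, len(rets)):
        a = int(q[discretize(rets[t - 1])].argmax())
        pnl += ACTIONS[a] * rets[t]
    return pnl

if __name__ == "__main__":
    q_table = train_on_synthetic()
    # Stand-in for the real evaluation data; in practice this would be a market series.
    test_prices = synthetic_paths(1, 1000)[0]
    print(f"out-of-sample P&L of synthetic-trained agent: {evaluate(q_table, test_prices):.4f}")
```

In the paper's setting, the evaluation series would be real market data and the agent a deep network, so this sketch only illustrates the training pipeline, not the reported profitability or robustness results.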