Reinforcement Learning Pair Trading: A Dynamic Scaling approach

Hongshen Yang, Avinash Malik
{"title":"Reinforcement Learning Pair Trading: A Dynamic Scaling approach","authors":"Hongshen Yang, Avinash Malik","doi":"arxiv-2407.16103","DOIUrl":null,"url":null,"abstract":"Cryptocurrency is a cryptography-based digital asset with extremely volatile\nprices. Around $70 billion worth of crypto-currency is traded daily on\nexchanges. Trading crypto-currency is difficult due to the inherent volatility\nof the crypto-market. In this work, we want to test the hypothesis: \"Can\ntechniques from artificial intelligence help with algorithmically trading\ncryptocurrencies?\". In order to address this question, we combine Reinforcement\nLearning (RL) with pair trading. Pair trading is a statistical arbitrage\ntrading technique which exploits the price difference between statistically\ncorrelated assets. We train reinforcement learners to determine when and how to\ntrade pairs of cryptocurrencies. We develop new reward shaping and\nobservation/action spaces for reinforcement learning. We performed experiments\nwith the developed reinforcement learner on pairs of BTC-GBP and BTC-EUR data\nseparated by 1-minute intervals (n = 263,520). The traditional non-RL pair\ntrading technique achieved an annualised profit of 8.33%, while the proposed\nRL-based pair trading technique achieved annualised profits from 9.94% -\n31.53%, depending upon the RL learner. Our results show that RL can\nsignificantly outperform manual and traditional pair trading techniques when\napplied to volatile markets such as cryptocurrencies.","PeriodicalId":501478,"journal":{"name":"arXiv - QuantFin - Trading and Market Microstructure","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Trading and Market Microstructure","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.16103","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Cryptocurrency is a cryptography-based digital asset with extremely volatile prices. Around $70 billion worth of cryptocurrency is traded daily on exchanges. Trading cryptocurrency is difficult due to the inherent volatility of the crypto market. In this work, we test the hypothesis: "Can techniques from artificial intelligence help with algorithmically trading cryptocurrencies?" To address this question, we combine Reinforcement Learning (RL) with pair trading. Pair trading is a statistical arbitrage technique that exploits the price difference between statistically correlated assets. We train reinforcement learners to determine when and how to trade pairs of cryptocurrencies, and we develop new reward shaping and observation/action spaces for reinforcement learning. We performed experiments with the developed reinforcement learner on BTC-GBP and BTC-EUR price data at 1-minute intervals (n = 263,520). The traditional non-RL pair-trading technique achieved an annualised profit of 8.33%, while the proposed RL-based pair-trading technique achieved annualised profits from 9.94% to 31.53%, depending on the RL learner. Our results show that RL can significantly outperform manual and traditional pair-trading techniques when applied to volatile markets such as cryptocurrencies.
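
The baseline referenced in the abstract is a traditional, rule-based pair-trading strategy. As a rough illustration of that general idea (not the paper's actual implementation), the sketch below trades the spread between two correlated price series using a rolling z-score; the hedge-ratio estimation, window length, entry/exit thresholds, and the synthetic stand-in data are all illustrative assumptions.

```python
# Minimal sketch of a traditional (non-RL) pair-trading signal, for illustration only.
# The thresholds, window, hedge-ratio method, and synthetic prices are assumptions,
# not parameters taken from the paper.
import numpy as np

def zscore_spread_signal(price_a, price_b, window=60, entry_z=2.0, exit_z=0.5):
    """Return a position series in {-1, 0, +1} for the spread price_a - beta * price_b.

    +1 = long the spread (buy A, sell B), -1 = short the spread, 0 = flat.
    """
    # Hedge ratio via ordinary least squares on the full sample (a common simplification).
    beta = np.polyfit(price_b, price_a, 1)[0]
    spread = price_a - beta * price_b

    positions = np.zeros(len(spread))
    pos = 0
    for t in range(window, len(spread)):
        hist = spread[t - window:t]
        z = (spread[t] - hist.mean()) / (hist.std() + 1e-12)
        if pos == 0:
            if z > entry_z:
                pos = -1          # spread unusually wide: short A, long B
            elif z < -entry_z:
                pos = +1          # spread unusually narrow: long A, short B
        elif abs(z) < exit_z:
            pos = 0               # spread has reverted toward its mean: close out
        positions[t] = pos
    return positions, spread

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two cointegrated-looking synthetic series standing in for BTC-GBP and BTC-EUR.
    common = np.cumsum(rng.normal(0, 1, 5000)) + 100
    a = common + rng.normal(0, 0.5, 5000)
    b = 0.9 * common + rng.normal(0, 0.5, 5000)
    pos, spread = zscore_spread_signal(a, b)
    pnl = np.sum(pos[:-1] * np.diff(spread))
    print(f"toy spread PnL: {pnl:.2f}")
```

The paper's RL agents replace fixed entry/exit rules like these with learned decisions over custom observation and action spaces and shaped rewards; those details are not given in the abstract and are therefore not reproduced here.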