Market Making under Order Stacking Framework: A Deep Reinforcement Learning Approach

G. Chung, Munki Chung, Yongjae Lee, W. Kim
{"title":"Market Making under Order Stacking Framework: A Deep Reinforcement Learning Approach","authors":"G. Chung, Munki Chung, Yongjae Lee, W. Kim","doi":"10.1145/3533271.3561789","DOIUrl":null,"url":null,"abstract":"Market making strategy is one of the most popular high frequency trading strategies, where a market maker continuously quotes on both bid and ask side of the limit order book to profit from capturing bid-ask spread and to provide liquidity to the market. A market maker should consider three types of risk: 1) inventory risk, 2) adverse selection risk, and 3) non-execution risk. While there have been a lot of studies on market making via deep reinforcement learning, most of them focus on the first risk. However, in highly competitive markets, the latter two risks are very important to make stable profit from market making. For better control of the latter two risks, it is important to reserve good queue position of their resting limit orders. For this purpose, practitioners frequently adopt order stacking framework where their limit orders are quoted at multiple price levels beyond the best limit price. To the best of our knowledge, there have been no studies that adopt order stacking framework for market making. In this regard, we develop a deep reinforcement learning model for market making under order stacking framework. We use a modified state representation to efficiently encode the queue positions of the resting limit orders. We conduct comprehensive ablation study to show that by utilizing deep reinforcement learning, a market making agent under order stacking framework successfully learns to improve the PL while reducing various risks. For the training and testing of our model, we use complete limit order book data of KOSPI200 Index Futures from November 1, 2019 to January 31, 2020 which is comprised of 61 trading days.","PeriodicalId":134888,"journal":{"name":"Proceedings of the Third ACM International Conference on AI in Finance","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Third ACM International Conference on AI in Finance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3533271.3561789","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Market making is one of the most popular high-frequency trading strategies: a market maker continuously quotes on both the bid and ask sides of the limit order book to profit from capturing the bid-ask spread while providing liquidity to the market. A market maker must consider three types of risk: 1) inventory risk, 2) adverse selection risk, and 3) non-execution risk. While there have been many studies on market making via deep reinforcement learning, most of them focus on the first risk. However, in highly competitive markets, controlling the latter two risks is essential for earning a stable profit from market making, and doing so requires securing good queue positions for the market maker's resting limit orders. For this purpose, practitioners frequently adopt an order stacking framework in which limit orders are quoted at multiple price levels beyond the best limit price. To the best of our knowledge, no prior study has adopted an order stacking framework for market making. In this regard, we develop a deep reinforcement learning model for market making under an order stacking framework. We use a modified state representation that efficiently encodes the queue positions of the resting limit orders. We conduct a comprehensive ablation study to show that, by utilizing deep reinforcement learning, a market making agent under the order stacking framework successfully learns to improve profit and loss (PL) while reducing various risks. For training and testing our model, we use complete limit order book data of KOSPI200 Index Futures from November 1, 2019 to January 31, 2020, comprising 61 trading days.
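To make the order stacking idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: a hypothetical agent maintains resting limit orders at several price levels on each side of the book and encodes, for each order, its queue position as a normalized feature suitable for an RL state vector. The function names, the number of levels, the tick size, and the normalization scheme are all assumptions made here for illustration.

```python
# Illustrative sketch of an order-stacking quote grid and a queue-position
# state encoding. All names, the level count, the tick size, and the
# normalization are assumptions for illustration, not the paper's method.
from dataclasses import dataclass
from typing import Dict, List

TICK_SIZE = 0.05   # assumed tick size (KOSPI200 futures quote in 0.05 steps)
NUM_LEVELS = 3     # quote this many price levels on each side (assumption)

@dataclass
class RestingOrder:
    side: str          # "bid" or "ask"
    price: float
    queue_ahead: int   # contracts resting ahead of us at this price
    level_volume: int  # total contracts resting at this price level

def stacked_quote_prices(best_bid: float, best_ask: float) -> Dict[str, List[float]]:
    """Build a quote grid at NUM_LEVELS prices per side, starting at the
    best limit price and stepping one tick away from the touch per level."""
    bids = [round(best_bid - i * TICK_SIZE, 2) for i in range(NUM_LEVELS)]
    asks = [round(best_ask + i * TICK_SIZE, 2) for i in range(NUM_LEVELS)]
    return {"bid": bids, "ask": asks}

def encode_queue_positions(orders: List[RestingOrder]) -> List[float]:
    """Encode each resting order's queue position as a fraction in [0, 1]:
    0.0 means front of the queue (fills soonest, lowest non-execution risk),
    1.0 means the very back of the queue."""
    features = []
    for o in orders:
        frac = o.queue_ahead / o.level_volume if o.level_volume > 0 else 0.0
        features.append(min(max(frac, 0.0), 1.0))
    return features

# Usage: two stacked bid orders, one near the front and one near the back.
grid = stacked_quote_prices(best_bid=330.00, best_ask=330.05)
orders = [
    RestingOrder("bid", grid["bid"][0], queue_ahead=12, level_volume=120),
    RestingOrder("bid", grid["bid"][1], queue_ahead=95, level_volume=100),
]
print(grid)                            # {'bid': [330.0, 329.95, 329.9], 'ask': [330.05, 330.1, 330.15]}
print(encode_queue_positions(orders))  # [0.1, 0.95]
```

A normalized queue fraction like this captures why stacking matters: an order placed early at a deeper level can sit near the front of its queue, so when the price moves to that level the order fills quickly, reducing both non-execution risk and exposure to adverse selection at the touch.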