{"title":"数据时间旅行与一致的市场决策:利用匿名数据驯服多代理系统中的强化学习","authors":"Vincent Ragel, Damien Challet","doi":"arxiv-2408.02322","DOIUrl":null,"url":null,"abstract":"Reinforcement learning works best when the impact of the agent's actions on\nits environment can be perfectly simulated or fully appraised from available\ndata. Some systems are however both hard to simulate and very sensitive to\nsmall perturbations. An additional difficulty arises when an RL agent must\nlearn to be part of a multi-agent system using only anonymous data, which makes\nit impossible to infer the state of each agent, thus to use data directly.\nTypical examples are competitive systems without agent-resolved data such as\nfinancial markets. We introduce consistent data time travel for offline RL as a\nremedy for these problems: instead of using historical data in a sequential\nway, we argue that one needs to perform time travel in historical data, i.e.,\nto adjust the time index so that both the past state and the influence of the\nRL agent's action on the state coincide with real data. This both alleviates\nthe need to resort to imperfect models and consistently accounts for both the\nimmediate and long-term reactions of the system when using anonymous historical\ndata. We apply this idea to market making in limit order books, a notoriously\ndifficult task for RL; it turns out that the gain of the agent is significantly\nhigher with data time travel than with naive sequential data, which suggests\nthat the difficulty of this task for RL may have been overestimated.","PeriodicalId":501478,"journal":{"name":"arXiv - QuantFin - Trading and Market Microstructure","volume":"45 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Data time travel and consistent market making: taming reinforcement learning in multi-agent systems with anonymous data\",\"authors\":\"Vincent Ragel, Damien Challet\",\"doi\":\"arxiv-2408.02322\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Reinforcement learning works best when the impact of the agent's actions on\\nits environment can be perfectly simulated or fully appraised from available\\ndata. Some systems are however both hard to simulate and very sensitive to\\nsmall perturbations. An additional difficulty arises when an RL agent must\\nlearn to be part of a multi-agent system using only anonymous data, which makes\\nit impossible to infer the state of each agent, thus to use data directly.\\nTypical examples are competitive systems without agent-resolved data such as\\nfinancial markets. We introduce consistent data time travel for offline RL as a\\nremedy for these problems: instead of using historical data in a sequential\\nway, we argue that one needs to perform time travel in historical data, i.e.,\\nto adjust the time index so that both the past state and the influence of the\\nRL agent's action on the state coincide with real data. This both alleviates\\nthe need to resort to imperfect models and consistently accounts for both the\\nimmediate and long-term reactions of the system when using anonymous historical\\ndata. 
We apply this idea to market making in limit order books, a notoriously\\ndifficult task for RL; it turns out that the gain of the agent is significantly\\nhigher with data time travel than with naive sequential data, which suggests\\nthat the difficulty of this task for RL may have been overestimated.\",\"PeriodicalId\":501478,\"journal\":{\"name\":\"arXiv - QuantFin - Trading and Market Microstructure\",\"volume\":\"45 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuantFin - Trading and Market Microstructure\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.02322\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Trading and Market Microstructure","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.02322","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Data time travel and consistent market making: taming reinforcement learning in multi-agent systems with anonymous data
Reinforcement learning works best when the impact of the agent's actions on its environment can be perfectly simulated or fully appraised from available data. Some systems, however, are both hard to simulate and very sensitive to small perturbations. An additional difficulty arises when an RL agent must learn to be part of a multi-agent system from anonymous data alone, which makes it impossible to infer the state of each agent and hence to use the data directly. Typical examples are competitive systems without agent-resolved data, such as financial markets. We introduce consistent data time travel for offline RL as a remedy for these problems: instead of replaying historical data sequentially, we argue that one needs to perform time travel in the historical data, i.e., to adjust the time index so that both the past state and the influence of the RL agent's action on that state coincide with real data. This both alleviates the need to resort to imperfect models and consistently accounts for the immediate and long-term reactions of the system when anonymous historical data are used. We apply this idea to market making in limit order books, a notoriously difficult task for RL; it turns out that the agent's gain is significantly higher with data time travel than with naive sequential data, which suggests that the difficulty of this task for RL may have been overestimated.
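To make the time-travel step concrete, the sketch below shows one way an offline replay environment could implement the idea in Python. It is a minimal illustration under stated assumptions, not the paper's construction: the coarse state encoding, the exact-match rule, and all names (TimeTravelReplayEnv, _apply_action) are hypothetical. After the agent acts, the environment searches the anonymous historical record for the next time index whose recorded state coincides with the post-action state and resumes playback from there, so that the subsequent reaction in the data is consistent with what the agent actually did.

import numpy as np

class TimeTravelReplayEnv:
    """Offline environment that replays anonymous historical states.

    Instead of stepping through history sequentially, each step "time
    travels" to a historical index whose recorded state is consistent
    with the state implied by the agent's action. The state encoding
    and matching rule here are illustrative assumptions.
    """

    def __init__(self, states: np.ndarray, rewards: np.ndarray):
        self.states = states    # shape (T, d): one coarse (discretised) state per time step
        self.rewards = rewards  # shape (T,):   reward realised at each time step
        self.t = 0

    def reset(self) -> np.ndarray:
        self.t = 0
        return self.states[self.t]

    def _apply_action(self, state: np.ndarray, action: int) -> np.ndarray:
        # Hypothetical effect of the agent's action on the coarse state,
        # e.g. adding one unit of volume at the quoted level (assumption).
        post = state.copy()
        post[action] += 1
        return post

    def step(self, action: int):
        post_state = self._apply_action(self.states[self.t], action)
        # Time travel: find the next historical index whose recorded state
        # coincides with the post-action state; search forward first, then
        # fall back to any earlier consistent index.
        matches = np.flatnonzero((self.states == post_state).all(axis=1))
        future = matches[matches > self.t]
        if future.size > 0:
            self.t = int(future[0])
        elif matches.size > 0:
            self.t = int(matches[0])
        else:
            # No consistent state found: fall back to a sequential step.
            self.t = min(self.t + 1, len(self.states) - 1)
        done = self.t >= len(self.states) - 1
        return self.states[self.t], float(self.rewards[self.t]), done

A naive sequential baseline would simply advance the index by one regardless of the action; the matching step is what keeps both the immediate and the longer-term reactions observed in the data consistent with the agent's own order placement.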