{"title":"利用深度强化学习对加密货币交易进行在线概率知识提炼","authors":"Vasileios Moustakidis , Nikolaos Passalis , Anastasios Tefas","doi":"10.1016/j.patrec.2024.10.005","DOIUrl":null,"url":null,"abstract":"<div><div>Leveraging Deep Reinforcement Learning (DRL) for training agents for financial trading has gained significant attention in recent years. However, training these agents in noisy financial environments remains challenging and unstable, significantly impacting their performance as trading agents, as the recent literature has also showcased. This paper introduces a novel distillation method for DRL agents, aiming to improve the training stability of DRL agents. The proposed method transfers knowledge from a teacher ensemble to a student model, incorporating both the action probability distribution knowledge from the output layer, as well as the knowledge from the intermediate layers of the teacher’s network. Furthermore, the proposed method also works in an online fashion, allowing for eliminating the separate teacher training process typically involved in many DRL distillation pipelines, simplifying the distillation process. The proposed method is extensively evaluated on a large-scale cryptocurrency trading setup, demonstrating its ability to both lead to significant improvements in trading accuracy and obtained profit, as well as increase the stability of the training process.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"186 ","pages":"Pages 243-249"},"PeriodicalIF":3.9000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Online probabilistic knowledge distillation on cryptocurrency trading using Deep Reinforcement Learning\",\"authors\":\"Vasileios Moustakidis , Nikolaos Passalis , Anastasios Tefas\",\"doi\":\"10.1016/j.patrec.2024.10.005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Leveraging Deep Reinforcement Learning (DRL) for training agents for financial trading has gained significant attention in recent years. However, training these agents in noisy financial environments remains challenging and unstable, significantly impacting their performance as trading agents, as the recent literature has also showcased. This paper introduces a novel distillation method for DRL agents, aiming to improve the training stability of DRL agents. The proposed method transfers knowledge from a teacher ensemble to a student model, incorporating both the action probability distribution knowledge from the output layer, as well as the knowledge from the intermediate layers of the teacher’s network. Furthermore, the proposed method also works in an online fashion, allowing for eliminating the separate teacher training process typically involved in many DRL distillation pipelines, simplifying the distillation process. 
The proposed method is extensively evaluated on a large-scale cryptocurrency trading setup, demonstrating its ability to both lead to significant improvements in trading accuracy and obtained profit, as well as increase the stability of the training process.</div></div>\",\"PeriodicalId\":54638,\"journal\":{\"name\":\"Pattern Recognition Letters\",\"volume\":\"186 \",\"pages\":\"Pages 243-249\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pattern Recognition Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167865524002939\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865524002939","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Online probabilistic knowledge distillation on cryptocurrency trading using Deep Reinforcement Learning
Leveraging Deep Reinforcement Learning (DRL) to train agents for financial trading has gained significant attention in recent years. However, as the recent literature has also shown, training these agents in noisy financial environments remains challenging and unstable, which significantly degrades their performance as trading agents. This paper introduces a novel distillation method for DRL agents that aims to improve the stability of their training. The proposed method transfers knowledge from a teacher ensemble to a student model, incorporating both the action probability distribution knowledge from the output layer and the knowledge from the intermediate layers of the teacher network. Furthermore, the proposed method works in an online fashion, eliminating the separate teacher training process typically involved in many DRL distillation pipelines and thus simplifying distillation. The proposed method is extensively evaluated on a large-scale cryptocurrency trading setup, demonstrating that it both leads to significant improvements in trading accuracy and obtained profit and increases the stability of the training process.
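To make the described pipeline more concrete, the sketch below illustrates one plausible form of the combined distillation objective: a KL-divergence term between the softened action probability distributions of the teacher ensemble and the student (output-layer knowledge), plus a term matching intermediate representations (intermediate-layer knowledge). This is a minimal sketch under stated assumptions, not the paper's exact method; the function name distillation_loss, the mean-squared-error feature term, and the temperature and weighting hyperparameters are illustrative choices, whereas the paper uses a probabilistic formulation for the intermediate layers.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits,
                          student_feats, teacher_feats,
                          temperature=2.0, alpha=1.0, beta=1.0):
        # Output-layer knowledge: KL divergence between the softened action
        # probability distributions of the (averaged) teacher ensemble and the student.
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        kl_term = F.kl_div(student_log_probs, teacher_probs,
                           reduction="batchmean") * (temperature ** 2)

        # Intermediate-layer knowledge: a simple mean-squared error between hidden
        # representations stands in here for the paper's probabilistic matching.
        feature_term = F.mse_loss(student_feats, teacher_feats)

        return alpha * kl_term + beta * feature_term

    # Hypothetical usage: logits over trading actions (e.g., long/short/exit) and
    # intermediate features from the teacher ensemble and the student network.
    s_logits, t_logits = torch.randn(32, 3), torch.randn(32, 3)
    s_feats, t_feats = torch.randn(32, 64), torch.randn(32, 64)
    loss = distillation_loss(s_logits, t_logits, s_feats, t_feats)

In an online setting of this kind, such an auxiliary loss would presumably be added to the DRL training objective at each update, with the teacher ensemble trained concurrently rather than in a separate preliminary stage.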
Journal introduction:
Pattern Recognition Letters aims at the rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.