{"title":"A Deep Reinforcement Learning Approach for Portfolio Management in Non-Short-Selling Market","authors":"Ruidan Su, Chun Chi, Shikui Tu, Lei Xu","doi":"10.1049/2024/5399392","DOIUrl":null,"url":null,"abstract":"<div>\n <p>Reinforcement learning (RL) has been applied to financial portfolio management in recent years. Current studies mostly focus on profit accumulation without much consideration of risk. Some risk-return balanced studies extract features from price and volume data only, which is highly correlated and missing representation of risk features. To tackle these problems, we propose a weight control unit (WCU) to effectively manage the position of portfolio management in different market statuses. A loss penalty term is also designed in the reward function to prevent sharp drawdown during trading. Moreover, stock spatial interrelation representing the correlation between two different stocks is captured by a graph convolution network based on fundamental data. Temporal interrelation is also captured by a temporal convolutional network based on new factors designed with price and volume data. Both spatial and temporal interrelation work for better feature extraction from historical data and also make the model more interpretable. Finally, a deep deterministic policy gradient actor–critic RL is applied to explore optimal policy in portfolio management. We conduct our approach in a challenging non-short-selling market, and the experiment results show that our method outperforms the state-of-the-art methods in both profit and risk criteria. Specifically, with 6.72% improvement on an annualized rate of return, 7.72% decrease in maximum drawdown, and a better annualized Sharpe ratio of 0.112. Also, the loss penalty and WCU provide new aspects for future work in risk control.</p>\n </div>","PeriodicalId":56301,"journal":{"name":"IET Signal Processing","volume":"2024 1","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/5399392","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/2024/5399392","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Reinforcement learning (RL) has been applied to financial portfolio management in recent years. Current studies mostly focus on profit accumulation with little consideration of risk. Some risk–return balanced studies extract features from price and volume data only; these features are highly correlated and lack a representation of risk. To tackle these problems, we propose a weight control unit (WCU) to effectively manage portfolio positions under different market conditions. A loss penalty term is also designed into the reward function to prevent sharp drawdowns during trading. Moreover, stock spatial interrelation, representing the correlation between two different stocks, is captured by a graph convolutional network based on fundamental data, and temporal interrelation is captured by a temporal convolutional network based on new factors designed from price and volume data. Both spatial and temporal interrelations enable better feature extraction from historical data and make the model more interpretable. Finally, a deep deterministic policy gradient (DDPG) actor–critic RL algorithm is applied to explore the optimal policy for portfolio management. We evaluate our approach in a challenging non-short-selling market, and the experimental results show that our method outperforms state-of-the-art methods on both profit and risk criteria: specifically, a 6.72% improvement in annualized rate of return, a 7.72% decrease in maximum drawdown, and an annualized Sharpe ratio better by 0.112. The loss penalty and the WCU also suggest new directions for future work in risk control.
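To make the non-short-selling constraint and the loss-penalty reward concrete, here is a minimal sketch, not the authors' implementation: the `Actor` network, the `reward_with_loss_penalty` function, `penalty_coef`, and all dimensions are hypothetical illustrations of the ideas named in the abstract. A softmax output head keeps every portfolio weight non-negative and summing to one (no short positions), and the reward subtracts a penalty proportional to the current drawdown.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Hypothetical DDPG-style actor: maps a state vector of extracted
    features to portfolio weights. The softmax output keeps all weights
    non-negative and summing to 1, enforcing a non-short-selling
    constraint like the one described in the abstract."""
    def __init__(self, state_dim: int, n_assets: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_assets),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(state), dim=-1)

def reward_with_loss_penalty(log_return: float, drawdown: float,
                             penalty_coef: float = 1.0) -> float:
    """Sketch of a reward shaped with a loss-penalty term: the agent earns
    the period's log return but is penalized in proportion to the current
    drawdown, discouraging sharp losses. `penalty_coef` is a hypothetical
    knob, not a value taken from the paper."""
    return log_return - penalty_coef * max(drawdown, 0.0)

# Example: 10 assets, a 32-dimensional feature state.
actor = Actor(state_dim=32, n_assets=10)
weights = actor(torch.randn(1, 32))   # non-negative, each row sums to 1
r = reward_with_loss_penalty(log_return=0.004, drawdown=0.02, penalty_coef=0.5)
```

In the paper the state would come from the GCN/TCN feature extractors and the penalty from the designed loss term; the sketch only shows why a softmax head and a drawdown-penalized reward fit the stated constraints.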
About the Journal:
IET Signal Processing publishes research on a diverse range of signal processing and machine learning topics, covering a variety of applications, disciplines, modalities, and techniques in detection, estimation, inference, and classification problems. The research published includes advances in algorithm design for the analysis of single- and multi-dimensional data, sparsity, linear and non-linear systems, recursive and non-recursive digital filters and multi-rate filter banks, as well as a range of topics that span from sensor array processing and deep convolutional neural network-based approaches to the application of chaos theory, and more.
Topics covered by the scope include, but are not limited to:
advances in single and multi-dimensional filter design and implementation
linear and nonlinear, fixed and adaptive digital filters and multirate filter banks
statistical signal processing techniques and analysis
classical, parametric and higher order spectral analysis
signal transformation and compression techniques, including time-frequency analysis
system modelling and adaptive identification techniques
machine learning based approaches to signal processing
Bayesian methods for signal processing, including Markov chain Monte Carlo and particle filtering techniques
theory and application of blind and semi-blind signal separation techniques
signal processing techniques for analysis, enhancement, coding, synthesis and recognition of speech signals
direction-finding and beamforming techniques for audio and electromagnetic signals
analysis techniques for biomedical signals
baseband signal processing techniques for transmission and reception of communication signals
signal processing techniques for data hiding and audio watermarking
sparse signal processing and compressive sensing
Special Issue Call for Papers:
Intelligent Deep Fuzzy Model for Signal Processing - https://digital-library.theiet.org/files/IET_SPR_CFP_IDFMSP.pdf