On the Predictability of the Equity Premium Using Deep Learning Techniques
Jonathan Iworiso, Spyridon D. Vrontos
Pub Date: 2020-12-18. DOI: 10.3905/jfds.2020.1.051

Deep learning is drawing keen attention in contemporary financial research. In this article, the authors investigate the statistical predictive power and economic significance of deep learning techniques applied to financial stock market data. In particular, they use the equity premium as the response variable and financial variables as predictors. The deep learning techniques used in this study provide useful evidence of statistical predictability and economic significance. Among the models considered, H2O deep learning (H2ODL) gives the smallest mean-squared forecast error (MSFE), with the corresponding highest cumulative return (CR) and Sharpe ratio (SR), in each of the out-of-sample periods; specifically, H2ODL with the Rectifier activation function outperformed the other models. In the fusion results, the SAE-with-H2O model using the Maxout activation function yields the smallest MSFE, with the corresponding highest CR and SR, in all of the out-of-sample periods. Notably, a higher CR coincides with a higher SR and a lower MSFE, as one would expect. Overall, the empirical analysis reveals that the SAE-with-H2O model using the Maxout activation function produces the best statistically predictive and economically significant results, with robustness across all out-of-sample periods.

TOPICS: Big data/machine learning, performance measurement, quantitative methods, simulations, statistical methods

Key Findings
▪ The authors use deep learning models to predict the equity premium, employing a plethora of well-known predictors.
▪ The deep learning models employed include deep neural networks, a stacked autoencoder, and long short-term memory models.
▪ The statistical and economic significance of the proposed models is examined and backtested in three out-of-sample periods.
Deviations from Covered Interest Rate Parity: The Case of British Pound Sterling versus Euro
F. Lehrbass, Thamara Sandra Schuster
Pub Date: 2020-12-17. DOI: 10.3905/jfds.2020.1.050

The authors find that the foreign exchange derivatives market for British pound sterling versus euro deviates from covered interest rate parity (CIP). The resulting arbitrage opportunities appear to be persistent and to vary systematically, and they are driven by more than Brexit-related politics: the authors relate the cross-currency basis to a range of factors and discover nonlinearities that require the application of deep learning methods. The findings are important for arbitrage desks: they show when arbitrage opportunities will become large for international trade, when to look for better alternatives than hedging with forwards, and when corporate treasuries should procure, in advance, currencies that are about to become scarce.

TOPICS: Big data/machine learning, currency, simulations

Key Findings
▪ We investigate deviations from covered interest rate parity for the British pound and the euro, including event-driven factors.
▪ Arbitrage opportunities appear to be persistent and to vary systematically. We make the driving factors explicit.
▪ The presence of nonlinearities requires methods from deep learning, and deep learning is shown to add value. Equipped with better forecasts, arbitrage desks can prepare for days with large arbitrage gains, and corporate treasuries can adjust in good time their procurement of currencies that are about to become scarce.
Building Cross-Sectional Systematic Strategies by Learning to Rank
Daniel Poh, Bryan Lim, S. Zohren, Stephen J. Roberts
Pub Date: 2020-12-12. DOI: 10.2139/ssrn.3751012

The success of a cross-sectional systematic strategy depends critically on accurately ranking assets before portfolio construction. Contemporary techniques perform this ranking step either with simple heuristics or by sorting outputs from standard regression or classification models, approaches that have been shown to be suboptimal for ranking in other domains (e.g., information retrieval). To address this deficiency, the authors propose a framework that enhances cross-sectional portfolios by incorporating learning-to-rank algorithms, which improve ranking accuracy by learning pairwise and listwise structures across instruments. Using cross-sectional momentum as a demonstrative case study, the authors show that modern machine learning ranking algorithms can substantially improve the trading performance of cross-sectional strategies, roughly tripling Sharpe ratios compared with traditional approaches.

TOPICS: Big data/machine learning, portfolio construction, performance measurement

Key Findings
▪ Contemporary approaches (e.g., simple heuristics) used to score and rank assets in portfolio construction are suboptimal because they do not learn the broader pairwise and listwise relationships across instruments.
▪ Learning-to-rank algorithms address this shortcoming by learning the broader links across assets, which consequently allows assets to be ranked more accurately.
▪ Using cross-sectional momentum as a demonstrative use case, we show that more precise rankings produce long/short portfolios that significantly outperform traditional approaches across various financial and ranking-based measures.
Interpretable Machine Learning for Diversified Portfolio Construction
Markus Jaeger, Stephan Krügel, D. Marinelli, Jochen Papenbrock, Peter Schwendner
Pub Date: 2020-11-13. DOI: 10.2139/ssrn.3730144

In this article, the authors construct a pipeline to benchmark hierarchical risk parity (HRP) against equal risk contribution (ERC) as examples of diversification strategies allocating to liquid multi-asset futures markets with dynamic leverage (a volatility target). The authors use interpretable machine learning concepts (explainable AI) to compare the robustness of the strategies and to back out implicit rules for decision-making. The empirical dataset consists of 17 equity index, government bond, and commodity futures markets across 20 years. The two strategies are backtested on the empirical dataset and on about 100,000 bootstrapped datasets. XGBoost is used to regress the Calmar ratio spread between the two strategies against features of the bootstrapped datasets. Compared with ERC, HRP shows higher Calmar ratios and better matches the volatility target. Using Shapley values, the Calmar ratio spread can be attributed chiefly to univariate drawdown measures of the asset classes.

TOPICS: Quantitative methods, statistical methods, big data/machine learning, portfolio construction, performance measurement

Key Findings
▪ The authors introduce a procedure to benchmark rule-based investment strategies and to explain the differences in path-dependent risk-adjusted performance measures using interpretable machine learning.
▪ They apply the procedure to the Calmar ratio spread between hierarchical risk parity (HRP) and equal risk contribution (ERC) allocations of a multi-asset futures portfolio and find that HRP has superior risk-adjusted performance.
▪ The authors regress the Calmar ratio spread against statistical features of bootstrapped futures return datasets using XGBoost and apply the SHAP framework of Lundberg and Lee (2017) to discuss local and global feature importance.
Managing Editor's Letter
F. Fabozzi
Pub Date: 2020-10-31. DOI: 10.3905/jfds.2020.2.4.001
Most portfolio optimization techniques require, in one way or another, forecasting the returns of the assets in the selection universe. In the lead article for this issue, "Deep Learning for Portfolio Optimization," Zihao Zhang, Stefan Zohren, and Stephen Roberts adopt deep learning models to directly optimize a portfolio's Sharpe ratio. Their framework circumvents the requirements for forecasting expected returns and allows the model to directly optimize portfolio weights through gradient ascent. Instead of using individual assets, the authors focus on exchange-traded funds of market indexes due to their robust correlations, as well as to reduce the scope of possible assets from which to choose. In a testing period from 2011 to April 2020, the proposed method delivers the best performance in terms of Sharpe ratio. A detailed analysis of the results during the recent COVID-19 crisis shows the rationality and practicality of their model. The authors also include a sensitivity analysis to understand how input features contribute to performance.

Predicting business cycles and recessions is of great importance to asset managers, businesses, and macroeconomists alike, helping them foresee financial distress and seek alternative investment strategies. Traditional modeling approaches proposed in the literature have estimated the probability of recessions using probit models, which fail to account for nonlinearity and interactions among predictors. More recently, machine learning classification algorithms have been applied to expand the number of predictors used to model the probability of recession, as well as to incorporate interactions between the predictors. Although machine learning methods have been able to improve upon the forecasts of traditional linear models, one crucial aspect has been missing from the literature: the frequency at which recessions occur. Alireza Yazdani, in "Machine Learning Prediction of Recessions: An Imbalanced Classification Approach," argues that due to the low frequency of historical recessions, this problem is better dealt with using an imbalanced classification approach. To compensate for the class imbalance, Yazdani uses down-sampling to create a roughly equal distribution of the non-recession and recession observations. Comparing the performance of the baseline probit model with various machine learning classification models, he finds that ensemble methods exhibit superior predictive power both in-sample and out-of-sample. He argues that nonlinear machine learning models help to better identify various types of relationships in constantly changing financial data and enable the deployment of flexible data-driven predictive modeling strategies.

Most portfolio construction techniques rely on estimating sample covariance and correlations as the primary inputs. However, these
{"title":"Managing Editor’s Letter","authors":"F. Fabozzi","doi":"10.3905/jfds.2020.2.4.001","DOIUrl":"https://doi.org/10.3905/jfds.2020.2.4.001","url":null,"abstract":"david rowe Reprints Manager and Advertising Director Most portfolio optimization techniques require, in one way or another, forecasting the returns of the assets in the selection universe. In the lead article for this issue, “Deep Learning for Portfolio Optimization,” Zihao Zhang, Stefan Zohren, and Stephen Roberts adopt deep learning models to directly optimize a portfolio’s Sharpe ratio. Their framework circumvents the requirements for forecasting expected returns and allows the model to directly optimize portfolio weights through gradient ascent. Instead of using individual assets, the authors focus on exchange-traded funds of market indices due to their robust correlations, as well as reducing the scope of possible assets from which to choose. In a testing period from 2011 to April 2020, the proposed method delivers the best performance in terms of Sharpe ratio. A detailed analysis of the results during the recent COVID-19 crisis shows the rationality and practicality of their model. The authors also include a sensitivity analysis to understand how input features contribute to performance. Predicting business cycles and recessions is of great importance to asset managers, businesses, and macroeconomists alike, helping them foresee financial distress and to seek alternative investment strategies. Traditional modeling approaches proposed in the literature have estimated the probability of recessions by using probit models, which fail to account for non-linearity and interactions among predictors. More recently, machine learning classification algorithms have been applied to expand the number of predictors used to model the probability of recession, as well as incorporating interactions between the predictors. Although machine learning methods have been able to improve upon the forecasts of traditional linear models, the one crucial aspect that has been missing from the literature is the frequency at which recessions occur. Alireza Yazdani in “Machine Learning Prediction of Recessions: An Imbalanced Classification Approach,” argues that due to the low frequency of historical recessions, this problem is better dealt with by using an imbalanced classification approach. To compensate for the class imbalances, Yazdani uses down-sampling to create a roughly equal distribution of the non-recession and recession observations. Comparing the performance of the baseline probit model with various machine learning classification models, he finds that ensemble methods exhibit superior predictive power both in-sample and out-of-sample. He argues that nonlinear machine learning models help to both better identify various types of relationships in constantly changing financial data and enable the deployment of f lexible data-driven predictive modeling strategies. Most portfolio construction techniques rely on estimating sample covariance and correlations as the primary inputs. However, these b y gu es t o n Ju ne 1 4, 2 02 1. 
C op yr ig ht 2 02 0 Pa ge an t M e","PeriodicalId":199045,"journal":{"name":"The Journal of Financial Data Science","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129896498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Volatility Prediction and Risk Management: An SVR-GARCH Approach
Abdullah Karasan, E. Gaygısız
Pub Date: 2020-09-11. DOI: 10.3905/JFDS.2020.1.046

This study aims first at improving volatility prediction using a machine learning model called support vector regression GARCH (SVR-GARCH), applied to 30 selected stocks listed on the S&P 500. The authors compare the prediction results of the SVR-GARCH model with those of the GARCH family models and find that SVR-GARCH outperforms them on the performance metrics. The second goal of this study is to calculate value-at-risk (VaR) using the volatility predictions obtained in the first part, with backtesting applied to check the accuracy of the VaR results. The findings suggest that using predictions from the SVR-GARCH model improves VaR calculations and hence provides better financial risk management.

TOPICS: Big data/machine learning, risk management, simulations, statistical methods, VAR and use of alternative risk measures of trading risk, volatility measures

Key Findings
• Machine learning–based implementations in finance can lead to improved performance.
• Volatility prediction based on the SVR-GARCH model outperforms traditional volatility prediction models, making more accurate financial models possible.
• Using these volatility predictions in the value-at-risk model yields far better results, implying that a better-performing volatility model leads to better financial risk management.
Deep Reinforcement Learning for Option Replication and Hedging
Jiayi Du, M. Jin, Petter N. Kolm, G. Ritter, Yixuan Wang, Bofei Zhang
Pub Date: 2020-09-09. DOI: 10.3905/JFDS.2020.1.045

The authors propose models for the solution of the fundamental problem of option replication subject to discrete trading, round lotting, and nonlinear transaction costs using state-of-the-art methods in deep reinforcement learning (DRL), including deep Q-learning, deep Q-learning with Pop-Art, and proximal policy optimization (PPO). Each DRL model is trained to hedge a whole range of strikes, and no retraining is needed when the user changes to another strike within the range. The models are general, allowing the user to plug in any option pricing and simulation library and then train them, with no further modifications, to hedge arbitrary option portfolios. Through a series of simulations, the authors show that the DRL models learn strategies similar to or better than delta hedging. Of all the models, PPO performs best in terms of profit and loss, training time, and amount of data needed for training.

TOPICS: Big data/machine learning, options, risk management, simulations

Key Findings
• The authors propose models for the replication of options over a whole range of strikes subject to discrete trading, round lotting, and nonlinear transaction costs, based on state-of-the-art methods in deep reinforcement learning, including deep Q-learning and proximal policy optimization.
• The models allow the user to plug in any option pricing and simulation library and then train them, with no further modifications, to hedge arbitrary option portfolios.
• A series of simulations demonstrates that the deep reinforcement learning models learn strategies similar to or better than delta hedging.
• Proximal policy optimization outperforms the other models in terms of profit and loss, training time, and amount of data needed for training.
Deep Learning for Portfolio Optimization
Zihao Zhang, S. Zohren, Stephen J. Roberts
Pub Date: 2020-08-26. DOI: 10.3905/JFDS.2020.1.042

In this article, the authors adopt deep learning models to directly optimize the portfolio Sharpe ratio. The framework they present circumvents the requirements for forecasting expected returns and allows them to directly optimize portfolio weights by updating model parameters. Instead of selecting individual assets, they trade exchange-traded funds of market indexes to form a portfolio. Indexes of different asset classes show robust correlations, and trading them substantially reduces the spectrum of available assets from which to choose. The authors compare their method with a wide range of algorithms, and the results show that their model obtains the best performance over the testing period from 2011 to the end of April 2020, including the financial instabilities of the first quarter of 2020. A sensitivity analysis is included to clarify the relevance of input features, and the authors further study the performance of their approach under different cost rates and different risk levels via volatility scaling.

TOPICS: Exchange-traded funds and applications, mutual fund performance, portfolio construction

Key Findings
• The authors utilize deep learning models to directly optimize the portfolio Sharpe ratio. Their framework bypasses traditional forecasting steps and allows portfolio weights to be optimized by updating model parameters.
• The authors trade exchange-traded funds of market indexes to form a portfolio. Doing so substantially reduces the scope of possible assets to choose from, and these indexes have shown robust correlations.
• The authors backtest their methods from 2011 to the end of April 2020, including the financial instabilities due to COVID-19. Their model delivers good performance under transaction costs, and a detailed study shows the rationality of their approach during the crisis.
Managing Editor's Letter
Francesco A. Fabozzi
Pub Date: 2020-07-31. DOI: 10.3905/jfds.2020.2.3.001
{"title":"Managing Editor’s Letter","authors":"Francesco A. Fabozzi","doi":"10.3905/jfds.2020.2.3.001","DOIUrl":"https://doi.org/10.3905/jfds.2020.2.3.001","url":null,"abstract":"","PeriodicalId":199045,"journal":{"name":"The Journal of Financial Data Science","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116612974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investment Sizing with Deep Learning Prediction Uncertainties for High-Frequency Eurodollar Futures Trading
Trent Spears, S. Zohren, Stephen J. Roberts
Pub Date: 2020-07-30. DOI: 10.2139/ssrn.3664497

In this article, the authors show that prediction uncertainty estimates gleaned from deep learning models can be useful inputs for influencing the relative allocation of risk capital across trades. Consideration of uncertainty is important because it permits the scaling of investment size across trade opportunities in a principled and data-driven way. The authors showcase this insight with a prediction model and, based on a Sharpe ratio metric, find clear outperformance relative to trading strategies that either do not take uncertainty into account or use an alternative market-based statistic as a proxy for uncertainty. A further novelty is their modeling of high-frequency data at the top level of the Eurodollar futures limit order book for each trading day of 2018, whereby they predict interest rate curve changes on small time horizons. The authors are motivated to study the market for these popularly traded interest rate derivatives because it is deep and liquid and contributes to the efficient functioning of global finance, yet relatively little modeling of it appears in the academic literature. Hence, they verify the utility of prediction models and uncertainty estimates for trading applications in this complex and multidimensional asset price space.

TOPICS: Big data/machine learning, derivatives, simulations, statistical methods

Key Findings
▪ The authors model high-frequency Eurodollar futures limit order book data using state-of-the-art deep learning to predict interest rate curve changes on small time horizons.
▪ They further augment their models to yield estimates of prediction uncertainty.
▪ In certain settings, the uncertainty estimates can be used in conjunction with return predictions to scale bankroll allocation between trades. This can lead to clear trading outperformance relative to the case in which uncertainty is not taken into account.