Overfitting in portfolio optimization

IF 0.4 | Zone 4, Economics | Q4, BUSINESS, FINANCE | Journal of Risk Model Validation | Pub Date: 2023-01-01 | DOI: 10.21314/jrmv.2023.005
Matteo Maggiolo, Oleg Szehr
{"title":"投资组合优化中的过拟合","authors":"Matteo Maggiolo, Oleg Szehr","doi":"10.21314/jrmv.2023.005","DOIUrl":null,"url":null,"abstract":"In this paper we measure the out-of-sample performance of sample-based rolling-window neural network (NN) portfolio optimization strategies. We show that if NN strategies are evaluated using the holdout (train–test split) technique, then high out-of-sample performance scores can commonly be achieved. Although this phenomenon is often employed to validate NN portfolio models, we demonstrate that it constitutes a “fata morgana” that arises due to a particular vulnerability of portfolio optimization to overfitting. To assess whether overfitting is present, we set up a dedicated methodology based on combinatorially symmetric cross-validation that involves performance measurement across different holdout periods and varying portfolio compositions (the random-asset-stabilized combinatorially symmetric cross-validation methodology). We compare a variety of NN strategies with classical extensions of the mean–variance model and the 1 / N strategy. We find that it is by no means trivial to outperform the classical models. While certain NN strategies outperform the 1 / N benchmark, of the almost 30 models that we evaluate explicitly, none is consistently better than the short-sale constrained minimum-variance rule in terms of the Sharpe ratio or the certainty equivalent of returns.","PeriodicalId":43447,"journal":{"name":"Journal of Risk Model Validation","volume":"48 1","pages":"0"},"PeriodicalIF":0.4000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Overfitting in portfolio optimization\",\"authors\":\"Matteo Maggiolo, Oleg Szehr\",\"doi\":\"10.21314/jrmv.2023.005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we measure the out-of-sample performance of sample-based rolling-window neural network (NN) portfolio optimization strategies. We show that if NN strategies are evaluated using the holdout (train–test split) technique, then high out-of-sample performance scores can commonly be achieved. Although this phenomenon is often employed to validate NN portfolio models, we demonstrate that it constitutes a “fata morgana” that arises due to a particular vulnerability of portfolio optimization to overfitting. To assess whether overfitting is present, we set up a dedicated methodology based on combinatorially symmetric cross-validation that involves performance measurement across different holdout periods and varying portfolio compositions (the random-asset-stabilized combinatorially symmetric cross-validation methodology). We compare a variety of NN strategies with classical extensions of the mean–variance model and the 1 / N strategy. We find that it is by no means trivial to outperform the classical models. 
While certain NN strategies outperform the 1 / N benchmark, of the almost 30 models that we evaluate explicitly, none is consistently better than the short-sale constrained minimum-variance rule in terms of the Sharpe ratio or the certainty equivalent of returns.\",\"PeriodicalId\":43447,\"journal\":{\"name\":\"Journal of Risk Model Validation\",\"volume\":\"48 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.4000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Risk Model Validation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.21314/jrmv.2023.005\",\"RegionNum\":4,\"RegionCategory\":\"经济学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"BUSINESS, FINANCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Risk Model Validation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21314/jrmv.2023.005","RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"BUSINESS, FINANCE","Score":null,"Total":0}
Citations: 0

Abstract

In this paper we measure the out-of-sample performance of sample-based rolling-window neural network (NN) portfolio optimization strategies. We show that if NN strategies are evaluated using the holdout (train–test split) technique, then high out-of-sample performance scores can commonly be achieved. Although this phenomenon is often employed to validate NN portfolio models, we demonstrate that it constitutes a “fata morgana” that arises due to a particular vulnerability of portfolio optimization to overfitting. To assess whether overfitting is present, we set up a dedicated methodology based on combinatorially symmetric cross-validation that involves performance measurement across different holdout periods and varying portfolio compositions (the random-asset-stabilized combinatorially symmetric cross-validation methodology). We compare a variety of NN strategies with classical extensions of the mean–variance model and the 1 / N strategy. We find that it is by no means trivial to outperform the classical models. While certain NN strategies outperform the 1 / N benchmark, of the almost 30 models that we evaluate explicitly, none is consistently better than the short-sale constrained minimum-variance rule in terms of the Sharpe ratio or the certainty equivalent of returns.
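To make the evaluation idea concrete, the sketch below (not the authors' code) runs a simplified combinatorially-symmetric-cross-validation-style comparison on synthetic data: the return history is cut into blocks, every balanced combination of blocks serves once as the estimation window with its complement as the holdout, and the asset subset is re-drawn at random to mimic the random-asset stabilization described in the abstract. The two classical benchmarks named in the abstract, the 1/N rule and the short-sale constrained minimum-variance rule, are scored out of sample by Sharpe ratio and certainty equivalent. Function names such as min_variance_weights, all numerical settings, and the NumPy/SciPy stack are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

rng = np.random.default_rng(0)


def min_variance_weights(train_returns):
    """Short-sale-constrained minimum-variance weights (long-only, summing to 1)."""
    n = train_returns.shape[1]
    cov = np.cov(train_returns, rowvar=False)
    result = minimize(
        lambda w: w @ cov @ w,                     # portfolio variance
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,                   # no short sales
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return result.x


def sharpe_ratio(portfolio_returns):
    """Annualized Sharpe ratio of a daily return series (risk-free rate taken as zero)."""
    return np.sqrt(252) * portfolio_returns.mean() / portfolio_returns.std(ddof=1)


def certainty_equivalent(portfolio_returns, gamma=1.0):
    """Mean-variance certainty equivalent of returns with risk aversion gamma."""
    return portfolio_returns.mean() - 0.5 * gamma * portfolio_returns.var(ddof=1)


# Synthetic daily returns for a hypothetical 20-asset universe (stand-in for real data).
T, N_UNIVERSE, N_SUBSET, N_BLOCKS = 1000, 20, 10, 8
universe = 0.0003 + 0.01 * rng.standard_normal((T, N_UNIVERSE))

scores = {"1/N": [], "min-variance": []}
for _ in range(5):  # re-draw the asset subset several times ("random-asset stabilization")
    assets = rng.choice(N_UNIVERSE, size=N_SUBSET, replace=False)
    blocks = np.array_split(np.arange(T), N_BLOCKS)
    # Combinatorially symmetric splits: every balanced choice of blocks is used once
    # as the estimation sample, and its complement serves as the holdout sample.
    for train_ids in combinations(range(N_BLOCKS), N_BLOCKS // 2):
        test_ids = [b for b in range(N_BLOCKS) if b not in train_ids]
        train = universe[np.concatenate([blocks[b] for b in train_ids])][:, assets]
        test = universe[np.concatenate([blocks[b] for b in test_ids])][:, assets]
        w_mv = min_variance_weights(train)
        w_eq = np.full(N_SUBSET, 1.0 / N_SUBSET)   # the 1/N rule needs no estimation
        scores["min-variance"].append((sharpe_ratio(test @ w_mv), certainty_equivalent(test @ w_mv)))
        scores["1/N"].append((sharpe_ratio(test @ w_eq), certainty_equivalent(test @ w_eq)))

for name, pairs in scores.items():
    sr, ce = np.mean(pairs, axis=0)
    print(f"{name:12s} mean out-of-sample Sharpe = {sr:6.2f}, certainty equivalent = {ce:.6f}")
```

Averaging performance over many holdout configurations and asset draws, rather than reporting a single train-test split, is the design choice that guards against the "fata morgana" effect the abstract describes.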
Source journal: Journal of Risk Model Validation
CiteScore: 1.20
Self-citation rate: 28.60%
Articles published: 8
Journal introduction: As monetary institutions rely greatly on economic and financial models for a wide array of applications, model validation has become progressively inventive within the field of risk. The Journal of Risk Model Validation focuses on the implementation and validation of risk models, and aims to provide a greater understanding of key issues including the empirical evaluation of existing models, pitfalls in model validation and the development of new methods. We also publish papers on back-testing. Our main field of application is in credit risk modelling but we are happy to consider any issues of risk model validation for any financial asset class. The Journal of Risk Model Validation considers submissions in the form of research papers on topics including, but not limited to:
- Empirical model evaluation studies
- Backtesting studies
- Stress-testing studies
- New methods of model validation/backtesting/stress-testing
- Best practices in model development, deployment, production and maintenance
- Pitfalls in model validation techniques (all types of risk, forecasting, pricing and rating)
Latest articles in this journal
- Value-at-risk and the global financial crisis
- Does the asymmetric exponential power distribution improve systemic risk measurement?
- A modified hybrid feature-selection method based on a filter and wrapper approach for credit risk forecasting
- What can we expect from a good margin model? Observations from whole-distribution tests of risk-based initial margin models
- Internet financial risk assessment in China based on a particle swarm optimization–analytic hierarchy process and fuzzy comprehensive evaluation