"Non-Gaussian GARCH Option Pricing Models and Their Diffusion Limits", by A. Badescu, R. Elliott, J. Ortega (2013-10-31). DOI: 10.2139/ssrn.2348407
This paper investigates the weak convergence of general non-Gaussian GARCH models, with an application to the pricing of European-style options using two pricing-kernel candidates: an extended Girsanov principle and a conditional Esscher transform. Applying these changes of measure to asymmetric GARCH models sampled at increasing frequencies, we obtain two risk-neutral families of processes that converge to different bivariate diffusions, neither of which is a standard Hull–White stochastic volatility model. Regardless of the innovations used, the GARCH-implied diffusion limit based on the Esscher transform can also be obtained by applying the minimal martingale measure under the physical measure. We further show that, for skewed GARCH driving noise, the risk-neutral diffusion limit of the extended Girsanov principle exhibits a non-zero market price of volatility risk that is proportional to the market price of equity risk, where the constant of proportionality depends on the skewness and kurtosis of the underlying distribution. Our theoretical results are supported by numerical simulations and a calibration exercise to observed market quotes.
"Do Unobserved Components Models Forecast Inflation in Russia?", by Bulat Gafarov (2013-09-30). DOI: 10.2139/ssrn.2333459
I apply the unobserved components stochastic volatility (UC-SV) model to forecast the Russian consumer price index. I extend the model, previously proposed for inflation forecasting in the USA, to take into account possible differences in the model parameters and a seasonal factor. A comparison of the out-of-sample forecasting performance of a linear AR model and the UC-SV model by mean squared prediction error shows better results for the latter. The relatively small standard error of the forecasts produced by the UC-SV model makes it a reasonable candidate for a real-time forecasting method for the Russian CPI.
"A Simple Panel Unit-Root Test with Smooth Breaks in the Presence of a Multifactor Error Structure", by Chingnun Lee, Jyh-Lin Wu, Lixong Yang (2013-09-18). DOI: 10.2139/ssrn.2327962
This paper proposes a new, simple panel unit-root test that extends the cross-sectionally augmented panel unit-root test (CIPS) of Pesaran et al. (2013) to allow for smooth structural changes in the deterministic terms, approximated by a Fourier series. The proposed statistic is the simple average of the individual statistics from the breaks- and cross-sectional-dependence-augmented Dickey-Fuller (BCADF) regressions and is called the BCIPS statistic. We first develop the tests assuming that the number of factors in the model is known, and show that the limiting distribution of the BCADF statistic is free of nuisance parameters. The nonstandard limiting distribution of the (truncated) BCIPS statistic is also shown to exist, and its critical values are tabulated. Monte Carlo experiments indicate that the size and power of the BCIPS statistic are generally satisfactory as long as T is at least fifty and one hundred, respectively. Using two different methods to determine the number of factors, we apply both the BCIPS and CIPS tests to examine the validity of long-run purchasing power parity. The proposed test complements panel unit-root tests that model breaks with dummy variables.
"Monetary Policy and Economic Imbalances: An Ethnographic Examination of Central Bank Rituals", by A. Grimes (2013-09-01). DOI: 10.1111/joes.12024
We apply the ethnographic tools of economic anthropology to analyse a particular ritual performed by the high priest of the Arbee sub-tribe in the South Pacific island group of Aotearoa. (In other island groups, this high priest is sometimes known as the Governor of the Reserve Bank of New Zealand.) The ritual is considered by many within Aotearoa to be the cause of The Imbalance in The Economy. We analyse this claim and show that it has similarities (and equal validity) to claims of other cargo cults within the South-West Pacific region.
"Recent Developments in Quantitative Models of Sovereign Default", by Nikolai Stahler (2013-09-01). DOI: 10.1111/j.1467-6419.2012.00741.x
The current crisis, and the discussions in the euro area in particular, show that sovereign debt crises and defaults are no longer confined to developing economies. Following crises in many Latin American countries, the literature on quantitative dynamic macro models of sovereign default has been advancing rapidly; this paper provides an extensive overview of its findings, of which the current debate should take note. The paper also discusses the inherent difficulties, as well as the possibilities, of integrating this type of model into standard business cycle models (RBC and DSGE models). This is likely to be particularly helpful when using models to analyse upcoming issues in the euro area, such as a suitable sovereign insolvency law or the assumption of joint liability.
"Regularization of Portfolio Allocation", by Benjamin Bruder, Nicolas Gaussel, J. Richard, T. Roncalli (2013-06-01). DOI: 10.2139/ssrn.2767358
The mean-variance optimization (MVO) theory of Markowitz (1952) for portfolio selection is one of the most important methods used in quantitative finance. The allocation requires two input parameters, the vector of expected returns and the covariance matrix of asset returns, and estimating them introduces errors that may have a large impact on the portfolio weights. In this paper we review different methods that aim to stabilize the mean-variance allocation. In particular, we consider recent results from machine learning theory to obtain more robust allocations.
"Efficiency of NonLife Insurance Business in Kenya: Stochastic Frontier Approach", by Victor N. Mose (2013-05-09). DOI: 10.2139/ssrn.2262754
The study investigates the efficiency of the nonlife insurance business in Kenya over the period 2006-2012 using a stochastic frontier approach and establishes an industry efficiency of 72% over the period. The efficiency scores per year were: 2006 (67%), 2007 (69%), 2008 (61%), 2009 (73%), 2010 (99%), 2011 (73%) and 2012 (69%). Small and medium scale firms are catching up with large scale firms in terms of efficiency. Branch network and county coverage positively influence efficiency. Firms aged 21-30 years are the most efficient cohort. There is no significant difference in efficiency between specialized nonlife and composite firms, between domestic and foreign owned firms, or between firms with regional and national underwriting orientation. The elasticity of the sum of net earned premiums and investment income with respect to commissions expenditure, management expenses and capital ranged over 0.2-0.6, 0.2-0.5 and 0.1-0.3, respectively, over the period. Deviations in profitability and claims paid across firms are wide, translating into industry potential for cross-firm reinsurance. Profitability is positively correlated with claims paid across firms, indicating sustained industry capacity to settle claims. To increase efficiency and competitiveness, firms should continue expanding their branch network and county coverage across Kenya, increase capitalization, and innovate on the investment portfolio. Deviations in profitability and underwriting risk across firms point to the merging of firms, the formation of strong cross-firm reinsurance ties, increasing insurance penetration and investment returns, and enhancing client risk profiling and customer education on risk prevention as critical strategies for increasing industry stability and sharing claims settlement responsibilities.
"Unobservable Selection and Coefficient Stability: Theory and Validation", by E. Oster (2013-05-01). DOI: 10.3386/W19054
A common heuristic for evaluating the robustness of results to omitted variable bias is to look at coefficient movements after the inclusion of controls. This heuristic is informative only if selection on observables is proportional to selection on unobservables. I formalize this link, drawing on theory in Altonji, Elder and Taber (2005), and show how, under this assumption, coefficient movements, along with movements in R-squared values, can be used to calculate omitted variable bias. I discuss empirical implementation and describe a formal bounding argument to replace the coefficient-movement heuristic. I present two validation exercises suggesting that this bounding argument would perform well empirically. I discuss the application of this procedure to a large set of publications in economics, and use evidence from randomized studies to draw guidelines as to appropriate bounding values.
"Economic Models as Analogies, Third Version", by I. Gilboa, Andrew Postlewaite, L. Samuelson, D. Schmeidler (2013-01-27). DOI: 10.2139/ssrn.2209153
People often wonder why economists analyze models whose assumptions are known to be false, while economists feel that they learn a great deal from such exercises. We suggest that part of the knowledge generated by academic economists is case-based rather than rule-based. That is, instead of offering general rules or theories that should be contrasted with data, economists often analyze models that are "theoretical cases", which help us understand economic problems by drawing analogies between the model and the problem. According to this view, economic models, empirical data, experimental results and other sources of knowledge are all on equal footing; that is, they all provide cases to which a given problem can be compared. We offer complexity arguments that explain why case-based reasoning may sometimes be the method of choice and why economists prefer simple cases.
"Value-at-Risk Forecasting Ability of Filtered Historical Simulation for Non-Normal GARCH Returns", by C. Adcock, Nelson Areal, B. Oliveira (2012-08-21). DOI: 10.2139/ssrn.2133238
As a hybrid methodology for estimating VaR that combines parametric modelling with bootstrapping techniques, filtered historical simulation (FHS) should not be sensitive to the distribution assumed in the filtering stage. However, recent studies (Kuester et al. 2006) have found that the distribution used in the filtering stage can influence the VaR estimates obtained with this methodology. Using Extreme Value Theory (EVT), this paper explains why the VaR estimates for lower probabilities should not be sensitive to the distribution assumed in the filtering stage of the FHS method. For higher probabilities, however, the EVT results do not hold, and the use of alternative distributions might therefore impact the VaR estimates. These theoretical results are tested using both simulated and real data. Three different realistic data generating processes were considered to generate several series of simulated returns, and three competing models, differing in their innovation assumption, were tested: a normal-GARCH, a t-GARCH and a skew-t-GARCH. Our backtesting results indicate that FHS can forecast VaR accurately for data which exhibit a high incidence of zeros, time-varying skewness, asymmetric effects of return shocks on volatility, and other stylized facts. Importantly, our results for the simulated data demonstrate that, for lower probabilities, the choice of the distribution assumed in the filtering stage has no impact on the performance of FHS as an accurate method for forecasting VaR. Additionally, 40 years of daily data on six well-known active stock indices are used to evaluate the FHS VaR estimates empirically. Four competing GARCH-type specifications, combined with three different innovation assumptions (normal, Student t and skew-Student t), are used to capture the time-series dynamics. Based on a sample of several VaR probabilities, the results of the dynamic quantile (DQ) tests clearly indicate that the use of asymmetric GARCH models (specifically GJR and GJR-in-Mean) generally improves the VaR forecasting performance of FHS. In addition, the choice of a skew-Student t distribution for the innovation process slightly improves the performance of the GJR-in-Mean model. When different VaR probabilities are used, the choice of an appropriate model specification seems to be more important than the choice of a suitable distribution assumption. For the lowest VaR probability tested (1%), the results show that, as expected, the VaR estimate is very similar regardless of the GARCH model and distribution assumed.