Testing for Weak Identification in Possibly Nonlinear Models
B. Rossi, A. Inoue
Pub Date: 2010-12-24. DOI: 10.2139/ssrn.1747169
In this paper we propose a chi-square test for identification. Our proposed test statistic is based on the distance between two shrinkage extremum estimators. The two estimators converge in probability to the same limit when identification is strong, and their asymptotic distributions are different when identification is weak. The proposed test is consistent not only for the alternative hypothesis of no identification but also for the alternative of weak identification, which is confirmed by our Monte Carlo results. We apply the proposed technique to test whether the structural parameters of a representative Taylor-rule monetary policy reaction function are identified.
Superefficient Estimation of the Marginals by Exploiting Knowledge on the Copula
J. Einmahl, R. V. D. Akker
Pub Date: 2010-11-09. DOI: 10.2139/ssrn.1717842
We consider the problem of estimating the marginals when there is knowledge of the copula. If the copula is smooth, it is known that it is possible to improve on the empirical distribution functions: optimal estimators still have the rate of convergence n^(-1/2), but a smaller asymptotic variance. In this paper we show that for non-smooth copulas it is sometimes possible to construct superefficient estimators of the marginals: we construct both a copula and, exploiting the information this copula provides, estimators of the marginals with rate of convergence (log n)/n.
Efficient and Robust Estimation for Financial Returns: An Approach Based on q-Entropy
Davide Ferrari, S. Paterlini
Pub Date: 2010-11-02. DOI: 10.2139/ssrn.1906819
We consider a new robust parametric estimation procedure, which minimizes an empirical version of the Havrda-Charvat-Tsallis entropy. The resulting estimator adapts to the discrepancy between the data and the assumed model by tuning a single constant q, which controls the trade-off between robustness and efficiency. The method is applied to expected return and volatility estimation of financial asset returns under multivariate normality. Theoretical properties, ease of implementation and empirical results on simulated and financial data make it a valid alternative to classic robust estimators and semi-parametric minimum divergence methods based on kernel smoothing.
Impact of Trends on Volatility in Equity Markets
E. Golosov
Pub Date: 2010-10-10. DOI: 10.2139/ssrn.1690305
The paper explores the impact of trends on volatility in equity markets, with trends defined as uninterrupted runs of positive or negative returns. The impact of trends is first shown to be statistically significant using regression analysis to predict the squared normalised residuals of both (i) "raw" returns and (ii) two widely used "asymmetric" volatility models, GJR-GARCH and EGARCH. An extension of the asymmetric GARCH models is then proposed, with additional explanatory variables included in the conditional variance equation to account for the presence of trends. The resulting model, tested on 40 years of daily returns on the S&P 500 index, has higher explanatory power as measured by a number of statistical criteria, including AIC, BIC and log-likelihood.
Information Content of DQAF Indicators - Empirical Entropy Analysis
Mićo Mrkaić
Pub Date: 2010-09-01. DOI: 10.5089/9781455205356.001
The study presents an analysis of the information content of the IMF's Data Quality Assessment Framework (DQAF) indicators. There are significant differences in the quantity of information between DQAF dimensions and sub-dimensions. The most informative DQAF dimension is accessibility, followed by the prerequisites of quality and accuracy and reliability. The least informative DQAF dimensions are serviceability and assurances of integrity. The implication of these findings is that the current DQAF indicators do not maximize the amount of information that could be obtained during data ROSC missions. An additional set of assessments refining the existing DQAF indicators would help maximize the information gathered during data ROSC missions. The entropy of DQAF indicators could also be used in the construction of a cardinal index of data quality.
Minimax Regression Quantiles
S. Bache
Pub Date: 2010-08-01. DOI: 10.2139/ssrn.1805497
A new and alternative quantile regression estimator is developed, and it is shown that the estimator is root-n-consistent and asymptotically normal. The estimator is based on a minimax 'deviance function' and has asymptotically equivalent properties to the usual quantile regression estimator. It is, however, a different and therefore new estimator. It allows for both linear and nonlinear model specifications. A simple algorithm for computing the estimates is proposed. It seems to work well in practice, but whether it has theoretical justification is still an open question.
Review of Dynamic Allocation Strategies: Utility Maximization, Option Replication, Insurance, Drawdown Control, Convex/Concave Management
A. Meucci
Pub Date: 2010-07-07. DOI: 10.2139/ssrn.1635982
We review the main approaches to dynamically reallocate capital between a risky portfolio and a risk-free account: expected utility maximization; option-based portfolio insurance (OBPI); and drawdown control, closely related to constant proportion portfolio insurance (CPPI). We present a refresher of the theory under general assumptions. We discuss the connections among the different approaches, as well as their relationship with convex and concave strategies. We provide explicit, practicable solutions with all the computations as well as numerical examples. Fully documented code for all the strategies is also provided.
Fast Greeks for Markov-Functional Models Using Adjoint PDE Methods
Nick Denson, M. Joshi
Pub Date: 2010-05-30. DOI: 10.2139/ssrn.1618026
This paper demonstrates how the adjoint PDE method can be used to compute Greeks in Markov-functional models. This is an accurate and efficient way to compute Greeks, where most of the model sensitivities can be computed in approximately the same time as a single finite-difference sensitivity. We demonstrate the speed and accuracy of the method using a Markov-functional interest rate model, also demonstrating how the model Greeks can be converted into market Greeks.
Modeling Copper Prices
Souha Boutouria, Fathi Abid
Pub Date: 2010-05-04. DOI: 10.2139/ssrn.1831309
The purpose of this paper is to examine the empirical behavior of copper spot prices on the London Metal Exchange. Based on the particularities of copper, various continuous-time processes are used. We simulate one-, two- and three-factor stochastic processes using the Monte Carlo simulation technique, both within and outside the sample used for model fitting. The simulations show that a class of stochastic volatility models has a great capacity to forecast current copper prices.
On a Class of Semi-Elliptic Diffusion Models - Part I: A Constructive Analytical Approach for Global Solutions, Densities and Numerical Schemes with Applications to the LIBOR Market Model
Christian P. Fries, J. Kampen
Pub Date: 2010-03-17. DOI: 10.2139/ssrn.1582414
Semi-elliptic stochastic differential equations (SDEs) are common models among practitioners. However, the value functions and sensitivities of such models are described by degenerate parabolic partial differential equations (PDEs) where the existence of regular global solutions is not trivial, and where densities do not exist in spaces of measurable functions but only in a distributional sense in general. In this paper, we show that for a related class of such equations regular global solutions can be constructed. Moreover, the solution scheme has a probabilistic interpretation where the existence of regular densities on certain subspaces of the state space can be exploited. Prominent examples of models of practical interest belonging to this class include factor reduced LIBOR market models and Cheyette models. Moreover, factor reduced SDEs originating from a full factor model are in the class to which our theorem applies. The result is also of interest for the theory of degenerate parabolic equations. A more detailed analysis of numerical and computational issues, as well as quantitative experiments, will be found in the second part.