Francisco Blasques, Vladimír Holý, Petra Tomanová
Abstract In finance, durations between successive transactions are usually modeled by the autoregressive conditional duration model based on a continuous distribution that omits zero values. Zero or close-to-zero durations can be caused by either split transactions or independent transactions. We propose a discrete model allowing for excessive zero values, based on the zero-inflated negative binomial distribution with score dynamics. This model allows us to distinguish between the processes generating split and standard transactions. We use the existing theory on score models to establish the invertibility of the score filter and verify that sufficient conditions hold for the consistency and asymptotic normality of the maximum likelihood estimator of the model parameters. In an empirical study, we find that split transactions cause between 92 % and 98 % of zero and close-to-zero values. Furthermore, the loss of decimal places in the proposed approach is less severe than the incorrect treatment of zero values in continuous models.
{"title":"Zero-Inflated Autoregressive Conditional Duration Model for Discrete Trade Durations with Excessive Zeros","authors":"Francisco Blasques, Vladim'ir Hol'y, Petra Tomanov'a","doi":"10.1515/snde-2022-0008","DOIUrl":"https://doi.org/10.1515/snde-2022-0008","url":null,"abstract":"Abstract In finance, durations between successive transactions are usually modeled by the autoregressive conditional duration model based on a continuous distribution omitting zero values. Zero or close-to-zero durations can be caused by either split transactions or independent transactions. We propose a discrete model allowing for excessive zero values based on the zero-inflated negative binomial distribution with score dynamics. This model allows to distinguish between the processes generating split and standard transactions. We use the existing theory on score models to establish the invertibility of the score filter and verify that sufficient conditions hold for the consistency and asymptotic normality of the maximum likelihood of the model parameters. In an empirical study, we find that split transactions cause between 92 % and 98 % of zero and close-to-zero values. Furthermore, the loss of decimal places in the proposed approach is less severe than the incorrect treatment of zero values in continuous models.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":"7 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136228224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract This paper investigates the stability of threshold autoregressive models. We review recent research on stability issues from both theoretical and empirical standpoints. We provide a sufficient condition for the stationarity and ergodicity of threshold autoregressive models by applying the concept of the joint spectral radius to the underlying switching system. The joint spectral radius offers a generally applicable criterion for determining stability in threshold autoregressive models.
{"title":"Stability in Threshold VAR Models","authors":"Pu Chen, Willi Semmler","doi":"10.1515/snde-2022-0099","DOIUrl":"https://doi.org/10.1515/snde-2022-0099","url":null,"abstract":"Abstract This paper investigates the stability of threshold autoregressive models. We review recent research on stability issues from both a theoretical and empirical standpoint. We provide a sufficient condition for the stationarity and ergodicity of threshold autoregressive models by applying the concept of joint spectral radius to the switching system. The joint spectral radius criterion offers a generally applicable criterion to determine the stability in a threshold autoregressive model.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":"54 41","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134993076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract We study the role of co-jumps in the interest rate futures markets. To disentangle the continuous part of quadratic covariation from co-jumps, we localize the co-jumps precisely through wavelet coefficients and identify the statistically significant ones. Using high-frequency data on U.S. and European yield curves, we quantify the effect of co-jumps on their correlation structure. Empirical findings reveal much stronger co-jumping behavior of the U.S. yield curves in comparison to the European ones. Further, we connect co-jumping behavior to monetary policy announcements and study the effect of 103 FOMC and 119 ECB announcements on the identified co-jumps during the period from January 2007 to December 2017.
{"title":"Co-Jumping of Treasury Yield Curve Rates","authors":"Jozef Baruník, Pavel Fiser","doi":"10.1515/snde-2022-0091","DOIUrl":"https://doi.org/10.1515/snde-2022-0091","url":null,"abstract":"Abstract We study the role of co-jumps in the interest rate futures markets. To disentangle continuous part of quadratic covariation from co-jumps, we localize the co-jumps precisely through wavelet coefficients and identify statistically significant ones. Using high frequency data about U.S. and European yield curves we quantify the effect of co-jumps on their correlation structure. Empirical findings reveal much stronger co-jumping behavior of the U.S. yield curves in comparison to the European one. Further, we connect co-jumping behavior to the monetary policy announcements, and study effect of 103 FOMC and 119 ECB announcements on the identified co-jumps during the period from January 2007 to December 2017.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":" 13","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135292818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract This paper proposes a cross-validation method to estimate the number of breaks in high-dimensional factor models. To preserve the original change structure, a parity-splitting strategy is adopted when employing the cross-validation method. The consistency of the estimator is established under mild conditions. Simulation results show the desired finite-sample performance of the proposed method, especially in comparison with methods that require the tuning parameters to be predetermined.
{"title":"Determination of the Number of Breaks in High-Dimensional Factor Models via Cross-Validation","authors":"Ruichao Zhou, Lu Wang, Jianhong Wu","doi":"10.1515/snde-2022-0037","DOIUrl":"https://doi.org/10.1515/snde-2022-0037","url":null,"abstract":"Abstract This paper proposes a cross-validation method to estimate the number of breaks in high-dimensional factor models. To preserve the original change structure, the parity-splitting strategy is adopted when employing the cross-validation method. The consistency of the estimator is established under some mild conditions. Simulation results show desired finite sample performance of the proposed method, especially in comparison with methods that need to predetermine the tuning parameters.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":"310 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135474691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Gold may act as a hedge, a safe haven, or a diversifier when added to stock portfolios. Motivated by the favorable statistical properties and out-of-sample performance of score-driven models, we investigate, for equity-gold portfolios, whether score-driven mean, volatility, and copula models can improve on the performance of DCC (dynamic conditional correlation) portfolios, the naïve portfolio strategy, and the Standard & Poor's 500 (S&P 500) index. We consider 2880 score-driven portfolio strategies. We use score-driven Clayton, rotated Clayton, Frank, Gaussian, Gumbel, rotated Gumbel, Plackett, and Student's t copulas. We use several classical and score-driven models of the marginal distributions. We use weekly, monthly, quarterly, semi-annual, and annual updates of portfolio weights. We use minimum-variance, maximum Sharpe ratio, and maximum utility function strategies. We use rolling data windows for portfolio optimization over the COVID-19 investment period of February 2020 to September 2021. We classify the competing portfolios by using a new robust multi-step model confidence set (MCS) test approach and provide evidence of the superiority of score-driven portfolios.
{"title":"Comparison of Score-Driven Equity-Gold Portfolios During the COVID-19 Pandemic Using Model Confidence Sets","authors":"Astrid Ayala, Szabolcs Blazsek, Adrian Licht","doi":"10.1515/snde-2022-0107","DOIUrl":"https://doi.org/10.1515/snde-2022-0107","url":null,"abstract":"Abstract Gold may have a hedge, safe haven, or diversifier property when added to stock portfolios. Motivated by the favorable statistical properties and out-of-sample performance of score-driven models, we investigate for equity-gold portfolios whether score-driven mean, volatility, and copula models can improve the performances of DCC (dynamic conditional correlation) portfolios, the naïve portfolio strategy, and the Standard & Poor’s 500 (S&P 500) index. We consider 2880 score-driven portfolio strategies. We use score-driven Clayton, rotated Clayton, Frank, Gaussian, Gumbel, rotated Gumbel, Plackett, and Student’s t copulas. We use several classical and score-driven models of marginal distribution. We use weekly, monthly, quarterly, semi-annual, and annual updates of portfolio weights. We use minimum-variance, maximum Sharpe ratio, and maximum utility function strategies. We use rolling data-windows for portfolio optimization for the COVID-19 investment period of February 2020 to September 2021. We classify competing portfolios by using a new robust multi-step model confidence set (MCS) test approach and provide evidence of the superiority of score-driven portfolios.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":"82 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135480142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Time-varying parameter (TVP) regression models can involve a huge number of coefficients. Careful prior elicitation is required to yield sensible posterior and predictive inferences. In addition, the computational demands of Markov chain Monte Carlo (MCMC) methods mean their use is limited to cases where the number of predictors is not too large. In light of these two concerns, this paper proposes a new dynamic shrinkage prior which reflects the empirical regularity that TVPs are typically sparse (i.e. time variation may occur only episodically and only for some of the coefficients). A scalable MCMC algorithm is developed which is capable of handling very high-dimensional TVP regressions or TVP vector autoregressions. In an exercise using artificial data, we demonstrate the accuracy and computational efficiency of our methods. In an application involving the term structure of interest rates in the eurozone, we find that our dynamic shrinkage prior effectively picks out small amounts of parameter change and that our methods forecast well.
{"title":"Dynamic Shrinkage Priors for Large Time-Varying Parameter Regressions Using Scalable Markov Chain Monte Carlo Methods","authors":"Niko Hauzenberger, Florian Huber, Gary Koop","doi":"10.1515/snde-2022-0077","DOIUrl":"https://doi.org/10.1515/snde-2022-0077","url":null,"abstract":"Abstract Time-varying parameter (TVP) regression models can involve a huge number of coefficients. Careful prior elicitation is required to yield sensible posterior and predictive inferences. In addition, the computational demands of Markov Chain Monte Carlo (MCMC) methods mean their use is limited to the case where the number of predictors is not too large. In light of these two concerns, this paper proposes a new dynamic shrinkage prior which reflects the empirical regularity that TVPs are typically sparse (i.e. time variation may occur only episodically and only for some of the coefficients). A scalable MCMC algorithm is developed which is capable of handling very high dimensional TVP regressions or TVP Vector Autoregressions. In an exercise using artificial data we demonstrate the accuracy and computational efficiency of our methods. In an application involving the term structure of interest rates in the eurozone, we find our dynamic shrinkage prior to effectively pick out small amounts of parameter change and our methods to forecast well.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":"19 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135874955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontmatter. Studies in Nonlinear Dynamics and Econometrics, published 2023-09-01. DOI: 10.1515/snde-2023-frontmatter4.
Abstract The paper examines the question of non-anonymous Growth Incidence Curves (na-GIC) from a Bayesian inferential point of view. Building on the notion of conditional quantiles of Barnett (1976. “The Ordering of Multivariate Data.” Journal of the Royal Statistical Society: Series A 139: 318–55), we show that removing the anonymity axiom leads to a complex and shaky curve that has to be smoothed using a non-parametric approach. We opt for a Bayesian approach using Bernstein polynomials, which provides confidence intervals, tests, and a simple way to compare two na-GICs. The methodology is applied to examine wage dynamics in a US university, with particular attention to unbundling and anti-discrimination policies. We detect wage-scale compression at higher quantiles for all academics and an apparent pro-female wage increase relative to males. However, this pro-female policy works only for academics and not for the para-academic categories created by the unbundling policy.
{"title":"Bayesian inference for non-anonymous growth incidence curves using Bernstein polynomials: an application to academic wage dynamics","authors":"Edwin Fourrier-Nicolaï, M. Lubrano","doi":"10.1515/snde-2022-0109","DOIUrl":"https://doi.org/10.1515/snde-2022-0109","url":null,"abstract":"Abstract The paper examines the question of non-anonymous Growth Incidence Curves (na-GIC) from a Bayesian inferential point of view. Building on the notion of conditional quantiles of Barnett (1976. “The Ordering of Multivariate Data.” Journal of the Royal Statistical Society: Series A 139: 318–55), we show that removing the anonymity axiom leads to a complex and shaky curve that has to be smoothed, using a non-parametric approach. We opted for a Bayesian approach using Bernstein polynomials which provides confidence intervals, tests and a simple way to compare two na-GICs. The methodology is applied to examine wage dynamics in a US university with a particular attention devoted to unbundling and anti-discrimination policies. Our findings are the detection of wage scale compression for higher quantiles for all academics and an apparent pro-female wage increase compared to males. But this pro-female policy works only for academics and not for the para-academics categories created by the unbundling policy.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":" ","pages":""},"PeriodicalIF":0.8,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45602662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The issue of modelling observations generated in matrix form over time is key in economics, finance, and many domains of application. While it is common to model vectors of observations through standard vector time series analysis, original matrix-valued data often reflect structures of the time series observations that can be further exploited to model interdependencies. In this paper, we propose a novel matrix autoregressive model in bilinear form which, while leading to substantial dimensionality reduction and enhanced interpretability, (a) allows responses and potential covariates of interest to have different dimensions; (b) provides a suitable estimation procedure for matrix autoregression with a lag structure; and (c) facilitates the introduction of Bayesian estimators. We propose maximum likelihood and Bayesian estimation with an Independent-Normal prior formulation, and study the theoretical properties of the estimators through simulated and real examples.
{"title":"Matrix autoregressive models: generalization and Bayesian estimation","authors":"A. Celani, Paolo Pagnottoni","doi":"10.2139/ssrn.4277828","DOIUrl":"https://doi.org/10.2139/ssrn.4277828","url":null,"abstract":"Abstract The issue of modelling observations generated in matrix form over time is key in economics, finance and many domains of application. While it is common to model vectors of observations through standard vector time series analysis, original matrix-valued data often reflect different types of structures of time series observations which can be further exploited to model interdependencies. In this paper, we propose a novel matrix autoregressive model in a bilinear form which, while leading to a substantial dimensionality reduction and enhanced interpretability: (a) allows responses and potential covariates of interest to have different dimensions; (b) provides a suitable estimation procedure for matrix autoregression with lag structure; (c) facilitates the introduction of Bayesian estimators. We propose maximum likelihood and Bayesian estimation with Independent-Normal prior formulation, and study the theoretical properties of the estimators through simulated and real examples.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":" ","pages":""},"PeriodicalIF":0.8,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45611414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract This paper considers an extension of the Hodrick–Prescott (HP) filter. It is a hybrid of the HP filter and multiple regression, which we refer to as the “HPX filter”. It is well known that the HP filter has a unique global minimizer and that its solution can be represented explicitly in matrix notation. Does the HPX filter also have a unique global minimizer? Is this achieved without any additional assumptions? Can its solution be expressed explicitly in matrix notation? In this paper, we answer these questions. In addition, the paper (i) provides an alternative perspective on the filter by representing it as a generalized ridge regression and (ii) gives an extension of it, which is a hybrid of the Whittaker–Henderson method of graduation and multiple regression.
{"title":"HPX filter: a hybrid of Hodrick–Prescott filter and multiple regression","authors":"H. Yamada","doi":"10.1515/snde-2023-0004","DOIUrl":"https://doi.org/10.1515/snde-2023-0004","url":null,"abstract":"Abstract This paper considers an extension of Hodrick–Prescott (HP) filter. It is a hybrid of HP filter and multiple regression. We refer to the filter as “HPX filter”. It is well known that HP filter has a unique global minimizer and the solution can be represented in matrix notation explicitly. Does HPX filter also have a unique global minimizer? Is it accomplished without any additional assumptions? Can the solution be expressed in matrix notation explicitly? In this paper, we answer these questions. In addition, this paper (i) provides an alternative perspective on the filter by representing it as a generalized ridge regression and (ii) gives an extension of it, which is a hybrid of Whittaker–Henderson method of graduation and multiple regression.","PeriodicalId":46709,"journal":{"name":"Studies in Nonlinear Dynamics and Econometrics","volume":" ","pages":""},"PeriodicalIF":0.8,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45871275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}