
Latest publications in ERN: Other Econometrics: Econometric Model Construction

Rank Determination in Tensor Factor Model
Pub Date : 2020-11-13 DOI: 10.1214/22-EJS1991
Yuefeng Han, Cun-Hui Zhang, Rong Chen
The factor model is an appealing and effective analytic tool for high-dimensional time series, with a wide range of applications in economics, finance and statistics. One of the fundamental issues in using factor models for time series in practice is determining the number of factors to use. This paper develops two criteria for this task for tensor factor models, where the signal part of an observed time series in tensor form assumes a Tucker decomposition with the core tensor as the factor tensor. The task is to determine the dimensions of the core tensor. One of the proposed criteria is similar to information-based criteria for model selection; the other extends approaches based on ratios of consecutive eigenvalues, often used in factor analysis for panel time series. The new criteria are designed to locate, using sample versions, the gap between the true smallest non-zero eigenvalue and the zero eigenvalues of a functional of the population auto-cross-covariances of the tensor time series. As the sample size and tensor dimensions increase, this gap widens under regularity conditions, yielding consistency of the rank estimator. The criteria are built upon the existing non-iterative and iterative estimation procedures for the tensor factor model, yielding different performances. We provide sufficient conditions and convergence rates for the consistency of the criteria as the sample size $T$ and the dimensions of the observed tensor time series go to infinity. The results include vector factor models as special cases, with additional convergence rates, and cover cases in which factors have different signal strengths. In addition, convergence rates for the eigenvalue estimators are established. Simulation studies show promising finite-sample performance for the two criteria.
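The eigenvalue-ratio idea can be illustrated in the vector special case (a sketch of the general principle, not the authors' tensor procedure; all dimensions and parameter values below are made up for the example): accumulate lagged auto-cross-covariances, then pick the rank where consecutive eigenvalues drop most sharply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a vector factor model y_t = A f_t + e_t with r = 2 factors.
T, p, r = 500, 8, 2
A = rng.normal(size=(p, r))                       # factor loadings
f = np.zeros((T, r))
for t in range(1, T):                             # AR(1) factor dynamics
    f[t] = 0.7 * f[t - 1] + rng.normal(size=r)
y = f @ A.T + 0.1 * rng.normal(size=(T, p))       # weak idiosyncratic noise

# Accumulate lagged auto-cross-covariances M = sum_h Gamma_h Gamma_h',
# whose leading eigenvalues reflect the factor rank.
h0 = 2
M = np.zeros((p, p))
for h in range(1, h0 + 1):
    G = (y[h:].T @ y[:-h]) / (T - h)
    M += G @ G.T

lam = np.sort(np.linalg.eigvalsh(M))[::-1]        # eigenvalues, descending
ratios = lam[:-1] / lam[1:]
r_hat = int(np.argmax(ratios[: p // 2])) + 1      # eigenvalue-ratio estimator
print(r_hat)
```

With a strong signal-to-noise ratio the largest ratio sits at the boundary between signal and noise eigenvalues, recovering the true rank.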
Citations: 25
A Toolkit for Robust Risk Assessment Using F-Divergences
Pub Date : 2020-08-25 DOI: 10.2139/ssrn.3680475
T. Kruse, Judith C. Schneider, Nikolaus Schweizer
This paper assembles a toolkit for the assessment of model risk when model uncertainty sets are defined in terms of an F-divergence ball around a reference model. We propose a new family of F-divergences that are easy to implement and flexible enough to imply convincing uncertainty sets for broad classes of reference models. We use our theoretical results to construct concrete examples of divergences that allow for significant amounts of uncertainty about lognormal or heavy-tailed Weibull reference models without implying that the worst case is necessarily infinitely bad. We implement our tools in an open-source software package and apply them to three risk management problems from operations management, insurance, and finance. This paper was accepted by Baris Ata, stochastic models and simulation.
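As a minimal numerical illustration of divergence-ball robustness (using the classical KL divergence, not the authors' new F-divergence family; the lognormal reference and radius values are arbitrary), the worst-case expected loss over a KL ball admits the well-known one-dimensional dual representation sup over {Q : KL(Q||P) <= eta} of E_Q[L] equals inf over theta > 0 of theta*log E_P[exp(L/theta)] + theta*eta:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
loss = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # lognormal reference model

def worst_case_mean(loss, eta):
    """Worst-case expected loss over the KL ball {Q : KL(Q||P) <= eta},
    via the dual inf_{theta>0} theta*log E_P[exp(loss/theta)] + theta*eta."""
    def dual(log_theta):
        theta = np.exp(log_theta)             # parametrize to enforce theta > 0
        z = loss / theta
        lme = np.log(np.mean(np.exp(z - z.max()))) + z.max()   # stable log-mean-exp
        return theta * lme + theta * eta
    return minimize_scalar(dual, bounds=(-3.0, 6.0), method="bounded").fun

print(loss.mean(), worst_case_mean(loss, 0.05), worst_case_mean(loss, 0.2))
```

The worst case exceeds the reference mean and grows with the radius eta; for the KL divergence and unbounded losses the worst case can be infinite, which is exactly the pathology the paper's divergence family is designed to avoid.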
Citations: 1
One-factor Hull-White Model Calibration for CVA - Part I: Instrument Selection With a Kink
Pub Date : 2020-07-09 DOI: 10.2139/ssrn.3659430
Christoph M. Puetter, Stefano Renzitti
This paper is the first of a multi-part series on the calibration of the one-factor Hull-White short rate model for the purpose of computing CVAs (and xVAs) with an xVA system. It introduces an atypical bootstrapping scheme for the calibration of the short rate volatility. The second part focuses on the selection of the mean reversion parameter. In both expositions we present long-term time series results for EUR, JPY, and USD, covering the period from the beginning of 2009 (at the earliest) to spring 2020.
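For readers unfamiliar with the model being calibrated, a minimal Euler simulation of the one-factor Hull-White dynamics dr_t = (theta(t) - a*r_t) dt + sigma dW_t (with theta held constant here for simplicity; all parameter values are illustrative, and this is not the paper's bootstrapping scheme) shows the mean reversion toward theta/a:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama simulation of dr_t = (theta - a r_t) dt + sigma dW_t
# with constant theta, so the long-run mean of the short rate is theta / a.
a, sigma, theta = 0.10, 0.01, 0.002       # mean reversion, vol, drift level
r0, horizon, n_steps, n_paths = 0.05, 30.0, 1000, 10_000
dt = horizon / n_steps

r = np.full(n_paths, r0)
for _ in range(n_steps):
    r += (theta - a * r) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

# Closed-form mean: E[r_T] = theta/a + (r0 - theta/a) * exp(-a T)
expected = theta / a + (r0 - theta / a) * np.exp(-a * horizon)
print(r.mean(), expected)
```

In the paper's setting theta(t) is time-dependent and fitted to the initial yield curve, while a and sigma are the calibration targets.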
Citations: 0
High-Dimensional Granger Causality Tests with an Application to VIX and News
Pub Date : 2020-05-29 DOI: 10.2139/ssrn.3615718
Andrii Babii, Eric Ghysels, Jonas Striaukas
We study Granger causality testing for high-dimensional time series using regularized regressions. To perform proper inference, we rely on heteroskedasticity and autocorrelation consistent (HAC) estimation of the asymptotic variance and develop the inferential theory in the high-dimensional setting. To recognize the time-series data structures, we focus on the sparse-group LASSO (sg-LASSO) estimator, which includes the LASSO and the group LASSO as special cases. We establish the debiased central limit theorem for low-dimensional groups of regression coefficients and study the HAC estimator of the long-run variance based on the sg-LASSO residuals. This leads to valid time-series inference for individual regression coefficients as well as groups, including Granger causality tests. The treatment relies on a new Fuk–Nagaev inequality for a class of τ-mixing processes with heavier than Gaussian tails, which is of independent interest. In an empirical application, we study the Granger causal relationship between the VIX and financial news.
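The selection step behind such tests can be sketched with a plain LASSO (a special case of the sg-LASSO the paper studies; no debiasing or HAC inference is attempted, and the data-generating process below is invented for the example): regress the target on its own lag plus lags of candidate causes, and see which lags survive the penalty.

```python
import numpy as np

rng = np.random.default_rng(3)

# y is Granger-caused by x (coefficient 0.8 on x_{t-1}) but not by z.
T = 600
x = rng.normal(size=T)
z = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

# Design matrix: own lag plus one lag of each candidate cause.
X = np.column_stack([y[:-1], x[:-1], z[:-1]])
Y = y[1:]
X = (X - X.mean(axis=0)) / X.std(axis=0)
Y = Y - Y.mean()

def lasso_cd(X, Y, alpha, n_sweeps=200):
    """LASSO via cyclic coordinate descent with soft-thresholding,
    minimizing 0.5*||Y - X b||^2 + alpha*n*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ (Y - X @ beta + X[:, j] * beta[j])  # partial residual
            beta[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return beta

beta = lasso_cd(X, Y, alpha=0.05)
print(beta)   # the lag of x is retained; the lag of z is shrunk toward zero
```

A formal Granger causality test would then build on the debiased estimator and the HAC variance developed in the paper, rather than reading selection off the raw LASSO path.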
Citations: 14
Selective Linear Segmentation For Detecting Relevant Parameter Changes
Pub Date : 2019-08-20 DOI: 10.2139/ssrn.3461554
A. Dufays, Elysée Aristide Houndetoungan, Alain Coen
Change-point processes are a flexible approach to modeling long time series. We propose a method to uncover which model parameters truly vary when a change-point is detected. Given a set of breakpoints, we use a penalized likelihood approach to select the best set of parameters that change over time, and we prove that the penalty function leads to consistent selection of the true model. Estimation is carried out via the deterministic annealing expectation-maximization algorithm. Our method accounts for model selection uncertainty and assigns a probability to every possible time-varying parameter specification. Monte Carlo simulations show that the method works well for many time series models, including heteroskedastic processes. For a sample of 14 hedge fund (HF) strategies, using an asset-based style pricing model, we shed light on the promising ability of our method to detect the time-varying dynamics of risk exposures as well as to forecast HF returns.
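The core question — which parameters actually change at a detected breakpoint — can be sketched with a toy penalized comparison (using plain BIC as the penalty rather than the authors' penalty function, with a single known breakpoint and made-up data):

```python
import numpy as np

rng = np.random.default_rng(4)

# One known breakpoint at tau: the mean shifts, the variance does not.
T, tau = 400, 200
y = np.concatenate([rng.normal(0.0, 1.0, tau), rng.normal(1.0, 1.0, T - tau)])

def gauss_loglik(x, mu, sig2):
    return -0.5 * np.sum(np.log(2.0 * np.pi * sig2) + (x - mu) ** 2 / sig2)

def bic(change_mean, change_var):
    """BIC of a Gaussian model in which the mean and/or variance may shift at tau."""
    a, b = y[:tau], y[tau:]
    mu = (a.mean(), b.mean()) if change_mean else (y.mean(), y.mean())
    ra, rb = a - mu[0], b - mu[1]
    if change_var:
        s = (ra.var(), rb.var())
    else:
        pooled = np.concatenate([ra, rb]).var()
        s = (pooled, pooled)
    ll = gauss_loglik(a, mu[0], s[0]) + gauss_loglik(b, mu[1], s[1])
    k = 2 + change_mean + change_var          # base params + extra shift params
    return -2.0 * ll + k * np.log(T)

specs = {(m, v): bic(m, v) for m in (False, True) for v in (False, True)}
best = min(specs, key=specs.get)
print(best)   # the mean shift should be detected; a variance shift usually is not
```

The paper's method replaces this exhaustive toy comparison with a consistent penalty and the deterministic annealing EM algorithm, and attaches probabilities to all candidate specifications instead of picking a single winner.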
Citations: 1
When Simplicity Offers a Benefit, Not a Cost: Closed-Form Estimation of the GARCH(1,1) Model that Enhances the Efficiency of Quasi-Maximum Likelihood
Pub Date : 2019-02-15 DOI: 10.17016/FEDS.2019.030
Todd Prono
Simple, multi-step estimators are developed for the popular GARCH(1,1) model, where these estimators are either available entirely in closed form or dependent upon a preliminary estimate from, for example, quasi-maximum likelihood. Identification derives from asymmetry in the model's innovations, with skewness cast as an instrument in a linear, two-stage least squares estimator. Properties of regular variation coupled with point process theory establish the distributional limits of these estimators as stable, though highly non-Gaussian, with slow convergence rates relative to the √n case. The moment existence criteria necessary for these results are consistent with the heavy-tailed features of many financial returns. In light-tailed cases that support asymptotic normality for these simple estimators, conditions are discovered under which the simple estimators can enhance the asymptotic efficiency of quasi-maximum likelihood estimation. In small samples, extensive Monte Carlo experiments reveal these efficiency enhancements to be available for (very) heavy-tailed cases. Consequently, the proposed simple estimators are members of the class of multi-step estimators aimed at improving the efficiency of the quasi-maximum likelihood estimator.
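The flavor of closed-form GARCH estimation can be conveyed with a textbook moment-based sketch (not the paper's skewness-instrumented 2SLS estimator; parameter values and sample size are illustrative): squared returns from a GARCH(1,1) follow an ARMA(1,1) whose AR coefficient is the persistence a + b, so a ratio of sample autocorrelations recovers it without any likelihood optimization.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a GARCH(1,1): sigma2_t = w + a*r_{t-1}^2 + b*sigma2_{t-1}
w, a, b = 0.1, 0.10, 0.80
T = 200_000
r = np.zeros(T)
sig2 = w / (1 - a - b)                    # start at the unconditional variance
for t in range(1, T):
    sig2 = w + a * r[t - 1] ** 2 + b * sig2
    r[t] = np.sqrt(sig2) * rng.normal()

# Squared returns follow an ARMA(1,1) with AR coefficient a + b, so their
# autocorrelations satisfy rho_h = (a+b)^(h-1) * rho_1; the ratio of the
# first two autocorrelations therefore recovers the persistence in closed form.
x = r ** 2 - np.mean(r ** 2)

def acf(h):
    return np.mean(x[h:] * x[:-h]) / np.mean(x * x)

persistence_hat = acf(2) / acf(1)
print(persistence_hat)   # rough estimate of a + b = 0.90
```

Such moment estimators are simple but can converge slowly under heavy tails, which is precisely the regime the paper's limit theory addresses.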
Citations: 0
Currency Unions and Trade: A PPML Re‐Assessment with High‐Dimensional Fixed Effects
Pub Date : 2018-11-29 DOI: 10.1111/obes.12283
Mario Larch, Joschka Wanner, Y. Yotov, Thomas Zylkin
Recent work on the effects of currency unions (CUs) on trade stresses the importance of using many countries and years in order to obtain reliable estimates. However, for large samples, computational issues associated with the three‐way (exporter‐time, importer‐time, and country pair) fixed effects currently recommended in the gravity literature have heretofore limited the choice of estimator, leaving an important methodological gap. To address this gap, we introduce an iterative Poisson pseudo‐maximum likelihood (PPML) estimation procedure that facilitates the inclusion of these fixed effects for large data sets and also allows for correlated errors across countries and time. When applied to a comprehensive sample with more than 200 countries trading over 65 years, these innovations flip the conclusions of an otherwise rigorously specified linear model. Most importantly, our estimates for both the overall CU effect and the Euro effect specifically are economically small and statistically insignificant. We also document that linear and PPML estimates of the Euro effect increasingly diverge as the sample size grows.
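On a toy scale, PPML with fixed effects reduces to a Poisson regression estimated by Newton / iteratively reweighted least squares (a small two-way sketch with explicit dummies and a synthetic currency-union indicator; the paper's contribution is precisely that this naive dummy approach does not scale to three-way fixed effects on large panels):

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny gravity setup: trade_ij is Poisson with exporter and importer effects
# plus a synthetic "currency union" dummy whose true coefficient is 0.2.
n = 20
exp_fe = rng.normal(0.0, 0.3, n)
imp_fe = rng.normal(0.0, 0.3, n)
cu = (rng.random((n, n)) < 0.2).astype(float)
np.fill_diagonal(cu, 0.0)

rows, y = [], []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        d_exp = np.eye(n)[i]          # exporter dummies (absorb the constant)
        d_imp = np.eye(n)[j][1:]      # importer dummies, one dropped
        rows.append(np.concatenate([[cu[i, j]], d_exp, d_imp]))
        y.append(rng.poisson(np.exp(2.0 + exp_fe[i] + imp_fe[j] + 0.2 * cu[i, j])))
X, y = np.array(rows), np.array(y, dtype=float)

# PPML via Newton / iteratively reweighted least squares, warm-started
# from a log-linear fit for stability.
beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]
for _ in range(30):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
print(beta[0])   # estimate of the currency-union coefficient
```

With hundreds of countries and decades of data, the dummy matrix above becomes infeasibly large, which motivates the iterative procedure the authors develop.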
Citations: 142
Empirical Validation of Agent-Based Models
Pub Date : 2018-11-12 DOI: 10.1016/BS.HESCOM.2018.02.003
T. Lux, Remco C. J. Zwinkels
Citations: 72
Predictive Regressions Under Asymmetric Loss: Factor Augmentation and Model Selection
Pub Date : 2018-05-11 DOI: 10.2139/ssrn.3180690
M. Demetrescu, Sinem Hacioglu Hoke
This paper discusses the specifics of forecasting using factor-augmented predictive regressions under general loss functions. In line with the literature, we employ principal component analysis to extract factors from the set of predictors. In addition, we also extract information on the volatility of the series to be predicted, since the volatility is forecast-relevant under non-quadratic loss functions. We ensure asymptotic unbiasedness of the forecasts under the relevant loss by estimating the predictive regression through the minimization of the in-sample average loss. Finally, we select the most promising predictors for the series to be forecast by employing an information criterion that is tailored to the relevant loss. Using a large monthly data set for the US economy, we assess the proposed adjustments in a pseudo out-of-sample forecasting exercise for various variables. As expected, the use of estimation under the relevant loss is found to be effective. Using an additional volatility proxy as the predictor and conducting model selection that is tailored to the relevant loss function enhances the forecast performance significantly.
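The two central ingredients — principal-component factor extraction and estimation under the relevant loss — can be sketched as follows (a minimal lin-lin-loss example on synthetic data, not the paper's full procedure; the factor structure, tau = 0.75, and all parameter values are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# A panel of predictors driven by one common factor; the target loads on it.
T, N = 300, 20
f = rng.normal(size=T)
X = np.outer(f, rng.normal(size=N)) + 0.5 * rng.normal(size=(T, N))
y = 0.8 * f + rng.normal(size=T)

# Principal-component factor extraction.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
fhat = Xc @ Vt[0]
fhat *= np.sign(fhat @ y)        # resolve the sign indeterminacy of the PC
fhat /= fhat.std()

# Fit the predictive regression by minimizing in-sample lin-lin loss with
# tau = 0.75: under-predictions cost three times as much as over-predictions,
# so the fitted line targets the 75% conditional quantile.
tau = 0.75

def linlin(params):
    e = y - (params[0] + params[1] * fhat)
    return np.mean(np.where(e >= 0, tau * e, (tau - 1) * e))

res = minimize(linlin, x0=np.zeros(2), method="Nelder-Mead")
a_hat, b_hat = res.x
print(a_hat, b_hat)   # intercept shifts upward relative to a squared-error fit
```

Minimizing the relevant loss in-sample, rather than least squares, is what delivers the asymptotic unbiasedness under asymmetric loss that the abstract describes.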
Citations: 4
Asymptotic Post-Selection Inference for Akaike's Information Criterion
Pub Date : 2018-02-01 DOI: 10.2139/ssrn.3167253
Ali Charkhi, G. Claeskens
Ignoring the model selection step in inference after selection is harmful. This paper studies the asymptotic distribution of estimators after model selection using the Akaike information criterion. First, we consider the classical setting in which a true model exists and is included in the candidate set of models. We exploit the overselection property of this criterion in the construction of a selection region, and obtain the asymptotic distribution of estimators and linear combinations thereof conditional on the selected model. The limiting distribution depends on the set of competitive models and on the smallest overparameterized model. Second, we relax the assumption about the existence of a true model, and obtain uniform asymptotic results. We use simulation to study the resulting postselection distributions and to calculate confidence regions for the model parameters. We apply the method to data.
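The overselection property the paper exploits is easy to reproduce by simulation (a toy nested-regression example with invented data, not the paper's inferential construction): AIC's penalty of 2 per parameter retains a spurious regressor with non-vanishing probability, which is why naive post-selection inference is distorted.

```python
import numpy as np

rng = np.random.default_rng(8)

# True model uses only the first regressor; AIC chooses among nested models
# with 1..4 regressors.  Over repeated samples, AIC sometimes overselects,
# which is exactly why the naive post-selection distribution is distorted.
def aic_select(n=200, p=4, reps=500):
    sizes = []
    for _ in range(reps):
        X = rng.normal(size=(n, p))
        y = 1.0 * X[:, 0] + rng.normal(size=n)
        best, best_aic = None, np.inf
        for k in range(1, p + 1):
            Xk = X[:, :k]
            beta = np.linalg.lstsq(Xk, y, rcond=None)[0]
            rss = np.sum((y - Xk @ beta) ** 2)
            aic = n * np.log(rss / n) + 2 * k     # Gaussian AIC
            if aic < best_aic:
                best, best_aic = k, aic
        sizes.append(best)
    return np.array(sizes)

sizes = aic_select()
print((sizes == 1).mean(), (sizes > 1).mean())  # AIC overselects a nontrivial share
```

Conditioning on the selected model, as the paper does, changes the sampling distribution of the retained coefficients relative to the unconditional limit.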
Citations: 2