Pub Date: 2024-02-19 | DOI: 10.3390/econometrics12010005
Multivariate Stochastic Volatility Modeling via Integrated Nested Laplace Approximations: A Multifactor Extension
João Pedro Coli de Souza Monteneri Nacinben, Márcio Laurini
This study introduces a multivariate extension to the class of stochastic volatility models, employing integrated nested Laplace approximations (INLA) for estimation. Bayesian estimation of stochastic volatility models through Markov Chain Monte Carlo (MCMC) can become computationally burdensome or inefficient as dataset size and problem complexity increase, and chain convergence can also be problematic. In light of these challenges, this research establishes a computationally efficient approach for estimating multivariate stochastic volatility models. We propose a multifactor formulation estimated using the INLA methodology, which leverages sparse linear algebra and parallelization techniques. To evaluate the effectiveness of the proposed model, we conduct in-sample and out-of-sample empirical analyses of stock market index return series. We also provide a comparative analysis with models estimated using MCMC, demonstrating the computational efficiency and goodness-of-fit improvements achieved with our approach.
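The univariate building block behind this multivariate extension is the standard stochastic volatility model, in which returns are scaled by the exponential of a latent AR(1) log-volatility. A minimal simulation sketch (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def simulate_sv(T=1000, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate the canonical univariate stochastic volatility model:
        y_t = exp(h_t / 2) * eps_t,
        h_t = mu + phi * (h_{t-1} - mu) + eta_t,
    with eps_t ~ N(0, 1) and eta_t ~ N(0, sigma_eta**2)."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    # start the log-volatility from its stationary distribution
    h[0] = mu + rng.normal(0, sigma_eta / np.sqrt(1 - phi**2))
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + rng.normal(0, sigma_eta)
    y = np.exp(h / 2) * rng.normal(0, 1, T)
    return y, h

y, h = simulate_sv()
```

A multifactor version drives several return series with a small number of such latent log-volatility factors; the point of the INLA approach is to estimate the latent field without MCMC sampling.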
Pub Date: 2024-02-09 | DOI: 10.3390/econometrics12010004
Influence of Digitalisation on Business Success in Austrian Traded Prime Market Companies—A Longitudinal Study
Christa Hangl
Software investments can significantly contribute to corporate success by optimising productivity, stimulating creativity, elevating customer satisfaction, and equipping organisations with the essential resources to adapt and thrive in a rapidly changing market. This paper examines whether software investments have an impact on the economic success of the companies listed on the Austrian Traded Prime market (ATX companies). A literature review and qualitative content analysis are performed to answer the research questions. To test the hypotheses, a longitudinal study is conducted: the consolidated financial statements of the businesses under review are evaluated over a ten-year period, and the data are analysed as a panel. This study differs notably from other research on the correlation between digitalisation and economic success. In contrast to prior studies that relied on surveys to assess the level of digitalisation, this study obtained the required data through a comprehensive examination of the annual reports of all the organisations included in the analysis. The regression analysis across all businesses revealed no correlation between software expenditures and economic success. When the regression models were subsequently estimated separately for financial and non-financial companies, however, a correlation between software investments and economic success was evident in both industries.
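The split-sample design described above, one pooled regression followed by separate regressions for financial and non-financial firms, can be sketched with synthetic data. All variable names and numbers below are hypothetical and only illustrate the mechanics, not the study's data or its pooled-sample finding:

```python
import numpy as np

def ols_slope(x, y):
    """Slope coefficient from an OLS regression of y on x with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

rng = np.random.default_rng(1)
# hypothetical data: software-investment intensity and a profitability measure,
# with a positive effect built in within each industry group
software = rng.uniform(0, 1, 200)
is_financial = rng.integers(0, 2, 200).astype(bool)
profit = 0.5 * software + rng.normal(0, 0.1, 200)

pooled_slope = ols_slope(software, profit)
financial_slope = ols_slope(software[is_financial], profit[is_financial])
nonfinancial_slope = ols_slope(software[~is_financial], profit[~is_financial])
```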
Pub Date: 2024-01-17 | DOI: 10.3390/econometrics12010003
Estimating Linear Dynamic Panels with Recentered Moments
Yong Bao
This paper proposes estimating linear dynamic panels by explicitly exploiting the endogeneity of lagged dependent variables and expressing the cross moments between the endogenous lagged dependent variables and disturbances in terms of model parameters. These moments, when recentered, form the basis for model estimation. The resulting estimator's asymptotic properties are derived under different asymptotic regimes (a large number of cross-sectional units or long time spans), stability conditions (with or without a unit root), and error characteristics (homoskedasticity or heteroskedasticity of different forms). Monte Carlo experiments show that it has very good finite-sample performance.
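To see why the endogeneity of the lagged dependent variable matters, recall that the within (fixed-effects) estimator of a dynamic panel is inconsistent for fixed T: demeaning correlates the lagged dependent variable with the transformed error (the Nickell bias). A minimal simulation of that bias (this sketch does not reproduce the paper's recentered-moment estimator itself):

```python
import numpy as np

def within_estimate(rho=0.5, N=500, T=6, seed=2):
    """Within (fixed-effects) estimate of rho in y_it = rho*y_{i,t-1} + a_i + e_it.
    For short panels this estimator is biased downward (Nickell bias), which is
    the problem that moment-based dynamic panel estimators address."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0, 1, N)                       # individual effects
    y = np.zeros((N, T + 1))
    # start each unit near its stationary distribution
    y[:, 0] = a / (1 - rho) + rng.normal(0, np.sqrt(1 / (1 - rho**2)), N)
    for t in range(1, T + 1):
        y[:, t] = rho * y[:, t - 1] + a + rng.normal(0, 1, N)
    ylag, ycur = y[:, :-1], y[:, 1:]
    # demean within each unit, then pool the regression
    ylag_d = ylag - ylag.mean(axis=1, keepdims=True)
    ycur_d = ycur - ycur.mean(axis=1, keepdims=True)
    return (ylag_d * ycur_d).sum() / (ylag_d**2).sum()

rho_hat = within_estimate()   # noticeably below the true rho = 0.5
```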
Pub Date: 2024-01-05 | DOI: 10.3390/econometrics12010002
Is Monetary Policy a Driver of Cryptocurrencies? Evidence from a Structural Break GARCH-MIDAS Approach
Md Samsul Alam, Alessandra Amendola, Vincenzo Candila, Shahram Dehghan Jabarabadi
The introduction of Bitcoin as a distributed peer-to-peer digital cash in 2008, and its first recorded real transaction in 2010, transformed the financial landscape by offering a decentralized alternative to conventional monetary systems that serves the function of a medium of exchange. This study investigates the intricate relationship between cryptocurrencies and monetary policy, with a particular focus on their long-term volatility dynamics. We extend the GARCH-MIDAS (Mixed Data Sampling) framework to the SB-GARCH-MIDAS (Structural Break Mixed Data Sampling) model and analyze the daily returns of three prominent cryptocurrencies (Bitcoin, Binance Coin, and XRP) alongside monthly monetary policy data from the USA and South Africa, allowing for a potential structural break in monetary policy, which yields two GARCH-MIDAS specifications. All samples end on 30 June 2022, although their start dates vary owing to differences in cryptocurrency data availability. Our analysis incorporates model confidence set (MCS) procedures and assesses model performance using various metrics, including AIC, BIC, MSE, and QLIKE, supplemented by comprehensive residual diagnostics. Notably, the SB-GARCH-MIDAS model outperforms the alternatives in forecasting cryptocurrency volatility. Furthermore, we find that, in contrast to their younger counterparts, the long-term volatility of older cryptocurrencies is sensitive to structural breaks in exogenous variables. Our study sheds light on diversification within the cryptocurrency space, shaped by technological characteristics and temporal considerations, and provides practical insights, emphasizing the importance of incorporating monetary policy when assessing cryptocurrency volatility. The implications extend to portfolio management under dynamic considerations, offering valuable insights for investors and decision-makers and underscoring the significance of considering both cryptocurrency types and the economic context of host countries.
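In the GARCH-MIDAS family, the conditional variance factors into a slow-moving long-run component tau_t (driven by the monthly macro variable through MIDAS weights) and a unit-mean short-run GARCH component g_t. A sketch of the short-run recursion, with tau taken as given and all parameter values purely illustrative (the paper's exact specification, including the structural-break extension, is not reproduced here):

```python
import numpy as np

def garch_midas_variance(returns, tau, alpha=0.06, beta=0.91):
    """Conditional variance sigma2_t = tau_t * g_t of a GARCH-MIDAS-type model.

    g_t is the unit-mean short-run component,
        g_t = (1 - alpha - beta) + alpha * r_{t-1}**2 / tau_{t-1} + beta * g_{t-1},
    while tau_t is the long-run (MIDAS) component, supplied by the caller."""
    T = len(returns)
    g = np.empty(T)
    g[0] = 1.0   # unconditional mean of the short-run component
    for t in range(1, T):
        g[t] = (1 - alpha - beta) + alpha * returns[t - 1]**2 / tau[t - 1] + beta * g[t - 1]
    return tau * g

# with zero returns the short-run component decays toward (1-alpha-beta)/(1-beta)
v = garch_midas_variance(np.zeros(200), np.ones(200))
```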
Pub Date: 2023-12-28 | DOI: 10.3390/econometrics12010001
Publisher’s Note: Econometrics—A New Era for a Well-Established Journal
Peter Roth
Throughout its lifespan, a journal goes through many phases—and Econometrics (Econometrics Homepage n [...]
Pub Date: 2023-12-15 | DOI: 10.3390/econometrics11040028
Multistep Forecast Averaging with Stochastic and Deterministic Trends
Mohitosh Kejriwal, Linh Nguyen, Xuewen Yu
This paper presents a new approach to constructing multistep combination forecasts in a nonstationary framework with stochastic and deterministic trends. Existing forecast combination approaches in the stationary setup typically target the in-sample asymptotic mean squared error (AMSE), relying on its approximate equivalence with the asymptotic forecast risk (AFR). Such equivalence, however, breaks down in a nonstationary setup. This paper develops combination forecasts based on minimizing an accumulated prediction errors (APE) criterion that directly targets the AFR and remains valid whether the time series is stationary or not. We show that the performance of APE-weighted forecasts is close to that of the optimal, infeasible combination forecasts. Simulation experiments demonstrate the finite-sample efficacy of the proposed procedure relative to Mallows/Cross-Validation weighting schemes that target the AMSE, and underscore the importance of accounting for both persistence and lag order uncertainty. An application to forecasting US macroeconomic time series confirms the simulation findings and illustrates the benefits of employing the APE criterion for real as well as nominal variables at both short and long horizons. A practical implication of our analysis is that the degree of persistence can play an important role in the choice of combination weights.
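The APE criterion accumulates out-of-sample squared forecast errors for each candidate model rather than in-sample fit. A minimal sketch of turning accumulated prediction errors into combination weights; the inverse-APE rule below is an illustrative assumption, not necessarily the paper's exact weighting scheme:

```python
import numpy as np

def ape_weights(y, forecasts):
    """Combination weights based on accumulated prediction errors (APE).

    `forecasts` is a (n_models, T) array of one-step-ahead forecasts of y.
    Weights proportional to inverse APE are used purely for illustration;
    the weighting rule in the paper may differ."""
    ape = ((forecasts - y) ** 2).sum(axis=1)   # accumulated squared forecast errors
    inv = 1.0 / ape
    return inv / inv.sum()

y = np.array([1.0, 2.0, 3.0, 4.0])
forecasts = np.array([
    [1.1, 2.1, 2.9, 4.1],   # accurate candidate model
    [0.0, 0.0, 0.0, 0.0],   # poor candidate model
])
w = ape_weights(y, forecasts)  # nearly all weight goes to the accurate model
```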
Pub Date: 2023-12-12 | DOI: 10.3390/econometrics11040027
Liquidity and Business Cycles—With Occasional Disruptions
Willi Semmler, Gabriel R. Padró Rosario, Levent Koçkesen
The financial disruptions that started in California in March 2023 and resulted in the closure of several medium-sized U.S. banks shed new light on the role of liquidity in business cycle dynamics. In the normal path of the business cycle, liquidity and output mutually interact. Small shocks generally lead to mean reversion through market forces, as a low degree of liquidity dissipation does not significantly disrupt the economic dynamics. However, larger shocks and greater liquidity dissipation arising from runs on financial institutions and contagion effects can trigger tipping points, financial disruptions, and economic downturns. The latter pose severe challenges for central banks, which during normal times usually maintain a hands-off approach with soft regulation and monitoring, allowing the market to operate. In severe times of liquidity dissipation, however, they must swiftly restore liquidity flows and rebuild trust in stability to avoid further disruptions and meltdowns. In this paper, we present a nonlinear model of the liquidity–macro interaction and econometrically explore these dynamic features with data from the U.S. economy. Guided by the theoretical model, we use nonlinear econometric methods of the Smooth Transition Regression type to study those features, which suggest guidelines for further regulation, monitoring, and institutional enforcement of rules.
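The core of a logistic Smooth Transition Regression is a bounded transition function that moves the coefficient vector smoothly between two regimes as a transition variable crosses a threshold. A minimal sketch (the choice of transition variable and coefficients here is hypothetical, not the paper's estimated model):

```python
import numpy as np

def lstr_fit_value(x, s, beta1, beta2, gamma, c):
    """Fitted value of a logistic Smooth Transition Regression (STR):

        y = (x @ beta1) * (1 - G) + (x @ beta2) * G,
        G(s; gamma, c) = 1 / (1 + exp(-gamma * (s - c))).

    As the transition variable s (e.g., a liquidity measure) crosses the
    threshold c, the coefficients shift smoothly from beta1 to beta2;
    gamma controls how abrupt the transition is."""
    G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))
    return (x @ beta1) * (1 - G) + (x @ beta2) * G

x = np.array([1.0, 2.0])
beta_calm, beta_stress = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# far below the threshold the calm-regime coefficients apply, far above the stress-regime ones
y_calm = lstr_fit_value(x, s=-1.0, beta1=beta_calm, beta2=beta_stress, gamma=50.0, c=0.0)
y_stress = lstr_fit_value(x, s=1.0, beta1=beta_calm, beta2=beta_stress, gamma=50.0, c=0.0)
```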
Pub Date: 2023-11-20 | DOI: 10.3390/econometrics11040026
When It Counts—Econometric Identification of the Basic Factor Model Based on GLT Structures
Sylvia Frühwirth-Schnatter, Darjus Hosszejni, Hedibert Freitas Lopes
Despite the popularity of factor models with simple loading matrices, little attention has been given to formally address the identifiability of these models beyond standard rotation-based identification such as the positive lower triangular (PLT) constraint. To fill this gap, we review the advantages of variance identification in simple factor analysis and introduce the generalized lower triangular (GLT) structures. We show that the GLT assumption is an improvement over PLT without compromise: GLT is also unique but, unlike PLT, a non-restrictive assumption. Furthermore, we provide a simple counting rule for variance identification under GLT structures, and we demonstrate that within this model class, the unknown number of common factors can be recovered in an exploratory factor analysis. Our methodology is illustrated for simulated data in the context of post-processing posterior draws in sparse Bayesian factor analysis.
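A GLT structure constrains where the leading (first nonzero) element of each loading column may sit, whereas PLT pins it to the diagonal. A small checker sketches the distinction; this encodes my reading of the GLT definition (positive leading elements in strictly increasing rows), so treat it as illustrative rather than the paper's formal statement:

```python
import numpy as np

def is_glt(L, tol=1e-12):
    """Check whether a factor loading matrix has a generalized lower
    triangular (GLT) structure: in every column the leading (first nonzero)
    element is positive, and the row positions of these leading elements
    strictly increase from column to column. PLT is the special case where
    the leading element of column j sits exactly in row j."""
    prev_lead = -1
    for j in range(L.shape[1]):
        nz = np.flatnonzero(np.abs(L[:, j]) > tol)
        if nz.size == 0 or L[nz[0], j] <= 0 or nz[0] <= prev_lead:
            return False
        prev_lead = nz[0]
    return True
```

A PLT matrix is always GLT, but GLT also admits loading matrices whose leading elements skip rows, which is what makes the assumption non-restrictive.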
Pub Date: 2023-11-08 | DOI: 10.3390/econometrics11040025
On the Proper Computation of the Hausman Test Statistic in Standard Linear Panel Data Models: Some Clarifications and New Results
Julie Le Gallo, Marc-Alexandre Sénégas
We provide new analytical results for the implementation of the Hausman specification test statistic in a standard panel data model, comparing the version based on the estimators computed from the untransformed random effects model specification under Feasible Generalized Least Squares and the one computed from the quasi-demeaned model estimated by Ordinary Least Squares. We show that the quasi-demeaned model cannot provide a reliable magnitude when implementing the Hausman test in a finite sample setting, although it is the most common approach used to produce the test statistic in econometric software. The difference between the Hausman statistics computed under the two methods can be substantial and even lead to opposite conclusions for the test of orthogonality between the regressors and the individual-specific effects. Furthermore, this difference remains important even with large cross-sectional dimensions as it mainly depends on the within-between structure of the regressors and on the presence of a significant correlation between the individual effects and the covariates in the data. We propose to supplement the test outcomes that are provided in the main econometric software packages with some metrics to address the issue at hand.
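The statistic at issue is the classical Hausman contrast between the fixed-effects and random-effects estimators. A minimal sketch of its computation with illustrative numbers (the paper's point is precisely that the value depends on how the two covariance matrices are obtained, which this sketch does not resolve):

```python
import numpy as np

def hausman_statistic(b_fe, b_re, V_fe, V_re):
    """Classical Hausman statistic
        H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE),
    asymptotically chi-squared with dim(b) degrees of freedom under the null
    that regressors and individual-specific effects are orthogonal. Different
    choices of V_FE and V_RE (FGLS on the untransformed model vs. OLS on the
    quasi-demeaned model) can yield different values of H in finite samples."""
    d = b_fe - b_re
    return float(d @ np.linalg.solve(V_fe - V_re, d))

# illustrative numbers only
b_fe, b_re = np.array([1.0, 2.0]), np.array([0.8, 1.9])
V_fe, V_re = np.diag([0.04, 0.02]), np.diag([0.02, 0.01])
H = hausman_statistic(b_fe, b_re, V_fe, V_re)
```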
Pub Date: 2023-10-12 | DOI: 10.3390/econometrics11040024
Dirichlet Process Log Skew-Normal Mixture with a Missing-at-Random-Covariate in Insurance Claim Analysis
Minkun Kim, David Lindberg, Martin Crane, Marija Bezbradica
In actuarial practice, the modeling of total losses tied to a certain policy is a nontrivial task due to complex distributional features. In the recent literature, the application of the Dirichlet process mixture for insurance loss has been proposed to eliminate the risk of model misspecification biases. However, the effect of covariates as well as missing covariates in the modeling framework is rarely studied. In this article, we propose novel connections among a covariate-dependent Dirichlet process mixture, log-normal convolution, and missing covariate imputation. As a generative approach, our framework models the joint distribution of the outcome and covariates, which allows us to impute missing covariates under the assumption of missingness at random. The performance is assessed by applying our model to several insurance datasets of varying size and data missingness from the literature, and the empirical results demonstrate the benefit of our model compared with existing actuarial models, such as the Tweedie-based generalized linear model, the generalized additive model, and multivariate adaptive regression splines.
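A Dirichlet process mixture is usually worked with through its stick-breaking representation, in which the mixture weights are built by repeatedly breaking off Beta-distributed fractions of a unit stick. A minimal sketch of the (truncated) weight construction; the paper's full model adds the log skew-normal kernels and covariate dependence, which are not reproduced here:

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng):
    """Truncated stick-breaking construction of Dirichlet process mixture
    weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{l<k} (1 - v_l).
    Larger alpha spreads mass over more mixture components."""
    v = rng.beta(1.0, alpha, K)
    v[-1] = 1.0   # close the stick at the truncation level K
    stick_left = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * stick_left

rng = np.random.default_rng(3)
w = stick_breaking_weights(alpha=2.0, K=25, rng=rng)  # valid mixture weights
```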