This note is in response to a recent paper by Hansen (2007, Econometrica), who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen’s paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen’s approach. First is the assumption that the approximating models are strictly nested in a way that depends on the ordering of regressors. Often there is no clear basis for the ordering, and the approach does not permit non-nested models, which are often more realistic in practice. Second, for the optimality result to hold, the model weights are required to lie within a special discrete set. Indeed, Hansen (2007) noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof which shows that the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested set-up that allows any linear combination of regressors in the approximating models that make up the model average estimator. These are important extensions, and our results strengthen the theoretical basis for the use of the Mallows criterion in model averaging.
{"title":"Least Squares Model Averaging: Some Further Results","authors":"Xinyu Zhang, Alan T. K. Wan, Guohua Zou","doi":"10.2139/ssrn.3481542","DOIUrl":"https://doi.org/10.2139/ssrn.3481542","url":null,"abstract":"This note is in response to a recent paper by Hansen (2007, Econometrica), who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen’s paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen’s approach. First is the assumption that the approximating models are strictly nested in a way that depends on the ordering of regressors. Often there is no clear basis for the ordering, and the approach does not permit non-nested models, which are often more realistic in practice. Second, for the optimality result to hold, the model weights are required to lie within a special discrete set. Indeed, Hansen (2007) noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof which shows that the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested set-up that allows any linear combination of regressors in the approximating models that make up the model average estimator. These are important extensions, and our results strengthen the theoretical basis for the use of the Mallows criterion in model averaging.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128152303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
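The Mallows model-averaging idea the note builds on can be sketched numerically. Below is a minimal, hypothetical Python illustration with two nested OLS models and a continuous weight w on [0, 1], minimizing C(w) = RSS(w) + 2·σ²·k(w); the data, the grid search, and the two-model setup are our own assumptions for illustration, not taken from the paper.

```python
# Sketch of Mallows model averaging (continuous weights) for two nested
# OLS models: intercept-only vs. intercept + slope. Illustrative only.

def ols_slope_fit(x, y):
    """Fitted values of a simple linear regression y ~ a + b*x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return [a + b * xi for xi in x]

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]      # hypothetical data
y = [0.1, 1.2, 1.9, 3.2, 3.8, 5.1]

n = len(y)
fit1 = [sum(y) / n] * n                  # model 1: intercept only, k1 = 1
fit2 = ols_slope_fit(x, y)               # model 2: intercept + slope, k2 = 2

# sigma^2 estimated from the largest model
sigma2 = sum((yi - fi) ** 2 for yi, fi in zip(y, fit2)) / (n - 2)

def mallows(w):
    """Mallows criterion C(w) = RSS(w) + 2*sigma^2*k(w) at weight w on model 2."""
    avg = [w * f2 + (1 - w) * f1 for f1, f2 in zip(fit1, fit2)]
    rss = sum((yi - ai) ** 2 for yi, ai in zip(y, avg))
    k = w * 2 + (1 - w) * 1              # effective parameter count
    return rss + 2 * sigma2 * k

# continuous weights approximated by a fine grid on [0, 1]
best_w = min((i / 100 for i in range(101)), key=mallows)
# with this nearly linear data, essentially all weight goes to model 2
```

With more than two models, `mallows` takes a weight vector on the simplex and the grid search becomes a quadratic program, but the criterion is the same.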
In this study we empirically investigate the determinants of exposure at default (EAD) and build a predictive econometric model for it, using a sample of Moody’s-rated defaulted firms with revolving credits. We extend prior empirical work by considering alternative determinants of EAD risk in addition to the traditional factors (e.g., credit rating). Various measures of EAD risk are derived and compared. We build a multiple regression model in the generalized linear class and examine the comparative rank-ordering and predictive accuracy properties of these measures. We find weak evidence of counter-cyclicality in EAD. While we find EAD risk to decrease with default risk, utilization has the strongest inverse relation. We also find EAD risk reduced for greater leverage, liquidity, and debt cushion; and increased for greater company size, higher collateral rank, or more bank debt in the capital structure of the defaulted obligor. The models are validated rigorously through a rolling out-of-time and out-of-sample resampling experiment. In addition to the credit risk management implications of this study (the parameterization of pricing and portfolio management models), the results are useful for quantifying EAD risk for banks qualifying for the Advanced IRB approach under the regulatory framework of the Basel II accord.
{"title":"An Empirical Study of Exposure at Default","authors":"Michael Jacobs","doi":"10.2139/ssrn.1149407","DOIUrl":"https://doi.org/10.2139/ssrn.1149407","url":null,"abstract":"In this study we empirically investigate the determinants of exposure at default (EAD) and build a predictive econometric model for it, using a sample of Moody’s-rated defaulted firms with revolving credits. We extend prior empirical work by considering alternative determinants of EAD risk in addition to the traditional factors (e.g., credit rating). Various measures of EAD risk are derived and compared. We build a multiple regression model in the generalized linear class and examine the comparative rank-ordering and predictive accuracy properties of these measures. We find weak evidence of counter-cyclicality in EAD. While we find EAD risk to decrease with default risk, utilization has the strongest inverse relation. We also find EAD risk reduced for greater leverage, liquidity, and debt cushion; and increased for greater company size, higher collateral rank, or more bank debt in the capital structure of the defaulted obligor. The models are validated rigorously through a rolling out-of-time and out-of-sample resampling experiment. In addition to the credit risk management implications of this study (the parameterization of pricing and portfolio management models), the results are useful for quantifying EAD risk for banks qualifying for the Advanced IRB approach under the regulatory framework of the Basel II accord.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133412490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dutta, Jackson and Le Breton (Econometrica, 2001) initiated the study of strategic candidacy. A voting procedure satisfies candidate stability if no candidate has an incentive to withdraw her candidacy in order to manipulate the voting outcome in her favor. Dutta et al. (2001) show that a single-valued voting procedure satisfying candidate stability and unanimity must be dictatorial if voters have strict preferences and candidates cannot vote. Eraslan and McLennan (JET, 2004) extend this result to a framework that allows weak preferences and multi-valued voting procedures (voting correspondences). They obtain the existence of a serial dictatorship under a stronger version of candidate stability. We show that voting correspondences satisfying strong candidate stability and unanimity are monotonic; that is, if a winning candidate's position is weakly improved in all voters' preference rankings, then the candidate remains a winner. Monotonicity provides a direct link between the standard dictatorship in Dutta et al. (2001) and the serial dictatorship in Eraslan and McLennan (2004). Using this property of voting correspondences, we provide an alternative proof of Eraslan and McLennan's result.
{"title":"Monotonicity and Candidate Stable Voting Correspondences","authors":"Yuelan Chen","doi":"10.2139/ssrn.861544","DOIUrl":"https://doi.org/10.2139/ssrn.861544","url":null,"abstract":"Dutta, Jackson and Le Breton (Econometrica, 2001) initiated the study of strategic candidacy. A voting procedure satisfies candidate stability if no candidate has an incentive to withdraw her candidacy in order to manipulate the voting outcome in her favor. Dutta et al. (2001) show that a single-valued voting procedure satisfying candidate stability and unanimity must be dictatorial if voters have strict preferences and candidates cannot vote. Eraslan and McLennan (JET, 2004) extend this result to a framework that allows weak preferences and multi-valued voting procedures (voting correspondences). They obtain the existence of a serial dictatorship under a stronger version of candidate stability. We show that voting correspondences satisfying strong candidate stability and unanimity are monotonic; that is, if a winning candidate's position is weakly improved in all voters' preference rankings, then the candidate remains a winner. Monotonicity provides a direct link between the standard dictatorship in Dutta et al. (2001) and the serial dictatorship in Eraslan and McLennan (2004). Using this property of voting correspondences, we provide an alternative proof of Eraslan and McLennan's result.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127535089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2007-11-01, DOI: 10.1016/S1058-7497(06)17006-4
G. E. Whittenburg, I. Horowitz, William A. Raabe
{"title":"Determining Innocence in Innocent-Spouse Court Cases Using Logit/Probit Analysis","authors":"G. E. Whittenburg, I. Horowitz, William A. Raabe","doi":"10.1016/S1058-7497(06)17006-4","DOIUrl":"https://doi.org/10.1016/S1058-7497(06)17006-4","url":null,"abstract":"","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117195552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We develop a general procedure to construct pairwise meeting processes characterized by two features. First, in each period the process maximizes the number of matches in the population. Second, over time agents meet everybody else exactly once. We call this type of meeting “absolute strangers.” Our methodological contribution to economics is to offer a simple procedure for constructing a type of decentralized trading environment commonly employed in both theoretical and experimental economics. In particular, we demonstrate how to use the mathematics of Latin Squares to enrich the modeling of matching economies.
{"title":"Bilateral Matching with Latin Squares","authors":"C. Aliprantis, Gabriele Camera, D. Puzzello","doi":"10.2139/ssrn.2541967","DOIUrl":"https://doi.org/10.2139/ssrn.2541967","url":null,"abstract":"We develop a general procedure to construct pairwise meeting processes characterized by two features. First, in each period the process maximizes the number of matches in the population. Second, over time agents meet everybody else exactly once. We call this type of meeting “absolute strangers.” Our methodological contribution to economics is to offer a simple procedure for constructing a type of decentralized trading environment commonly employed in both theoretical and experimental economics. In particular, we demonstrate how to use the mathematics of Latin Squares to enrich the modeling of matching economies.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124816780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
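An "absolute strangers" process of the kind the abstract describes can be constructed concretely. The sketch below uses the classical round-robin "circle method" rather than the paper's own Latin-square construction (the choice of method here is our illustration, not the authors'): for an even population of size n it produces n−1 rounds, each a perfect matching, with every pair of agents meeting exactly once.

```python
# Minimal sketch of an "absolute strangers" meeting process for an even
# population of size n, via the round-robin "circle method" (illustrative;
# the paper's own construction goes through Latin Squares).

def absolute_strangers(n):
    """Return n-1 rounds of pairings; every pair of agents meets exactly once."""
    assert n % 2 == 0, "even population size assumed"
    agents = list(range(n))
    rounds = []
    for _ in range(n - 1):
        # pair the current ordering front-to-back: a perfect matching
        rounds.append([(agents[i], agents[n - 1 - i]) for i in range(n // 2)])
        # keep agents[0] fixed and rotate the remaining agents by one position
        agents = [agents[0], agents[-1]] + agents[1:-1]
    return rounds

schedule = absolute_strangers(6)   # 5 rounds of 3 simultaneous meetings
```

Each round is a maximal matching (feature one) and the union of rounds covers all C(n, 2) pairs exactly once (feature two), matching the two properties the abstract names.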
Generalized autoregressive conditional heteroscedasticity (GARCH)-type models have been successfully used to capture the conditional volatility of macroeconomic and financial time series over the past two decades. However, few diagnostic tests are specifically devised to check the adequacy of symmetric multivariate GARCH specifications. Moreover, most practitioners resort to the popular Ljung-Box test indiscriminately, even though the appropriateness of such a test is questionable. In this paper, we investigate the empirical size and power of four diagnostic tests: the Ling-Li test, the Ljung-Box test, the Box-Pierce test modified by Tse and Tsui, and the runs test. We use Monte Carlo simulation experiments over a wide combination of data generating processes and estimation models of bivariate asymmetric GARCH-type models. In the absence of analytically derived diagnostic tests, our simulation results can serve as guidelines for empirical researchers and practitioners in selecting appropriate diagnostic tests for multivariate asymmetric GARCH models.
{"title":"Size and Power of Diagnostic Tests for Asymmetric Garch-Type Models","authors":"P. Jayasinghe, A. Tsui","doi":"10.2139/SSRN.2462877","DOIUrl":"https://doi.org/10.2139/SSRN.2462877","url":null,"abstract":"Generalized autoregressive conditional heteroscedasticity (GARCH)-type models have been successfully used to capture the conditional volatility of macroeconomic and financial time series over the past two decades. However, few diagnostic tests are specifically devised to check the adequacy of symmetric multivariate GARCH specifications. Moreover, most practitioners resort to the popular Ljung-Box test indiscriminately, even though the appropriateness of such a test is questionable. In this paper, we investigate the empirical size and power of four diagnostic tests: the Ling-Li test, the Ljung-Box test, the Box-Pierce test modified by Tse and Tsui, and the runs test. We use Monte Carlo simulation experiments over a wide combination of data generating processes and estimation models of bivariate asymmetric GARCH-type models. In the absence of analytically derived diagnostic tests, our simulation results can serve as guidelines for empirical researchers and practitioners in selecting appropriate diagnostic tests for multivariate asymmetric GARCH models.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129778592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
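Of the four diagnostics compared, the Ljung-Box test is the most familiar. A minimal univariate sketch of its Q statistic on a residual series follows (our illustration in pure Python; the paper's setting applies such portmanteau tests to multivariate GARCH residuals):

```python
# Sketch of the univariate Ljung-Box portmanteau statistic
# Q = n(n+2) * sum_{k=1..m} rho_k^2 / (n - k),
# computed on a residual series; under the null of no autocorrelation,
# Q is approximately chi-squared with m degrees of freedom.

def autocorr(x, k):
    """Lag-k sample autocorrelation of the series x."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

def ljung_box(x, m):
    """Ljung-Box Q statistic of x up to lag m."""
    n = len(x)
    return n * (n + 2) * sum(autocorr(x, k) ** 2 / (n - k)
                             for k in range(1, m + 1))

# a strongly alternating series has lag-1 autocorrelation near -1,
# so Q at m = 1 should be far out in the chi-squared(1) tail
resid = [(-1) ** t for t in range(50)]
q = ljung_box(resid, 1)
```

For GARCH diagnostics the statistic is typically applied to standardized residuals and their squares; the paper's point is that this routine usage is not automatically appropriate for multivariate asymmetric specifications.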
This study proposes the estimation of Multinomial Probit models using Mendell-Elston's approximation to the cumulative multivariate normal for the computation of choice probabilities. The accuracy of this numerical approximation in computing probabilities is compared with other procedures used in existing calibration programs. Finally, the proposed estimation procedure is tested on simulated choice data.
{"title":"The Estimation of Multinomial Probit Models: A New Calibration Algorithm","authors":"W. Kamakura","doi":"10.1287/trsc.23.4.253","DOIUrl":"https://doi.org/10.1287/trsc.23.4.253","url":null,"abstract":"This study proposes the estimation of Multinomial Probit models using Mendell-Elston's approximation to the cumulative multivariate normal for the computation of choice probabilities. The accuracy of this numerical approximation in computing probabilities is compared with other procedures used in existing calibration programs. Finally, the proposed estimation procedure is tested on simulated choice data.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"187 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132568075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Global environmental risks such as climate change and rising sea levels are low-probability events with widespread and possibly irreversible consequences. These are fundamentally new risks that are not well understood. Learning through experimentation is out of the question because these risks are effectively irreversible on a time-scale that matters. As a result, classical theories that rely on expected utility (see Utility theory) may not work well because they underestimate low-probability events, as discussed below. The need to make global environmental decisions calls for a systematic analysis of choices involving low-probability events with major irreversible consequences. The topic is of current importance but has been neglected in the literature on choice under uncertainty.
{"title":"Catastrophic Risk","authors":"G. Chichilnisky","doi":"10.2139/ssrn.1375632","DOIUrl":"https://doi.org/10.2139/ssrn.1375632","url":null,"abstract":"Global environmental risks such as climate change and rising sea levels are low-probability events with widespread and possibly irreversible consequences. These are fundamentally new risks that are not well understood. Learning through experimentation is out of the question because these risks are effectively irreversible on a time-scale that matters. As a result, classical theories that rely on expected utility (see Utility theory) may not work well because they underestimate low-probability events, as discussed below. The need to make global environmental decisions calls for a systematic analysis of choices involving low-probability events with major irreversible consequences. The topic is of current importance but has been neglected in the literature on choice under uncertainty.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122705722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Coppedge, J. Gerring, Staffan I. Lindberg, Svend-Erik Skaaning, Jan Teorell, Frida Andersson, Kyle L. Marquardt, Valeriya Mechkova, Farhad Miri, Daniel Pemstein, Josefine Pernes, N. Stepanova, Eitan Tzelgov, Yi-ting Wang
Part I sets forth the V-Dem conceptual scheme. Part II discusses the process of data collection. Part III describes the measurement model along with efforts to identify and correct errors.
{"title":"V-Dem Methodology V6","authors":"M. Coppedge, J. Gerring, Staffan I. Lindberg, Svend-Erik Skaaning, Jan Teorell, Frida Andersson, Kyle L. Marquardt, Valeriya Mechkova, Farhad Miri, Daniel Pemstein, Josefine Pernes, N. Stepanova, Eitan Tzelgov, Yi-ting Wang","doi":"10.2139/ssrn.2951040","DOIUrl":"https://doi.org/10.2139/ssrn.2951040","url":null,"abstract":"Part I sets forth the V-Dem conceptual scheme. Part II discusses the process of data collection. Part III describes the measurement model along with efforts to identify and correct errors.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127437864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}