Index Option Returns and Generalized Entropy Bounds
Yan Liu. ERN: Nonparametric Methods (Topic), 2019-09-04. DOI: 10.2139/ssrn.2149265.
I develop a continuum of new nonparametric bounds. They stem from the solution of an optimization problem that is complementary to the Hansen and Jagannathan (1991) approach, and they are shown to complete the universe of nonparametric bounds the literature has discovered so far. Through the lens of these bounds, I estimate rare-event distributions using index option returns. Standard disaster models and their perturbations are shown to be unable to meet the bounds implied by simple static option trading strategies. My results suggest that more sophisticated disaster models are needed to reconcile theory with the index option data.

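As background for the bound this paper complements: the Hansen-Jagannathan (1991) volatility bound can be computed directly from a panel of excess returns. The sketch below is a generic textbook version under the excess-return formulation (the function name and default SDF mean are illustrative), not the paper's generalized entropy bounds.

```python
import numpy as np

def hj_volatility_bound(excess_returns, mean_m=0.998):
    """Hansen-Jagannathan lower bound on SDF volatility.

    For excess returns Re with pricing restriction E[m Re] = 0, any valid
    stochastic discount factor m with E[m] = mean_m must satisfy
        sigma(m) >= mean_m * sqrt(mu' Sigma^{-1} mu),
    where mu = E[Re] and Sigma = Cov(Re) (the maximal Sharpe ratio).
    """
    re = np.asarray(excess_returns, dtype=float)   # T x N return matrix
    mu = re.mean(axis=0)
    sigma = np.atleast_2d(np.cov(re, rowvar=False))
    # squared maximal Sharpe ratio: mu' Sigma^{-1} mu
    sharpe_sq = mu @ np.linalg.solve(sigma, mu)
    return mean_m * np.sqrt(sharpe_sq)
```

With a single asset the bound reduces to mean_m times the absolute Sharpe ratio of that asset.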
Bayesian Nonparametric Clustering as a Community Detection Problem
S. Tonellato. ERN: Nonparametric Methods (Topic), 2019-07-16. DOI: 10.2139/ssrn.3424529.
It is well known that a wide class of Bayesian nonparametric priors leads to a representation of the distribution of the observable variables as a mixture density with an infinite number of components, and that such a representation induces a clustering structure in the observations. However, cluster identification is not straightforward a posteriori, and some post-processing is usually required. In order to circumvent label switching, pairwise posterior similarity has been introduced and used either to apply classical clustering algorithms or to estimate the underlying partition by minimising a suitable loss function. This paper proposes to map observations onto a weighted undirected graph, where each node represents a sample item and edge weights are given by the posterior pairwise similarities. It is shown how, after building a particular random walk on such a graph, it is possible to apply a community detection algorithm, known as the map equation method, by optimising the description length of the partition. A relevant feature of this method is that it allows for both the quantification of the posterior uncertainty of the classification and the selection of variables to be used for classification purposes.

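The posterior pairwise similarities that define the edge weights are typically computed from MCMC cluster allocations: the (i, j) weight is the fraction of posterior draws in which items i and j share a cluster. A minimal sketch (function name and input layout are illustrative; the map equation step itself would require a community detection library such as Infomap):

```python
import numpy as np

def posterior_similarity(labels):
    """Pairwise posterior similarity matrix from MCMC cluster allocations.

    labels: S x n array, where labels[s, i] is the cluster label of item i
    in posterior draw s. Returns the n x n matrix whose (i, j) entry is the
    fraction of draws allocating items i and j to the same cluster.
    """
    labels = np.asarray(labels)
    S, n = labels.shape
    sim = np.zeros((n, n))
    for draw in labels:
        # indicator matrix: 1 where two items share a cluster in this draw
        sim += (draw[:, None] == draw[None, :])
    return sim / S
```

The off-diagonal entries of this matrix (which is invariant to label switching) can then serve directly as the weighted adjacency matrix of the undirected graph on which the community detection algorithm runs.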
Bayesian Nonparametric Graphical Models for Time-Varying Parameters VAR
Matteo Iacopini, L. Rossini. ERN: Nonparametric Methods (Topic), 2019-06-03. DOI: 10.2139/ssrn.3400078.
Over the last decade, big data have poured into econometrics, demanding new statistical methods for analysing high-dimensional data and complex non-linear relationships. A common approach to dimensionality issues relies on static graphical structures for extracting the most significant dependence interrelationships between the variables of interest. Recently, Bayesian nonparametric techniques have become popular for modelling complex phenomena in a flexible and efficient manner, but only a few attempts have been made in econometrics. In this paper, we provide an innovative Bayesian nonparametric (BNP) time-varying graphical framework for making inference in high-dimensional time series. We include a Bayesian nonparametric dependent prior specification on the matrix of coefficients and the covariance matrix by means of a time-series DPP, as in Nieto-Barajas et al. (2012). Following Billio et al. (2019), our hierarchical prior overcomes over-parametrization and over-fitting issues by clustering the vector autoregressive (VAR) coefficients into groups and by shrinking the coefficients of each group toward a common location. Our BNP time-varying VAR model is based on a spike-and-slab construction coupled with a dependent Dirichlet process prior (DPP) and allows us to: (i) infer time-varying Granger causality networks from time series; (ii) flexibly model and cluster non-zero time-varying coefficients; (iii) accommodate potential non-linearities. We assess the performance of the model on a well-known macroeconomic dataset, and we check the robustness of the method by comparing two alternative specifications, with Dirac and diffuse spike prior distributions.

(Non)-Parametric Regressions: Applications to Local Stochastic Volatility Models
P. Henry-Labordère. ERN: Nonparametric Methods (Topic), 2019-04-19. DOI: 10.2139/ssrn.3374875.
In this short paper, we review various (non-)parametric regression methods, mainly k-nearest neighbors, Nadaraya-Watson, LP(p) estimators, spline regression, and random forests. They are then compared when calibrating local stochastic volatility models using the particle method.

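Of the estimators reviewed, the Nadaraya-Watson regressor is the simplest to sketch: a locally weighted average of the responses, with kernel weights centered at the query point. A minimal Gaussian-kernel version (the function name and fixed bandwidth are illustrative; bandwidth selection is a separate problem):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Estimates m(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h),
    i.e. a kernel-weighted average of the training responses.
    """
    x_train = np.asarray(x_train, float)
    y_train = np.asarray(y_train, float)
    x_query = np.atleast_1d(np.asarray(x_query, float))
    u = (x_query[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)            # Gaussian kernel (unnormalised)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)
```

In the calibration context the conditioning variable would be the spot (or particle) value and the response the quantity whose conditional expectation enters the local volatility function.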
Permutation Entropy and Information Recovery in Nonlinear Dynamic Economic Time Series
Miguel Henry, G. Judge. ERN: Nonparametric Methods (Topic), 2019-03-12. DOI: 10.3390/ECONOMETRICS7010010.
The focus of this paper is an information-theoretic, symbolic logic approach to extracting information from complex economic systems and unlocking its dynamic content. Permutation entropy (PE) is used to capture the permutation patterns (ordinal relations) among the individual values of a given time series, to obtain a probability distribution of the accessible patterns, and to quantify the degree of complexity of an economic behavior system. Ordinal patterns are used to describe the intrinsic patterns hidden in the dynamics of the economic system. Empirical applications involving the Dow Jones Industrial Average are presented to indicate the information-recovery value and the applicability of the PE method. The results demonstrate the ability of the PE method to detect the extent of complexity (irregularity) and to discriminate and classify admissible and forbidden states.

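The Bandt-Pompe permutation entropy described above can be sketched in a few lines: map each length-d window to its ordinal pattern, then compute the Shannon entropy of the pattern frequencies. The embedding order and normalisation below are common illustrative choices, not necessarily the paper's settings.

```python
import numpy as np
from math import log, factorial

def permutation_entropy(series, order=3, normalize=True):
    """Permutation entropy of a 1-d time series (Bandt-Pompe).

    Each length-`order` window is mapped to the ordinal pattern given by
    the argsort of its values; the entropy is the Shannon entropy of the
    empirical pattern distribution. Patterns that never occur ("forbidden"
    states) simply contribute no probability mass.
    """
    x = np.asarray(series, float)
    n = len(x) - order + 1
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), float) / n
    h = -np.sum(probs * np.log(probs))
    if normalize:
        h /= log(factorial(order))   # max entropy: all d! patterns equally likely
    return h
```

A strictly monotone series has a single accessible pattern and entropy 0; a fully irregular series approaches 1 after normalisation.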
Estimation of a Nonparametric Model for Bond Prices from Cross-Section and Time Series Information
B. Koo, D. La Vecchia, O. Linton. ERN: Nonparametric Methods (Topic), 2019-02-25. DOI: 10.2139/ssrn.3341344.
We develop estimation methodology for an additive nonparametric panel model that is suitable for capturing the pricing of coupon-paying government bonds followed over many time periods. We use our model to estimate the discount function and yield curve of nominally riskless government bonds. The novelty of our approach is the combination of two different techniques: cross-sectional nonparametric methods and kernel estimation for time-varying dynamics in the time series context. The resulting estimator is used for predicting individual bond prices given the full schedule of their future payments. In addition, it is able to capture the yield curve shapes and dynamics commonly observed in the fixed income markets. We establish the consistency, the rate of convergence, and the asymptotic normality of the proposed estimator. A Monte Carlo exercise illustrates the good performance of the method under different scenarios. We apply our methodology to the daily CRSP bond market dataset and compare our method with the popular Diebold and Li (2006) approach.

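The role of the discount function in this setting can be illustrated with a minimal sketch: given any estimate d(t) of the discount function, a coupon bond's price is the discounted sum of its scheduled payments. The flat continuously compounded curve in the example is purely illustrative, not the paper's nonparametric estimator.

```python
import numpy as np

def bond_price(payment_times, payment_amounts, discount_fn):
    """Price of a coupon bond as the discounted sum of scheduled payments.

    discount_fn maps time-to-payment (in years) to a discount factor d(t),
    with d(0) = 1; in the paper d is estimated nonparametrically from the
    cross-section and time series of observed bond prices.
    """
    t = np.asarray(payment_times, float)
    c = np.asarray(payment_amounts, float)
    return float(np.sum(c * discount_fn(t)))

# illustrative flat 3% continuously compounded discount curve
flat_curve = lambda t: np.exp(-0.03 * t)

# 2-year bond: 2.0 coupon after year 1, coupon plus 100 face at year 2
price = bond_price([1.0, 2.0], [2.0, 102.0], flat_curve)
```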
Nonseparability Without Monotonicity: The Counterfactual Distribution Estimator for Causal Inference
Nir Billfeld, Moshe Kim. ERN: Nonparametric Methods (Topic), 2019-02-13. DOI: 10.2139/ssrn.3343438.
A nonparametric identification strategy is employed to capture causal relationships without imposing any variant of the monotonicity assumed in the nonseparable nonlinear error model literature. This matters because monotonicity, when applied to the instrumental variables, limits their availability, and, when applied to the unobservables, can hardly be justified in the non-scalar case. Moreover, in cases where monotonicity is not satisfied, monotonicity-based estimators can be severely biased, as shown in comparative Monte Carlo simulations. The key idea in the proposed identification and estimation strategy is to uncover the counterfactual distribution of the dependent variable, which is not directly observed in the data. We offer a two-step M-estimator based on a resolution-dependent reproducing symmetric kernel density estimator rather than on the bandwidth-dependent classical kernel, making it less sensitive to bandwidth choice. Additionally, the average marginal effect of the endogenous covariate on the outcome variable is identified directly from the noisy data, which obviates additional estimation steps and thereby avoids potential error accumulation. Asymptotic properties of the counterfactual M-estimator are established.

Inference for Local Distributions at High Sampling Frequencies: A Bootstrap Approach
Ulrich Hounyo, R. T. Varneskov. ERN: Nonparametric Methods (Topic), 2018-10-11. DOI: 10.2139/ssrn.3285701.
We study inference for the local innovations of Ito semimartingales. Specifically, we construct a resampling procedure for the empirical CDF of high-frequency innovations that have been standardized using a nonparametric estimate of their stochastic scale (volatility) and truncated to remove the effect of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodates issues related to the stochastic scale and jumps and accounts for a special block-wise dependence structure induced by sampling errors. We show that the LDWB replicates first- and second-order limit theory from the usual empirical process and the stochastic scale estimate, respectively, in addition to an asymptotic bias. Moreover, we design the LDWB to be sufficiently general to establish asymptotic equivalence between it and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory. Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite-sample performance of CLT- and LDWB-aided local Gaussianity tests is assessed in a simulation study and an empirical application. Whereas the CLT test is oversized, even in large samples, the size of the LDWB tests is accurate, even in small samples. The empirical analysis verifies this pattern and provides new insights into the fine-scale distributional properties of innovations to equity indices, commodities, and exchange rates.

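A rough sketch of the kind of pre-processing such local Gaussianity tests rely on: standardize block-local returns by a jump-robust volatility estimate, then truncate "large" (jump-like) standardized moves. The block size, the bipower-variation estimator, and the truncation constant below are illustrative choices, not the paper's procedure.

```python
import numpy as np

def standardized_innovations(returns, block=100, dt=1 / 390, trunc_c=4.0):
    """Standardize high-frequency returns by a block-local volatility
    estimate and drop jump-like observations (illustrative sketch).

    Within each block, the local variance is estimated by bipower
    variation, which is robust to jumps; standardized returns exceeding
    trunc_c in absolute value are discarded.
    """
    r = np.asarray(returns, float)
    out = []
    for start in range(0, len(r) - block + 1, block):
        blk = r[start:start + block]
        # bipower variation: (pi/2) * mean(|r_i||r_{i-1}|) estimates sigma^2 * dt
        bv = (np.pi / 2) * np.mean(np.abs(blk[1:]) * np.abs(blk[:-1]))
        sigma = np.sqrt(bv / dt)                  # local spot volatility
        z = blk / (sigma * np.sqrt(dt))           # standardized innovations
        out.append(z[np.abs(z) <= trunc_c])       # truncate large moves
    return np.concatenate(out)
```

Under a diffusive null the surviving standardized innovations should be approximately standard normal, which is what the CDF-based tests examine.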
Nonparametric Estimation in Panel Data Models with Heterogeneity and Time-Varyingness
Fei Liu, Jiti Gao, Yanrong Yang. ERN: Nonparametric Methods (Topic), 2018-07-15. DOI: 10.2139/ssrn.3214046.
Panel data subject to heterogeneity in both the cross-sectional and time-serial directions are commonly encountered across the social and scientific fields. To address this problem, we propose a class of time-varying panel data models with individual-specific regression coefficients and interactive common factors. The result is a model capable of describing heterogeneous panel data in terms of time-varyingness in the time-serial direction and individual-specific coefficients across cross-sections. Another striking generality of the proposed model is its compatibility with endogeneity in the sense of interactive common factors. Model estimation is achieved through a novel duple least-squares (DLS) iteration algorithm, which applies two least-squares estimations recursively. Its flexibility is illustrated through applications to cases with either exogenous or endogenous common factors. The established asymptotic theory for the DLS estimators benefits practitioners by demonstrating how iteration gradually eliminates estimation bias along the iterative steps. We further show that our model and estimation method perform well on simulated data in various scenarios as well as on an OECD healthcare expenditure dataset. The time variation and heterogeneity across cross-sections are confirmed by our analysis.

Measuring Treatment Effects with Big N Panels
C. Adams. ERN: Nonparametric Methods (Topic), 2018-06-29. DOI: 10.2139/ssrn.3205224.
This paper considers the problem of estimating treatment effects when there is a large number of potential control units. The paper introduces to the economics literature the idea of polytope volume minimization as a method for estimating a factor model when observed outcomes are assumed to be a convex combination of the unobserved factor values. The paper shows that this method is particularly well suited to the case where there are a large number of cross-sectional units. It presents identification results for both exact and approximate factor models that are new to the literature, along with simulations that compare standard methods such as difference-in-differences and synthetic controls with the proposed approach. The estimator is used to estimate the effect of reunification on German growth rates.

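For reference, the synthetic control benchmark mentioned above fits convex-combination weights over the donor pool of control units. A minimal sketch using the Frank-Wolfe algorithm to solve the simplex-constrained least-squares problem (an illustrative solver choice, not the paper's polytope volume minimization estimator):

```python
import numpy as np

def synthetic_control_weights(y_treated, y_controls, iters=5000):
    """Convex-combination weights for a synthetic control.

    Solves min_w ||y_treated - y_controls @ w||^2 subject to w >= 0 and
    sum(w) = 1, via Frank-Wolfe: each step moves toward the simplex
    vertex with the steepest descent direction, so the iterates stay
    feasible (non-negative, summing to one) throughout.
    """
    y = np.asarray(y_treated, float)      # T pre-treatment outcomes
    X = np.asarray(y_controls, float)     # T x J donor-pool outcomes
    J = X.shape[1]
    w = np.full(J, 1.0 / J)               # start at the simplex barycenter
    for k in range(iters):
        grad = 2.0 * X.T @ (X @ w - y)
        s = np.zeros(J)
        s[np.argmin(grad)] = 1.0          # best vertex of the simplex
        w += (2.0 / (k + 2.0)) * (s - w)  # standard diminishing step size
    return w
```

When the treated unit's pre-treatment path lies exactly in the convex hull of the donors, the recovered weights approach the true convex combination.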