In this paper, we re-examine the asymptotic properties derived by Perron and Zhu (2005) for time series models with a break in trend. We prove that, for the model with a joint broken trend and stationary errors, their results do not hold when the break magnitude is fixed. Furthermore, we show that the "shrinking shift" asymptotic framework is necessary to establish these results. Simulation results illustrate that the finite-sample approximation based on the proposed asymptotic theory works well.
{"title":"A note on asymptotic properties of time series models with a trend break","authors":"Daisuke Yamazaki","doi":"10.2139/ssrn.3917796","DOIUrl":"https://doi.org/10.2139/ssrn.3917796","url":null,"abstract":"In this paper, we re-analyze Perron and Zhu's (2005) asymptotic properties of time series models with a break in trend. We prove that, for the model with a joint broken trend with stationary errors, their results do not hold when the break magnitude is fixed. Furthermore, we show that the \"shrinking shift'' asymptotic framework is necessary to establish these results. Simulation results illustrate that the finite sample approximation based on the proposed asymptotic theory works well.","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"127 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128024447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I establish the equivalence between the two-way fixed effects (TWFE) estimator and an estimator obtained from a pooled ordinary least squares regression that includes unit-specific time averages and time-period specific cross-sectional averages, which I call the two-way Mundlak (TWM) regression. This equivalence furthers our understanding of the anatomy of TWFE, and has several applications. The equivalence between TWFE and TWM implies that various estimators used for intervention analysis – with a common entry time into treatment or staggered entry, with or without covariates – can be computed using TWFE or pooled OLS regressions that control for time-constant treatment intensities, covariates, and interactions between them. The approach allows considerable heterogeneity in treatment effects across treatment intensity, calendar time, and covariates. The equivalence implies that standard strategies for heterogeneous trends are available to relax the common trends assumption. Further, the two-way Mundlak regression is easily adapted to nonlinear models such as exponential models and logit and probit models.
{"title":"Two-Way Fixed Effects, the Two-Way Mundlak Regression, and Difference-in-Differences Estimators","authors":"J. Wooldridge","doi":"10.2139/ssrn.3906345","DOIUrl":"https://doi.org/10.2139/ssrn.3906345","url":null,"abstract":"I establish the equivalence between the two-way fixed effects (TWFE) estimator and an estimator obtained from a pooled ordinary least squares regression that includes unit-specific time averages and time-period specific cross-sectional averages, which I call the two-way Mundlak (TWM) regression. This equivalence furthers our understanding of the anatomy of TWFE, and has several applications. The equivalence between TWFE and TWM implies that various estimators used for intervention analysis – with a common entry time into treatment or staggered entry, with or without covariates – can be computed using TWFE or pooled OLS regressions that control for time-constant treatment intensities, covariates, and interactions between them. The approach allows considerable heterogeneity in treatment effects across treatment intensity, calendar time, and covariates. The equivalence implies that standard strategies for heterogeneous trends are available to relax the common trends assumption. Further, the two-way Mundlak regression is easily adapted to nonlinear models such as exponential models and logit and probit models.","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122445627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper examines regression-adjusted estimation and inference of unconditional quantile treatment effects (QTEs) under covariate-adaptive randomizations (CARs). Datasets from field experiments usually contain extra baseline covariates in addition to the strata indicators. We propose to incorporate these extra covariates via auxiliary regressions in the estimation and inference of unconditional QTEs. The auxiliary regression may be estimated parametrically, nonparametrically, or via regularization when the data are high-dimensional. Even when the auxiliary regression is misspecified, the proposed bootstrap inferential procedure still achieves the nominal rejection probability in the limit under the null for various CARs. When the auxiliary regression is correctly specified, the regression-adjusted estimator achieves the minimum asymptotic variance. We also derive the optimal pseudo-true values for the potentially misspecified parametric model that minimize the asymptotic variance of the corresponding QTE estimator.
{"title":"Regression-Adjusted Estimation of Quantile Treatment Effects under Covariate-Adaptive Randomizations","authors":"Liang Jiang, P. Phillips, Yubo Tao, Yichong Zhang","doi":"10.2139/ssrn.3873937","DOIUrl":"https://doi.org/10.2139/ssrn.3873937","url":null,"abstract":"This paper examines regression-adjusted estimation and inference of unconditional quantile treatment effects (QTEs) under covariate-adaptive randomizations (CARs). Datasets from field experiments usually contain extra baseline covariates in addition to the strata indicators. We propose to incorporate these extra covariates via auxiliary regressions in the estimation and inference of unconditional QTEs. The auxiliary regression may be estimated parametrically, nonparametrically, or via regularization when the data are high-dimensional. Even when the auxiliary regression is misspecified, the proposed bootstrap inferential procedure still achieves the nominal rejection probability in the limit under the null for various CARs. When the auxiliary regression is correctly specified, the regression-adjusted estimator achieves the minimum asymptotic variance. We also derive the optimal pseudo true values for the potentially misspecified parametric model that minimize the asymptotic variance of the corresponding QTE estimator.","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122892184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a novel structural estimation framework in which we train a surrogate of an economic model with deep neural networks. Our methodology alleviates the curse of dimensionality and speeds up the evaluation and parameter estimation by orders of magnitude, which significantly enhances one's ability to conduct analyses that require frequent parameter re-estimation. As an empirical application, we compare two popular option pricing models (the Heston and the Bates model with double-exponential jumps) against a non-parametric random forest model. We document that: a) the Bates model produces better out-of-sample pricing on average, but both structural models fail to outperform random forest for large areas of the volatility surface; b) random forest is more competitive at short horizons (e.g., 1 day), for short-dated options (with less than 7 days to maturity), and on days with poor liquidity; c) both structural models outperform random forest in out-of-sample delta hedging; d) the Heston model's relative performance has deteriorated significantly since the 2008 financial crisis.
{"title":"Deep Structural Estimation: With an Application to Option Pricing","authors":"Hui Chen, Antoine Didisheim, S. Scheidegger","doi":"10.2139/ssrn.3782722","DOIUrl":"https://doi.org/10.2139/ssrn.3782722","url":null,"abstract":"We propose a novel structural estimation framework in which we train a surrogate of an economic model with deep neural networks. Our methodology alleviates the curse of dimensionality and speeds up the evaluation and parameter estimation by orders of magnitudes, which significantly enhances one's ability to conduct analyses that require frequent parameter re-estimation. As an empirical application, we compare two popular option pricing models (the Heston and the Bates model with double-exponential jumps) against a non-parametric random forest model. We document that: a) the Bates model produces better out-of-sample pricing on average, but both structural models fail to outperform random forest for large areas of the volatility surface; b) random forest is more competitive at short horizons (e.g., 1-day), for short-dated options (with less than 7 days to maturity), and on days with poor liquidity; c) both structural models outperform random forest in out-of-sample delta hedging; d) the Heston model's relative performance has deteriorated significantly after the 2008 financial crisis.","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128306989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimators of dynamic stochastic general equilibrium (DSGE) model parameters, as well as impulse response functions, can be wildly inaccurate when the data used in the estimation process are de-trended, even if the data are de-trended in the same manner as the model. However, little is known about inference on DSGE parameters and impulse response functions when raw data are used. This may be attributable to difficulties in applying the law of large numbers and the central limit theorem to sample means, or to functions of sample means, when the data are not derived from stationary processes. The good news for DSGE models is that the equilibrium conditions, represented by the first-order conditions of agents' problems used to build the impulse response functions, are usually written as a non-linear combination of stationary variables at the true value of the parameters. In this study, we exploited that property to provide conditions under which the generalized method of moments (GMM), indirect inference (II), and minimum chi-square estimators are consistent and asymptotically Gaussian. We also provided procedures and conditions under which the GMM bootstrap, the indirect inference bootstrap, and the minimum chi-square bootstrap for DSGE model parameters are valid. As an empirical application, we used U.S. data to assess the impulse response functions to a money supply shock, a government spending shock, and a productivity shock in a DSGE framework in which the Federal Reserve sets the policy rate that controls the raw value of nominal gross domestic product. This empirical analysis would have been very difficult without our theoretical results.
{"title":"Estimations and Inferences of Dynamic Stochastic General Equilibrium Models Using Raw Data","authors":"Charles Olivier Mao Takongmo","doi":"10.2139/ssrn.3780223","DOIUrl":"https://doi.org/10.2139/ssrn.3780223","url":null,"abstract":"Estimators of Dynamic Stochastic General Equilibrium (DSGE) Model parameters, as well as impulse response functions, can be wildly inaccurate when data used in the estimation process are de-trended; even if the data are de-trended in the same manner, the model is de-trended. However, little is known about inferences of DSGE parameters and impulse response functions when raw data are used. This may be attributable to difficulties in applying the law of large numbers and the central limit theorem on sample means or functions of sample means when data are not derived from stationary processes. The good news for DSGE models is that the equilibrium conditions, represented by the first-order conditions of agent problems used to build the impulse response functions, are usually written as a non-linear combination of stationary variables at the true value of the parameters. In this study, we exploited that property to suggest the conditions under which the generalized method of moments (GMM), the indirect inference (II) estimators, and the minimum chi-square estimators are consistent and asymptotically Gaussian distributions. We also suggested procedures and conditions under which the GMM bootstrap, the indirect inference bootstrap, and the minimum chi-square bootstrap for DSGE model parameters are valid. For empirical application, we used U.S. data to assess the impulse response functions -- due respectively to money supply shock, government spending shock, and productivity shock -- in a DSGE framework in which the Federal Reserve Bank set the policy rate that controlled the raw value of the nominal gross domestic product. This empirical analysis would have been very difficult without our theoretical results.<br>","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134525266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heteroskedasticity- and autocorrelation-robust (HAR) inference in time series regression typically involves kernel estimation of the long-run variance. Conventional wisdom holds that, for a given kernel, the choice of truncation parameter trades off a test's null rejection rate and power, and that this tradeoff differs across kernels. We formalize this intuition: using higher-order expansions, we provide a unified size-power frontier for both kernel and weighted orthonormal series tests using nonstandard "fixed-b" critical values. We also provide a frontier for the subset of these tests for which the fixed-b distribution is t or F. These frontiers are respectively achieved by the QS kernel and the equal-weighted periodogram. The frontiers have simple closed-form expressions, which show that the price paid for restricting attention to tests with t and F critical values is small. The frontiers are derived for the Gaussian multivariate location model, but simulations suggest the qualitative findings extend to stochastic regressors.
{"title":"The Size-Power Tradeoff in HAR Inference","authors":"Eben Lazarus, D. Lewis, J. Stock","doi":"10.2139/ssrn.3436372","DOIUrl":"https://doi.org/10.2139/ssrn.3436372","url":null,"abstract":"Heteroskedasticity‐ and autocorrelation‐robust (HAR) inference in time series regression typically involves kernel estimation of the long‐run variance. Conventional wisdom holds that, for a given kernel, the choice of truncation parameter trades off a test's null rejection rate and power, and that this tradeoff differs across kernels. We formalize this intuition: using higher‐order expansions, we provide a unified size‐power frontier for both kernel and weighted orthonormal series tests using nonstandard “fixed‐\u0000 b” critical values. We also provide a frontier for the subset of these tests for which the fixed‐\u0000 b distribution is \u0000 t or \u0000 F. These frontiers are respectively achieved by the QS kernel and equal‐weighted periodogram. The frontiers have simple closed‐form expressions, which show that the price paid for restricting attention to tests with \u0000 t and \u0000 F critical values is small. The frontiers are derived for the Gaussian multivariate location model, but simulations suggest the qualitative findings extend to stochastic regressors.\u0000","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114879018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Probit models with endogenous regressors are commonly used in economics and other social sciences, yet the robustness properties of parametric estimators in these models have not been formally studied. In this paper, we derive the influence functions of the endogenous probit model's classical estimators (the maximum likelihood and the two-step estimator) and prove their non-robustness to small but harmful deviations from the distributional assumptions. We propose a procedure to obtain a robust alternative estimator, prove its asymptotic normality, and provide its asymptotic variance. A simple robust test for endogeneity is also constructed. We compare the performance of the robust and classical estimators in Monte Carlo simulations under different types of contamination scenarios. The use of our estimator is illustrated in several empirical applications.
{"title":"Robust Estimation of Probit Models with Endogeneity","authors":"A. Naghi, Máté Váradi, Mikhail Zhelonkin","doi":"10.2139/ssrn.3766318","DOIUrl":"https://doi.org/10.2139/ssrn.3766318","url":null,"abstract":"Probit models with endogenous regressors are commonly used models in economics and other social sciences. Yet, the robustness properties of parametric estimators in these models have not been formally studied. In this paper, we derive the influence functions of the endogenous probit model’s classical estimators (the maximum likelihood and the two-step estimator) and prove their non-robustness to small but harmful deviations from distributional assumptions. We propose a procedure to obtain a robust alternative estimator, prove its asymptotic normality and provide its asymptotic variance. A simple robust test for endogeneity is also constructed. We compare the performance of the robust and classical estimators in Monte Carlo simulations with different types of contamination scenarios. The use of our estimator is illustrated in several empirical applications.","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115401598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study estimation of the conditional tail average treatment effect (CTATE), defined as the difference between conditional tail expectations of potential outcomes. The CTATE can capture heterogeneity and deliver aggregated local information on treatment effects over different quantile levels, and it is closely related to the notion of second-order stochastic dominance and the Lorenz curve. These properties render it a valuable tool for policy evaluation. We consider a semiparametric treatment effect framework under endogeneity for CTATE estimation, using a newly introduced class of consistent loss functions defined jointly for the conditional tail expectation and quantile. We establish the asymptotic theory of our proposed CTATE estimator and provide an efficient algorithm for its implementation. We then apply the method to evaluate the effects of participating in programs of the Job Training Partnership Act in the US.
{"title":"Estimations of the Conditional Tail Average Treatment Effect","authors":"Le‐Yu Chen, Yu-Min Yen","doi":"10.2139/ssrn.3740489","DOIUrl":"https://doi.org/10.2139/ssrn.3740489","url":null,"abstract":"We study estimation of the conditional tail average treatment effect (CTATE), defined as a difference between conditional tail expectations of potential outcomes. The CTATE can capture heterogeneity and deliver aggregated local information of treatment effects over different quantile levels, and is closely related to the notion of second order stochastic dominance and the Lorenz curve. These properties render it a valuable tool for policy evaluations. We consider a semiparametric treatment effect framework under endogeneity for the CTATE estimation using a newly introduced class of consistent loss functions jointly for the conditioanl tail expectation and quantile. We establish asymptotic theory of our proposed CTATE estimator and provide an efficient algorithm for its implementation. We then apply the method to the evaluation of effects from participating in programs of the Job Training Partnership Act in the US.","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132867665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper constructs a new estimator for large covariance matrices by drawing a bridge between the classic Stein (1975) estimator in finite samples and recent progress under large-dimensional asymptotics. The estimator keeps the eigenvectors of the sample covariance matrix and applies shrinkage to the inverse sample eigenvalues. The corresponding formula is quadratic: it has two shrinkage targets weighted by quadratic functions of the concentration (that is, matrix dimension divided by sample size). The first target dominates mid-level concentrations and the second one higher levels. This extra degree of freedom enables us to outperform linear shrinkage when optimal shrinkage is not linear (which is the general case). Both of our targets are based on what we term the “Stein shrinker”, a local attraction operator that pulls sample covariance matrix eigenvalues towards their nearest neighbors, but whose force diminishes with distance, like gravitation. We prove that no cubic or higher-order nonlinearities beat quadratic with respect to Frobenius loss under large-dimensional asymptotics. Non-normality and the case where the matrix dimension exceeds the sample size are accommodated. Monte Carlo simulations confirm state-of-the-art performance in terms of accuracy, speed, and scalability.
{"title":"Quadratic Shrinkage for Large Covariance Matrices","authors":"Olivier Ledoit, Michael Wolf","doi":"10.2139/ssrn.3486378","DOIUrl":"https://doi.org/10.2139/ssrn.3486378","url":null,"abstract":"This paper constructs a new estimator for large covariance matrices by drawing a bridge between the classic Stein (1975) estimator in finite samples and recent progress under large-dimensional asymptotics. The estimator keeps the eigenvectors of the sample covariance matrix and applies shrinkage to the inverse sample eigenvalues. The corresponding formula is quadratic: it has two shrinkage targets weighted by quadratic functions of the concentration (that is, matrix dimension divided by sample size). The first target dominates mid-level concentrations and the second one higher levels. This extra degree of freedom enables us to outperform linear shrinkage when optimal shrinkage is not linear (which is the general case). Both of our targets are based on what we term the “Stein shrinker”, a local attraction operator that pulls sample covariance matrix eigenvalues towards their nearest neighbors, but whose force diminishes with distance, like gravitation. We prove that no cubic or higher-order nonlinearities beat quadratic with respect to Frobenius loss under large-dimensional asymptotics. Non-normality and the case where the matrix dimension exceeds the sample size are accommodated. Monte Carlo simulations confirm state-of-the-art performance in terms of accuracy, speed, and scalability.","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"46 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120932014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For linear regression models, we propose and study a multi-step kernel density-based estimator that is adaptive to unknown error distributions. We establish asymptotic normality and almost sure convergence. An efficient EM algorithm is provided to implement the proposed estimator. We also compare its finite sample performance with five other adaptive estimators in an extensive Monte Carlo study of eight error distributions. Our method generally attains high mean-square-error efficiency. An empirical example illustrates the gain in efficiency of the new adaptive method when making statistical inference about the slope parameters in three linear regressions.
{"title":"A Multi-Step Kernel – Based Regression Estimator That Adapts to Error Distributions of Unknown Form","authors":"J. De Gooijer, Hugo Reichardt","doi":"10.2139/ssrn.3532384","DOIUrl":"https://doi.org/10.2139/ssrn.3532384","url":null,"abstract":"For linear regression models, we propose and study a multi-step kernel density-based estimator that is adaptive to unknown error distributions. We establish asymptotic normality and almost sure convergence. An efficient EM algorithm is provided to implement the proposed estimator. We also compare its finite sample performance with five other adaptive estimators in an extensive Monte Carlo study of eight error distributions. Our method generally attains high mean-square-error efficiency. An empirical example illustrates the gain in efficiency of the new adaptive method when making statistical inference about the slope parameters in three linear regressions.","PeriodicalId":413295,"journal":{"name":"ERN: Estimation (Topic)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114295130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}