Alberto Abadie, S. Athey, G. Imbens, J. Wooldridge
Clustered standard errors, with clusters defined by factors such as geography, are widespread in empirical research in economics and many other disciplines. Formally, clustered standard errors adjust for the correlations induced by sampling the outcome variable from a data-generating process with unobserved cluster-level components. However, the standard econometric framework for clustering leaves important questions unanswered: (i) Why do we adjust standard errors for clustering in some ways but not others, e.g., by state but not by gender, and in observational studies, but not in completely randomized experiments? (ii) Why is conventional clustering an “all-or-nothing” adjustment, while within-cluster correlations can be strong or extremely weak? (iii) In what settings does the choice of whether and how to cluster make a difference? We address these and other questions using a novel framework for clustered inference on average treatment effects. In addition to the common sampling component, the new framework incorporates a design component that accounts for the variability induced on the estimator by the treatment assignment mechanism. We show that, when the number of clusters in the sample is a nonnegligible fraction of the number of clusters in the population, conventional cluster standard errors can be severely inflated, and propose new variance estimators that correct for this bias.
{"title":"When Should You Adjust Standard Errors for Clustering?","authors":"Alberto Abadie, S. Athey, G. Imbens, J. Wooldridge","doi":"10.3386/W24003","DOIUrl":"https://doi.org/10.3386/W24003","url":null,"abstract":"\u0000 Clustered standard errors, with clusters defined by factors such as geography, are widespread in empirical research in economics and many other disciplines. Formally, clustered standard errors adjust for the correlations induced by sampling the outcome variable from a data-generating process with unobserved cluster-level components. However, the standard econometric framework for clustering leaves important questions unanswered: (i) Why do we adjust standard errors for clustering in some ways but not others, e.g., by state but not by gender, and in observational studies, but not in completely randomized experiments? (ii) Why is conventional clustering an “all-or-nothing” adjustment, while within-cluster correlations can be strong or extremely weak? (iii) In what settings does the choice of whether and how to cluster make a difference? We address these and other questions using a novel framework for clustered inference on average treatment effects. In addition to the common sampling component, the new framework incorporates a design component that accounts for the variability induced on the estimator by the treatment assignment mechanism. We show that, when the number of clusters in the sample is a nonnegligible fraction of the number of clusters in the population, conventional cluster standard errors can be severely inflated, and propose new variance estimators that correct for this bias.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121875974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract We provide analytical approximations for the law of the solutions to a certain class of scalar McKean–Vlasov stochastic differential equations (MKV-SDEs) with random initial datum. "Propagation of chaos" results ([15]) connect this class of SDEs with the macroscopic limiting behavior of a particle, evolving within a mean-field interaction particle system, as the total number of particles tends to infinity. Here we assume that the mean-field interaction acts only on the drift of each particle, giving rise to an MKV-SDE whose drift coefficient depends on the law of the unknown solution. By perturbing the non-linear forward Kolmogorov equation associated with the MKV-SDE, we perform a two-step approximation procedure that decouples the McKean–Vlasov interaction from the standard dependence on the state variables. The first step yields an expansion for the marginal distribution at a given time, whereas the second yields an expansion for the transition density. Both approximating series are asymptotically convergent in the limit of short times and small noise, with the latter expansion converging at a higher order than the former. Concise numerical tests illustrate the accuracy of the resulting approximation formulas. Because these formulas are expressed in semi-closed form, they can be regarded as a viable alternative to the numerical simulation of the large-particle system, which can be computationally very expensive. Moreover, these results pave the way for extensions of the approach to more general dynamics and to high-dimensional settings.
{"title":"Analytical Approximations of Non-Linear SDEs of McKean-Vlasov Type","authors":"E. Gobet, S. Pagliarani","doi":"10.2139/ssrn.2868660","DOIUrl":"https://doi.org/10.2139/ssrn.2868660","url":null,"abstract":"Abstract We provide analytical approximations for the law of the solutions to a certain class of scalar McKean–Vlasov stochastic differential equations (MKV-SDEs) with random initial datum. “Propagation of chaos“ results ( [15] ) connect this class of SDEs with the macroscopic limiting behavior of a particle, evolving within a mean-field interaction particle system, as the total number of particles tends to infinity. Here we assume the mean-field interaction only acting on the drift of each particle, this giving rise to a MKV-SDE where the drift coefficient depends on the law of the unknown solution. By perturbing the non-linear forward Kolmogorov equation associated to the MKV-SDE, we perform a two-steps approximating procedure that decouples the McKean–Vlasov interaction from the standard dependence on the state-variables. The first step yields an expansion for the marginal distribution at a given time, whereas the second yields an expansion for the transition density. Both the approximating series turn out to be asymptotically convergent in the limit of short times and small noise, the convergence order for the latter expansion being higher than for the former. Concise numerical tests are presented to illustrate the accuracy of the resulting approximation formulas. The latter are expressed in semi-closed form and can be then regarded as a viable alternative to the numerical simulation of the large-particle system, which can be computationally very expensive. Moreover, these results pave the way for further extensions of this approach to more general dynamics and to high-dimensional settings.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127704262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A data-cloning SMC² method is proposed as a general-purpose optimization routine for estimating latent variable models by maximum likelihood. The latent variables are first marginalized out by SMC at any fixed parameter value, and the model parameters are then estimated by density-tempered SMC. The data-cloning step efficiently reduces the Monte Carlo errors inherent in the SMC² algorithm and also addresses the multi-modality present in typical objective functions. The new method has wide applicability and can be massively parallelized to take advantage of typical modern computers.
{"title":"Maximum Likelihood Estimation of Latent Variable Models by SMC with Marginalization and Data Cloning","authors":"J. Duan, Andras Fulop, Yu-Wei Hsieh","doi":"10.2139/ssrn.3043426","DOIUrl":"https://doi.org/10.2139/ssrn.3043426","url":null,"abstract":"A data-cloning SMC² method is proposed as a general purpose optimization routine for estimating latent variable models by maximum likelihood. The latent variables are first marginalized out by SMC at any fixed parameter value, and the model parameters are then estimated by density tempered SMC. The data-cloning step is employed to efficiently reduce Monte Carlo errors inherent in the SMC² algorithm and also to effectively address multi-modality present in typical objective functions. This new method has wide applicability and can be massively parallelized to take advantage of typical computers today.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127171346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider the regression discontinuity (RD) design with a duration outcome that has discrete support. The parameters of policy interest are treatment effects on unconditional (duration effect) and conditional (hazard effect) exiting probabilities at each discrete level. We find that a flexible separability structure of the underlying continuous-time duration process can be exploited to substantially improve the quality of the fully nonparametric estimator. We propose global sieve-based estimators, together with associated marginal and simultaneous inference. Simultaneous inference over discrete levels is nonstandard because the asymptotic variance matrix is singular with unknown rank; this peculiarity arises from the nature of the RD estimand, and we provide solutions. Random censoring and competing risks can also be accommodated in our framework. The standard practice of applying local linear estimators to a sequence of binary outcomes is in general unsatisfactory, which motivates our semi-nonparametric approach. First, the local linear approach provides poor hazard estimates near the end of the observation period because the risk sets in the neighborhood of the cutoff become small. Second, it fits each probability separately and thus does not support joint inference. The estimation and inference methods we advocate in this paper are computationally easy and fast to implement, as illustrated by numerical examples.
{"title":"Estimation and Inference of Regression Discontinuity Design with Ordered or Discrete Duration Outcomes","authors":"Ke-Li Xu","doi":"10.2139/ssrn.2992158","DOIUrl":"https://doi.org/10.2139/ssrn.2992158","url":null,"abstract":"We consider the regression discontinuity (RD) design with the duration outcome which has discrete support. The parameters of policy interest are treatment effects on unconditional (duration effect) and conditional (hazard effect) exiting probabilities for each discrete level. We find that a flexible separability structure of the underlying continuous-time duration process can be exploited to substantially improve the quality of the fully nonparametric estimator. We propose global sieve-based estimators, and associated marginal and simultaneous inference. Simultaneous inference over discrete levels is nonstandard since the asymptotic variance matrix is singular with unknown rank. The peculiarity is delivered by the nature of the RD estimand, and we provide solutions. Random censoring and competing risks can also be allowed in our framework. The standard practice of applying local linear estimators to a sequence of binary outcomes is in general unsatisfactory, which motivates our semi-nonparametric approach. First, it provides poor hazard estimates near the end of the observation period due to small sizes of risk sets (in the neighborhood of the cutoff). Second, it fits each probability separately and thus does not support joint inference. The estimation and inference methods we advocate in this paper are computationally easy and fast to implement, which is illustrated by numerical examples.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116794051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article overviews time-series-cross-section (TSCS) data analysis in the social sciences, a method that has been gaining in popularity since the late 1990s. The paper outlines the pros and cons of the different strategies to model both the time-series and the cross-sectional dimensions of TSCS data. Most importantly, it is argued throughout that one should follow an iterative process when modeling TSCS data. This means using more general models first and then imposing some restrictions on the basis of theoretical insights and in accordance with the actual structure of the data.
{"title":"Comparing Political Units Over Time: An Overview of Time-Series-Cross-Section Analysis","authors":"Phillippe J Scrimger","doi":"10.2139/ssrn.2988020","DOIUrl":"https://doi.org/10.2139/ssrn.2988020","url":null,"abstract":"This article overviews time-series-cross-section (TSCS) data analysis in the social sciences, a method that has been gaining in popularity since the late 1990s. The paper outlines the pros and cons of the different strategies to model both the time-series and the cross-sectional dimensions of TSCS data. Most importantly, it is argued throughout that one should follow an iterative process when modeling TSCS data. This means using more general models first and then imposing some restrictions on the basis of theoretical insights and in accordance with the actual structure of the data.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126542175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Massimiliano Marcellino, Fotis Papailias, G. Mazzi, G. Kapetanios, Dario Buono
This paper provides a primer on the use of big data in macroeconomic nowcasting and early estimation. We discuss: (i) a typology of big data characteristics relevant for macroeconomic nowcasting and early estimates, (ii) methods for feature extraction that convert unstructured big data into usable time series, (iii) econometric methods that could be used for nowcasting with big data, (iv) some empirical nowcasting results for key target variables for four EU countries, and (v) ways to evaluate nowcasts and flash estimates. We conclude with a set of recommendations for assessing the pros and cons of using big data in a specific empirical nowcasting context.
{"title":"Big Data Econometrics: Now Casting and Early Estimates","authors":"Massimiliano Marcellino, Fotis Papailias, G. Mazzi, G. Kapetanios, Dario Buono","doi":"10.2139/ssrn.3206554","DOIUrl":"https://doi.org/10.2139/ssrn.3206554","url":null,"abstract":"This paper aims at providing a primer on the use of big data in macroeconomic nowcasting and early estimation. We discuss: (i) a typology of big data characteristics relevant for macroeconomic nowcasting and early estimates, (ii) methods for features extraction from unstructured big data to usable time series, (iii) econometric methods that could be used for nowcasting with big data, (iv) some empirical nowcasting results for key target variables for four EU countries, and (v) ways to evaluate nowcasts and ash estimates. We conclude by providing a set of recommendations to assess the pros and cons of the use of big data in a specific empirical nowcasting context.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121966052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract This study proposes a model that enables us to investigate the multi-generation and multi-country diffusion processes simultaneously. Many previous studies focus on only one of these dimensions, since it is difficult to integrate both at the same time. Our proposed framework explains both diffusion processes by capturing the common trend of the multi-generation diffusion process and the country-specific heterogeneity. We develop a choice-based diffusion model by decomposing the choice probability of adoption into two components: the first explains individual-country heterogeneity through country-based variables, while the second captures the common trend of the multi-generation diffusion process through generation-based variables. We apply the model to 3G and 4G connections across 25 countries. The empirical results show that individual country-level models are hard to use for most countries because of the lack of data points. Our pooled model outperforms several individual country models on both fitting and forecasting measures. We find that each country's market competitiveness and market price affect the rate of diffusion, and we show that the random effects of 3G and 4G are positively correlated. The framework delivers accurate predictions even with few data points and provides valuable information for formulating policies on a new generation.
{"title":"A Choice-Based Diffusion Model for Multi-Generation and Multi-Country Data","authors":"H. Lim, D. Jun, Mohsen Hamoudia","doi":"10.2139/ssrn.2933276","DOIUrl":"https://doi.org/10.2139/ssrn.2933276","url":null,"abstract":"Abstract This study proposes a model that enables us to investigate the multi-generation and the multi-country diffusion process simultaneously. Many former studies focus on only one of the dimensions since it is difficult to integrate both dimensions at the same time. Our proposed framework can explain both diffusion processes by capturing the common trend of multi-generation diffusion process and the country-specific heterogeneity. We develop the choice-based diffusion model by decomposing the choice probability of adoption into two components; the first component explains the individual country heterogeneity depending on the country-based variables while the second component captures the common trend of multi-generation diffusion process with the generation-based variables. We apply the model to 3G and 4G connections across 25 countries. Empirical result shows that it is not easy to use individual country level model for most countries due to the lack of data points. Our pooled model outperforms several individual country models according to the fitting and forecasting measures. We find that each country's market competitiveness and the market price affect the rate of diffusion and show that random effects of 3G and 4G are positively correlated. This framework provides the fine prediction capability even with few data points and valuable information for formulating policies on a new generation.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117309716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce an asymptotically unbiased estimator for the full high-dimensional parameter vector in linear regression models where the number of variables exceeds the number of available observations. The estimator is accompanied by a closed-form expression for the covariance matrix of the estimates that is free of tuning parameters. This enables the construction of confidence intervals that are valid uniformly over the parameter vector. Estimates are obtained by using a scaled Moore-Penrose pseudoinverse as an approximate inverse of the singular empirical covariance matrix of the regressors. The approximation induces a bias, which is then corrected for using the lasso. Regularization of the pseudoinverse is shown to yield narrower confidence intervals under a suitable choice of the regularization parameter. The methods are illustrated in Monte Carlo experiments and in an empirical example where gross domestic product is explained by a large number of macroeconomic and financial indicators.
{"title":"Inference in High-Dimensional Linear Regression Models","authors":"Tom Boot, D. Nibbering","doi":"10.2139/ssrn.2932785","DOIUrl":"https://doi.org/10.2139/ssrn.2932785","url":null,"abstract":"We introduce an asymptotically unbiased estimator for the full high-dimensional parameter vector in linear regression models where the number of variables exceeds the number of available observations. The estimator is accompanied by a closed-form expression for the covariance matrix of the estimates that is free of tuning parameters. This enables the construction of confidence intervals that are valid uniformly over the parameter vector. Estimates are obtained by using a scaled Moore-Penrose pseudoinverse as an approximate inverse of the singular empirical covariance matrix of the regressors. The approximation induces a bias, which is then corrected for using the lasso. Regularization of the pseudoinverse is shown to yield narrower confidence intervals under a suitable choice of the regularization parameter. The methods are illustrated in Monte Carlo experiments and in an empirical example where gross domestic product is explained by a large number of macroeconomic and financial indicators.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117218443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The well-known Preston Curve shows the relationship between per capita output and health achievements. What role does government policy play in the alleviation of poverty?
{"title":"Economic Growth and Health Inequality: The Perspective of Gender","authors":"P. Krishnamoorthy","doi":"10.2139/ssrn.2919294","DOIUrl":"https://doi.org/10.2139/ssrn.2919294","url":null,"abstract":"The well known Preston Curve shows relationship between per capita output and health achievements. What role does government policy play in the alleviation of poverty?","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114918641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction market prices are often used as estimates of the probability of outcomes in future elections and referendums. I argue that this practice is often flawed, and I develop a model that empiricists can use to partially identify probabilities from prediction market prices. In the special case of log utility, election outcome probabilities can be fully (point) identified by a simple type of futures contract that is not commonly used in practice. Prediction markets are also used to examine whether stock market valuations would be higher under one election outcome than the other. I show that this question cannot be answered without assuming investors' higher-order beliefs are correct. In the case of the 2016 US presidential election, my model suggests that investors had incorrect higher-order beliefs, and that these incorrect higher-order beliefs affected the aggregate value of the S&P 500 by approximately $400 billion, or 2% of its aggregate value.
{"title":"Interpreting Prediction Market Prices","authors":"Jared Williams","doi":"10.2139/ssrn.2815131","DOIUrl":"https://doi.org/10.2139/ssrn.2815131","url":null,"abstract":"Prediction market prices are often used as estimates of the probability of outcomes in future elections and referendums. I argue that this practice is often flawed, and I develop a model that empiricists can use to partially identify probabilities from prediction market prices. In the special case of log utility, election outcome probabilities can be fully (point) identified by a simple type of futures contract that is not commonly used in practice. Prediction markets are also used to examine whether stock market valuations would be higher under one election outcome than the other. I show that this question cannot be answered without assuming investors' higher-order beliefs are correct. In the case of the 2016 US presidential election, my model suggests that investors had incorrect higher-order beliefs, and that these incorrect higher-order beliefs affected the aggregate value of the S&P 500 by approximately $400 billion, or 2% of its aggregate value.","PeriodicalId":320844,"journal":{"name":"PSN: Econometrics","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122300463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}