The Value Premium. E. Fama and K. French (2020). DOI: 10.2139/ssrn.3525096. Received January 21, 2020; editorial decision July 21, 2020; Editor: Jeffrey Pontiff.

Value premiums, which we define as value portfolio returns in excess of market portfolio returns, are on average much lower in the second half of the July 1963–June 2019 period. But the high volatility of monthly premiums prevents us from rejecting the hypothesis that expected premiums are the same in both halves of the sample. Regressions that forecast value premiums with book-to-market ratios in excess of market (BM–BMM) produce more reliable evidence of second-half declines in expected value premiums, but only if we assume the regression coefficients are constant during the sample period.
Smoothness-Adaptive Contextual Bandits. Y. Gur, Ahmadreza Momeni, and Stefan Wager (2019). DOI: 10.2139/ssrn.3893198

In nonparametric contextual bandit formulations, a key complexity driver is the smoothness of payoff functions with respect to covariates. In many practical settings, the smoothness of payoffs is unknown, and misspecification of smoothness may severely deteriorate the performance of existing methods. In the paper “Smoothness-Adaptive Contextual Bandits,” Yonatan Gur, Ahmadreza Momeni, and Stefan Wager consider a framework where the smoothness of payoff functions is unknown and study when and how algorithms may adapt to unknown smoothness. First, they establish that designing algorithms that adapt to unknown smoothness is, in general, impossible. However, under a natural self-similarity condition, they establish that adapting to unknown smoothness is possible and devise a general policy for achieving smoothness-adaptive performance. The policy infers the smoothness of payoffs throughout the decision-making process while leveraging the structure of off-the-shelf nonadaptive policies. It matches (up to a logarithmic scale) the performance that is achievable when the smoothness of payoffs is known in advance.
Theory Ordinals Can Replace ZFC in Computer Science. C. Hewitt (2019). DOI: 10.2139/ssrn.3457802

The theory Ordinals can serve as a replacement for the theory ZFC because:
• Ordinals are a very well understood mathematical structure.
• There is only one model of Ordinals up to a unique isomorphism, which decides every proposition of the theory Ordinals in the model.
• Ordinals is much more powerful than ZFC. Standard mathematics that has been carried out in ZFC can more easily be done in Ordinals. Axioms of ZFC are in effect theorems of Ordinals. Cardinals of ZFC are among the ordinals of the theory Ordinals.
• The theory Ordinals is algorithmically inexhaustible, i.e., it is impossible to computationally enumerate the theorems of the theory, thereby reinforcing the intuition behind [Franzen 2004]. Contrary to [Church 1934], the conclusion in this article is to abandon the assumption that the theorems of a theory must be computationally enumerable, while retaining the requirement that proof checking must be computationally decidable.
• There are no “monsters” [Lakatos 1976] in models of Ordinals such as the ones in models of first-order ZFC. Consequently, unlike ZFC, the theory Ordinals is not subject to cyberattacks using “monsters” in models such as the ones that plague first-order ZFC.

The theory Ordinals is based on intensional types, as opposed to the extensional sets of ZFC. Using intensional types together with strongly typed ordinal induction is key to proving that there is just one model of the theory Ordinals up to a unique isomorphism.
Bayesian Inference for Markov-switching Skewed Autoregressive Models. Stéphane Lhuissier (2019). DOI: 10.2139/ssrn.3442765

We examine Markov-switching autoregressive models where the commonly used Gaussian assumption for disturbances is replaced with a skew-normal distribution. This allows us to detect regime changes not only in the mean and the variance of a specified time series, but also in its skewness. A Bayesian framework is developed based on Markov chain Monte Carlo sampling. Our informative prior distributions lead to closed-form full conditional posterior distributions, whose sampling can be efficiently conducted within a Gibbs sampling scheme. The usefulness of the methodology is illustrated with a real-data example from U.S. stock markets.
Normal Approximation in Large Network Models. Michael P. Leung and H. Moon (2019). DOI: 10.2139/ssrn.3377709

We develop a methodology for proving central limit theorems in network models with strategic interactions and homophilous agents. We consider an asymptotic framework in which the size of the network tends to infinity, which is useful for inference in the typical setting in which the sample consists of a single large network. In the presence of strategic interactions, network moments are generally complex functions of network components, where a node's component consists of all alters to which it is directly or indirectly connected. We find that a modification of "exponential stabilization" conditions from the stochastic geometry literature provides a useful formulation of weak dependence for moments of this type. Our first contribution is to prove a CLT for a large class of network moments satisfying stabilization and a moment condition. Our second contribution is a methodology for deriving primitive sufficient conditions for stabilization using results in branching process theory. We apply the methodology to static and dynamic models of network formation and discuss how it can be used more broadly.
Automatic Regrouping of Strata in the Goodness-of-Fit Chi-Square Test. Vicente Nunez Anton, Juan Manuel Pérez Salamero González, Marta Regúlez-Castillo, M. Ventura-Marco, and Carlos Vidal-Meliá (2019). DOI: 10.2139/ssrn.3337624

Pearson’s chi-square test is widely employed in social and health sciences to analyse categorical data and contingency tables. For the test to be valid, the sample size must be large enough to provide a minimum number of expected elements per category. This paper develops functions for regrouping strata automatically, thus enabling the goodness-of-fit test to be performed within an iterative procedure. The usefulness and performance of these functions is illustrated by means of a simulation study and the application to different datasets. Finally, the iterative use of the functions is applied to the Continuous Sample of Working Lives, a dataset that has been used in a considerable number of studies, especially on labour economics and the Spanish public pension system.
The Uniform Validity of Impulse Response Inference in Autoregressions. A. Inoue and L. Kilian (2019). DOI: 10.24149/wp1908

Existing proofs of the asymptotic validity of conventional methods of impulse response inference based on higher-order autoregressions are pointwise only. In this paper, we establish the uniform asymptotic validity of conventional asymptotic and bootstrap inference about individual impulse responses and vectors of impulse responses when the horizon is fixed with respect to the sample size. For inference about vectors of impulse responses based on Wald test statistics to be uniformly valid, lag-augmented autoregressions are required, whereas inference about individual impulse responses is uniformly valid under weak conditions even without lag augmentation. We introduce a new rank condition that ensures the uniform validity of inference on impulse responses and show that this condition holds under weak conditions. Simulations show that the highest finite-sample accuracy is achieved when bootstrapping the lag-augmented autoregression using the bias adjustments of Kilian (1999). The conventional bootstrap percentile interval for impulse responses based on this approach remains accurate even at long horizons. We provide a formal asymptotic justification for this result.
Fast, "Robust", and Approximately Correct: Estimating Mixed Demand Systems. B. Salanié and F. Wolak (2018). DOI: 10.1920/WP.CEM.2018.6418

Many econometric models used in applied work integrate over unobserved heterogeneity. We show that a class of these models that includes many random coefficients demand systems can be approximated by a "small-sigma" expansion that yields a straightforward 2SLS estimator. We study in detail the models of market shares popular in empirical IO ("macro BLP"). Our estimator is only approximately correct, but it performs very well in practice. It is extremely fast and easy to implement, and it accommodates misspecification of the higher moments of the distribution of the random coefficients. At the very least, it provides excellent starting values for more commonly used estimators of these models.
The Bigger Picture: Combining Econometrics with Analytics Improve Forecasts of Movie Success. Steven F. Lehrer and Tian Xie (2018). DOI: 10.3386/w24755

There exists significant hype regarding how much machine learning and incorporating social media data can improve forecast accuracy in commercial applications. To assess whether the hype is warranted, we use data from the film industry in simulation experiments that contrast econometric approaches with tools from the predictive analytics literature. Further, we propose new strategies that combine elements from each literature in a bid to capture richer patterns of heterogeneity in the underlying relationship governing revenue. Our results demonstrate the importance of social media data and the value of hybrid strategies that combine econometrics and machine learning when conducting forecasts with new big data sources. Specifically, while both least squares support vector regression and recursive partitioning strategies greatly outperform dimension reduction strategies and traditional econometrics approaches in forecast accuracy, there are further significant gains from using hybrid approaches. Further, Monte Carlo experiments demonstrate that these benefits arise from the significant heterogeneity in how social media measures and other film characteristics influence box office outcomes.
Composite Likelihood Methods for Large Bayesian VARs with Stochastic Volatility. J. Chan, Eric Eisenstat, Chenghan Hou, and G. Koop (2018). DOI: 10.2139/ssrn.3187049

Adding multivariate stochastic volatility of a flexible form to large Vector Autoregressions (VARs) involving over a hundred variables has proved challenging due to computational considerations and over-parameterization concerns. The existing literature either works with homoskedastic models or smaller models with restrictive forms for the stochastic volatility. In this paper, we develop composite likelihood methods for large VARs with multivariate stochastic volatility. These involve estimating large numbers of parsimonious models and then taking a weighted average across these models. We discuss various schemes for choosing the weights. In our empirical work involving VARs of up to 196 variables, we show that composite likelihood methods have similar properties to existing alternatives used with small data sets in that they estimate the multivariate stochastic volatility in a flexible and realistic manner and they forecast comparably. In very high dimensional VARs, they are computationally feasible where other approaches involving stochastic volatility are not, and they produce better forecasts than homoskedastic VARs with natural conjugate priors.