Self-fulfilling Bandits: Dynamic Selection in Algorithmic Decision-making
Jin Li, Ye Luo, Xiaowei Zhang
Pub Date : 2021-10-19  DOI: 10.2139/ssrn.3912989
This paper identifies and addresses dynamic selection problems that arise in online learning algorithms with endogenous data. In a contextual multi-armed bandit model, we show that a novel bias (self-fulfilling bias) arises because the endogeneity of the data influences the algorithm's choice of actions, which in turn affects the distribution of future data to be collected and analyzed. We propose a class of algorithms that correct for the bias by incorporating instrumental variables into leading online learning algorithms. These algorithms converge to the true parameter values while attaining low (logarithmic-like) regret. We further prove a central limit theorem for statistical inference on the parameters of interest. To establish the theoretical properties, we develop a general technique that untangles the interdependence between data and actions.
Enhancing Delphi Method with Algorithmic Estimates for Software Effort Estimation: An Experimental Study
Tharwon Arnuphaptrairong
Pub Date : 2021-08-04  DOI: 10.5121/ijsea.2021.12401
A review of the literature shows that more accurate software effort and cost estimation methods are needed for software project management success. Expert judgment and algorithmic model estimation are the two predominant methods discussed in the literature, and both are reported to achieve comparable levels of accuracy. Combining the two methods has been suggested as a way to increase estimation accuracy. The Delphi method is a promising structured expert-judgment method for group software effort estimation, yet surprisingly little about it has been reported in the literature. The objective of this study is to test whether Delphi estimates become more accurate when the participants in the Delphi process are exposed to algorithmic estimates. A Delphi experiment was therefore conducted in which participants were exposed to three algorithmic estimates: Function Points, COCOMO, and Use Case Points. The findings show that the Delphi estimates are slightly more accurate than the statistical combination of individual expert estimates, although the difference is not statistically significant. However, the Delphi estimates are significantly more accurate than the individual estimates. The results also show that the Delphi estimates are slightly less optimistic than the statistical combination of individual expert estimates, but this difference is not statistically significant either. The adapted Delphi experiment is thus a promising technique for improving software cost estimation accuracy.
Donuts and Distant CATEs: Derivative Bounds for RD Extrapolation
Connor Dowd
Pub Date : 2021-07-02  DOI: 10.2139/ssrn.3641913
Regression Discontinuity (RD) designs use policy thresholds to identify causal treatment effects at the threshold. In most settings, the Local Average Treatment Effect (LATE) at the threshold is not the parameter of interest. I provide high-level smoothness conditions under which extrapolation across the threshold is possible. Under these restrictions, both estimation and inference for the LATE at other locations are possible. In some situations, extrapolation may be necessary merely to estimate the LATE at the threshold itself. RD donuts are one such situation, and I provide results allowing estimation and inference in that setting as well.
Consistent Spread Dynamics for CVA Risk Charge and Historical Value-at-Risk by Means of Cross Sectional / Consolidated Bucket Link Copula Simulation
Christian Buch Kjeldgaard
Pub Date : 2021-06-27  DOI: 10.2139/ssrn.3874830
This paper describes the modelling of spread risk, in the case of missing or illiquid market data, using a subset of good-quality, liquid bond/credit default swap (CDS) spread time series. The proposed method links copula simulation to the actual historical spread dynamics, which is important when calculating the credit valuation adjustment (CVA) risk charge and Value-at-Risk (VaR) with historical simulation. The methodology centers on buckets of similar spreads. Buckets with good data are straightforward, whereas buckets without data rely on a cross-sectional model spread constructed from the buckets with good data. Residuals from regressing the bucket spread returns on market index returns are used to derive a link for each bucket. The link is then used to simulate the spread dynamics for missing or illiquid spreads, using a modified one-factor copula, ensuring a link between the actual and simulated residuals that maintains the risk dynamics. The result of the copula simulation is transformed into quantiles that are plugged into residual distributions estimated from actual good-quality data, thereby preserving the properties of actual market data so that the choice of copula affects only the risk dynamics, not the distributions of the risk factors.
Common and Idiosyncratic Conditional Volatility Factors: Theory and Empirical Evidence
F. Blasques, Enzo D’Innocenzo, S. J. Koopman
Pub Date : 2021-06-21  DOI: 10.2139/ssrn.3875612
We propose a multiplicative dynamic factor structure for the conditional modelling of the variances of an N-dimensional vector of financial returns, identifying common and idiosyncratic conditional volatility factors. The econometric framework is based on an observation-driven time series model that is simple and parsimonious. The common factor is modeled with a normal density and is robust to fat-tailed returns, as it averages information over the cross-section of the observed N-dimensional vector of returns. The idiosyncratic factors are designed to capture the erratic shocks in returns and therefore rely on fat-tailed densities. The model can be of high dimension, remains parsimonious, and does not necessarily suffer from the curse of dimensionality. Its relatively simple structure leads to straightforward computations for parameter estimation and signal extraction of the factors. We derive the stochastic properties of the proposed dynamic factor model, including bounded moments, stationarity, ergodicity, and filter invertibility, and we further establish consistency and asymptotic normality of the maximum likelihood estimator. The finite-sample properties of the estimator and the reliability of our method for tracking the common conditional volatility factor are investigated by means of a Monte Carlo study. Finally, we illustrate the approach with two empirical studies: the first for a panel of financial returns from ten stocks of the S&P100, the second for the panel of returns from all S&P100 stocks.
Some Remarks on CCP-based Estimators of Dynamic Models
M. Fosgerau, E. Melo, M. Shum, Jesper Riis-Vestergaard Sørensen
Pub Date : 2021-05-09  DOI: 10.2139/ssrn.3793008
This note provides several remarks on conditional choice probability (CCP) based estimation approaches for dynamic discrete-choice models. Specifically, the Arcidiacono and Miller (2011) estimation procedure relies on the "inverse-CCP" mapping ψ(p) from CCPs to choice-specific value functions. Exploiting the convex-analytic structure of discrete choice models, we discuss two approaches for computing this mapping, using either linear or convex programming, for models in which the utility shocks can follow arbitrary parametric distributions. Furthermore, the ψ function is generally distinct from the "selection adjustment" term (i.e., the expectation of the utility shock for the chosen alternative), so that computational approaches for the latter may not be appropriate for computing ψ.
Improving External Validity of Machine Learning, Reduced Form, and Structural Macroeconomic Models using Panel Data
Cameron Fen, Samir S Undavia
Pub Date : 2021-05-04  DOI: 10.2139/ssrn.3839863
We show that adding countries as a panel dimension to macroeconomic data can statistically significantly improve the generalization ability of structural and reduced-form models, and can allow machine learning methods to outperform these and other macroeconomic forecasting models. Using GDP forecasts for evaluation, this procedure reduces root mean squared error (RMSE) by 12% across horizons and models for certain reduced-form models and by 24% across horizons for structural DSGE models. Removing US data from the training set and forecasting out-of-sample country by country, we show that both reduced-form and structural models become more policy invariant and outperform a baseline model that uses US data only. Finally, given the comparative advantage of "nonparametric" machine learning forecasting models in a data-rich regime, we demonstrate that our recurrent neural network (RNN) model and automated machine learning (AutoML) approach outperform all baseline economic models in this regime. Robustness checks indicate that the machine learning outperformance is reproducible, numerically stable, and generalizes across models.
Constrained Optimal Execution in Limit Order Book Market with Power-shaped Market Depth
Weipin Wu, Jianjun Gao, Dian Yu
Pub Date : 2021-03-05  DOI: 10.2139/ssrn.3798235
Instead of using the classical block-shaped market depth to build the optimal execution model, this work studies the constrained optimal execution problem in a limit order book (LOB) market with power-shaped market depth. Unlike the linear price impact implied by block-shaped market depth, the price impact generated under power-shaped market depth is a nonlinear function, which is consistent with empirical studies. We also consider a class of state-dependent upper and lower bound constraints on trading strategies, which includes the non-negativity (no short selling) constraint as a special case. Although both the power-shaped market depth and the trading strategy constraints make the optimal execution problem hard to solve analytically, we establish several properties of the optimal execution policy and the optimal execution cost of our model. Illustrative examples show that, when the market exhibits finite resilience, the optimal execution policy derived from our model is quite different from the one generated by the block-shaped model; when market resilience is infinite, however, the optimal execution policies derived from the two kinds of models are equivalent. For a special model with stochastic block-shaped market depth and infinite market resilience, we derive the analytical solution of the optimal execution problem by exploiting the state-separation property induced by its structure. The resulting optimal execution strategy is a piecewise affine function of the current remaining position, which can be computed offline efficiently by solving two coupled equations. Finally, thanks to its explicit solution, we use this optimal execution model to demonstrate that it admits no price manipulation opportunity for the two-sided trading strategy.
Crowd-Sourcing for Data Science and Quantifiable Challenges: Optimal Contest Design
Milind Dawande, G. Janakiraman, Goutham Takasi
Pub Date : 2020-11-30  DOI: 10.2139/ssrn.3740224
We study the optimal design of a crowd-sourcing contest in settings where the output (from the contestants) is quantifiable -- for example, a data science challenge. This setting is in contrast to settings where the output is only qualitative and cannot be quantified in an objective manner -- for example, when the goal of the contest is to design a logo. The rapidly growing literature on the design of crowd-sourcing contests focuses largely on ordinal contests -- these are contests where contestants' outputs are ranked by the organizer and awards are based on the relative ranks. Such contests are ideally suited for the latter setting, where output is qualitative. For our setting (quantitative output), it is possible to design contests where awards are based on the actual outputs and not on their ranking alone -- thus, our space of contest designs includes ordinal contests but is significantly larger. We derive an easy-to-implement contest design for this setting and establish its optimality.