Invited Discussion of J.O. Berger: Four Types of Frequentism and Their Interplay with Bayesianism
L. Pericchi. DOI: https://doi.org/10.51387/23-nejsds4b
One of the merits of this far-reaching article is to show that not all "Frequentisms" are equal, and furthermore that there are frequentist approaches which are scientifically compelling, notably the "Empirical Frequentist" (EP), which can be paraphrased as "the proof of the pudding is in the eating". Somewhat surprisingly to some (but anticipated in Wald's admissibility theorems in decision theory) is the conclusion that the easiest and best way to achieve the EP property is through Bayesian reasoning, or more exactly, through Objective Bayesian reasoning. (I avoid the expression "Empirical Bayesian reasoning", which would be appropriate were it not already associated with a very particular group of methods; it is argued below that a better name would be "Bayes Empirical".) I concentrate on hypothesis testing, since that is the area of deepest disagreement among the schools. From this substantive classification of Frequentisms emerges the opportunity for a convergence between schools, which is even more satisfying than a compromise. Convergence may only be fully achieved when the prior probabilities are known, which is not usually the case. However, particularly in hypothesis testing, prior probabilities can and should be estimated, with their uncertainty acknowledged in a Bayesian way. This may perhaps be termed Bayes Empirical: the systematic empirical study of prior probabilities based on relevant data, acknowledging their uncertainty.
Gamma-Minimax Wavelet Shrinkage for Signals with Low SNR
Dixon Vimalajeewa, A. Dasgupta, F. Ruggeri, B. Vidakovic. DOI: https://doi.org/10.51387/23-nejsds43
In this paper, we propose a method for wavelet denoising of signals contaminated with Gaussian noise when prior information about the $L^2$-energy of the signal is available. Assuming the independence model, under which the wavelet coefficients are treated individually, we propose simple, level-dependent shrinkage rules that turn out to be Γ-minimax for a suitable class of priors. The proposed methodology is particularly well suited to denoising tasks when the signal-to-noise ratio is low, which we illustrate by simulations on a battery of standard test functions. A comparison with some commonly used wavelet shrinkage methods is provided.
Evaluating Designs for Hyperparameter Tuning in Deep Neural Networks
Chenlu Shi, Ashley Kathleen Chiu, Hongquan Xu. DOI: https://doi.org/10.51387/23-nejsds26
The performance of a learning technique relies heavily on its hyperparameter settings, which calls for hyperparameter tuning; for sophisticated deep learning techniques, however, tuning can be prohibitively expensive computationally. It is therefore desirable to explore the relationship between the hyperparameters and the performance of a learning technique efficiently, which in turn calls for design strategies that collect informative data in few runs. Various designs can be considered for this purpose, and the question of which design to use naturally arises. In this paper, we examine the use of different types of designs for efficiently collecting informative data to study the surface of test accuracy, a measure of the performance of a learning technique, over the hyperparameters. Under the settings we considered, we find that the strong orthogonal array outperforms all other comparable designs.
Particle Swarm Optimization for Finding Efficient Longitudinal Exact Designs for Nonlinear Models
Ping-Yang Chen, Ray-Bing Chen, W. Wong. DOI: https://doi.org/10.51387/23-nejsds45
Designing longitudinal studies is generally very challenging because of the complex optimization problems involved. We show that the popular nature-inspired metaheuristic algorithm Particle Swarm Optimization (PSO) can find different types of optimal exact designs for longitudinal studies with various correlation structures and various types of models. In particular, we demonstrate that PSO-generated D-optimal designs for the widely used Michaelis-Menten model with various correlation structures agree with the analytically derived locally D-optimal designs reported in the literature when there are only 2 observations per subject, and with the reported numerical D-optimal designs when there are 3 or 4 observations per subject. We further show the usefulness of PSO by applying it to generate new locally D-optimal designs when there are 5 or more observations per subject. Additionally, we find various optimal longitudinal designs for a growth-curve model commonly used in animal studies and for a nonlinear HIV dynamic model for studying T-cells in AIDS subjects. In particular, we find c-optimal exact designs for estimating one or more functions of the model parameters, along with other types of multiple-objective optimal designs.
{"title":"A Not-so-radical Rejoinder: Habituate Systems Thinking and Data (Science) Confession for Quality Enhancement","authors":"Xiao Meng","doi":"10.51387/22-nejsds6rej","DOIUrl":"https://doi.org/10.51387/22-nejsds6rej","url":null,"abstract":"","PeriodicalId":94360,"journal":{"name":"The New England Journal of Statistics in Data Science","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73337670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Design of Controlled Experiments for Personalized Decision Making in the Presence of Observational Covariates
Yezhuo Li, Qiong Zhang, A. Khademi, Boshi Yang. DOI: https://doi.org/10.51387/23-nejsds22
Controlled experiments are widely applied in many areas, such as clinical trials and user-behavior studies in IT companies. Recently, experimental design problems have been studied with the aim of facilitating personalized decision making. In this paper, we investigate the optimal design of multiple-treatment allocation for personalized decision making in the presence of observational covariates associated with the experimental units (often patients or users). We assume that the response of a subject assigned to a treatment follows a linear model that includes the interaction between covariates and treatments, to facilitate precision decision making. We define the objective as the maximum variance of the estimated personalized treatment effects over the different treatments and covariate values, and we obtain the optimal design by minimizing this objective. Using a semidefinite-programming reformulation of the original optimization problem, we compute the optimal design with a YALMIP- and MOSEK-based solver. Numerical studies are provided to assess the quality of the optimal design.
Seamless Clinical Trials with Doubly Adaptive Biased Coin Designs
Hongjian Zhu, Jun Yu, D. Lai, Li Wang. DOI: https://doi.org/10.51387/23-nejsds25
In addition to answering scientific questions, clinical trialists often explore or require other design features, such as increasing power while controlling the type I error rate, minimizing unnecessary exposure to inferior treatments, and comparing multiple treatments in one clinical trial. We propose implementing adaptive seamless design (ASD) with response-adaptive randomization (RAR) to satisfy these varied design objectives. However, the combination of ASD and RAR poses a challenge in controlling the type I error rate. In this paper, we investigate how to exploit the advantages of the two adaptive methods while controlling the type I error rate, and we offer a theoretical foundation for the procedure. Numerical studies demonstrate that our methods can achieve efficient and ethical objectives while controlling the type I error rate.
Inaugural Editorial. Can We Achieve Our Mission: Fast, Accessible, Cutting-edge, and Top-quality?
Colin O. Wu, Ming-Hui Chen, Min-ge Xie, HaiYing Wang, Jing Wu. DOI: https://doi.org/10.51387/23-nejsds11edi
We are pleased to launch the first issue of the New England Journal of Statistics in Data Science (NEJSDS). NEJSDS is the official journal of the New England Statistical Society (NESS), published under the leadership of its Vice President for Journal and Publication and sponsored by the College of Liberal Arts and Sciences, University of Connecticut. The aims of the journal are to serve as an interface between statistics and other disciplines in data science, to encourage researchers to exchange innovative ideas, and to promote data science methods to the general scientific community. The journal publishes high-quality original research, novel applications, and timely review articles in all aspects of data science, including all areas of statistical methodology, machine learning and artificial intelligence methods, novel algorithms, computational methods, data management and manipulation, and applications of data science methods, among others. We encourage authors to submit collaborative work driven by real-life problems posed by researchers, administrators, educators, or other stakeholders, which require original and innovative solutions from data scientists.
Clustering-Based Imputation for Dropout Buyers in Large-Scale Online Experimentation
Sumin Shen, Huiying Mao, Zezhong Zhang, Zili Chen, Keyu Nie, Xinwei Deng. DOI: https://doi.org/10.51387/23-nejsds33
In online experimentation, appropriate metrics (e.g., purchase) provide strong evidence to support hypotheses and enhance decision making. However, incomplete metrics occur frequently in online experimentation, making the available data much scarcer than the planned online experiments (e.g., A/B tests) would suggest. In this work, we introduce the concept of dropout buyers and categorize users with incomplete metric values into two groups: visitors and dropout buyers. For the analysis of incomplete metrics, we propose a clustering-based imputation method using k-nearest neighbors. The proposed method considers both experiment-specific features and users' activities along their shopping paths, allowing different imputation values for different users. To facilitate efficient imputation in large-scale data sets, the method uses a combination of stratification and clustering. Its performance is compared with several conventional methods in both simulation studies and a real online experiment at eBay.
Approximate Confidence Distribution Computing
S. Thornton, Wentao Li, Min-ge Xie. DOI: https://doi.org/10.51387/23-nejsds38
Approximate confidence distribution computing (ACDC) offers a new take on the rapidly developing field of likelihood-free inference from within a frequentist framework. The appeal of this computational method for statistical inference hinges on the concept of a confidence distribution, a special type of estimator defined with respect to the repeated-sampling principle. An ACDC method provides frequentist validation for computational inference in problems with unknown or intractable likelihoods. The main theoretical contribution of this work is the identification of a matching condition necessary for the frequentist validity of inference from this method. In addition to providing an example of how a modern understanding of confidence distribution theory can connect the Bayesian and frequentist inferential paradigms, we make a case for expanding the current scope of so-called approximate Bayesian inference to include non-Bayesian inference, by targeting a confidence distribution rather than a posterior. The main practical contribution of this work is the development of a data-driven approach to ACDC in both Bayesian and frequentist contexts. The ACDC algorithm is data-driven through the selection of a data-dependent proposal function, the structure of which is quite general and adaptable to many settings. We explore three numerical examples that both verify the theoretical arguments in the development of ACDC and suggest instances in which ACDC computationally outperforms approximate Bayesian computing methods.