Covariate-adaptive randomization schemes such as minimization and stratified permuted blocks are often applied in clinical trials to balance treatment assignments across prognostic factors. Existing theoretical developments on inference after covariate-adaptive randomization are mostly limited to situations where a correct model between the response and covariates can be specified or the randomization method has well-understood properties. Based on stratification with the covariate levels used in randomization, together with further adjustment for covariates not used in randomization, in this article we propose several estimators for model-free inference on the average treatment effect, defined as the difference between the response means under two treatments. We establish asymptotic normality of the proposed estimators under all popular covariate-adaptive randomization schemes, including minimization, whose theoretical properties were previously unclear, and we show that the asymptotic distributions are invariant with respect to the covariate-adaptive randomization method. Consistent variance estimators are constructed for asymptotic inference. Asymptotic relative efficiencies and finite-sample properties of the estimators are also studied. We recommend using one of the proposed estimators for valid and model-free inference after covariate-adaptive randomization.
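As a concrete toy illustration of the stratification idea underlying these estimators, here is a minimal numpy sketch of a stratified difference-in-means estimate (names and data are invented; the paper's covariate-adjusted estimators go beyond this):

```python
import numpy as np

def stratified_ate(y, treat, stratum):
    """Stratified difference-in-means estimate of the average treatment
    effect: within-stratum treated-minus-control means, weighted by
    stratum size.  A minimal sketch, not the paper's full covariate-
    adjusted estimator."""
    y, treat, stratum = map(np.asarray, (y, treat, stratum))
    est = 0.0
    for s in np.unique(stratum):
        idx = stratum == s
        y_s, t_s = y[idx], treat[idx]
        diff = y_s[t_s == 1].mean() - y_s[t_s == 0].mean()
        est += idx.mean() * diff          # weight by stratum proportion
    return est

# toy data: two strata, constant unit-level effect of -1.0
y       = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
treat   = np.array([1,   0,   1,   0,   1,   0,   1,   0])
stratum = np.array([0,   0,   0,   0,   1,   1,   1,   1])
print(stratified_ate(y, treat, stratum))   # -1.0
```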
T. Ye, Yanyao Yi, J. Shao. "Inference on Average Treatment Effect under Minimization and Other Covariate-Adaptive Randomization Methods." arXiv: Methodology, 2020-07-19. doi:10.1093/BIOMET/ASAB015
Fithian and Hastie (2014) proposed a sampling scheme called local case-control (LCC) sampling that achieves stability and efficiency through a clever adjustment tied to the logistic model. It is particularly useful for classification with large and imbalanced data. This paper proposes a more general sampling scheme based on the working principle that data points deserve higher sampling probability if they carry more information or appear "surprising", in the sense of, for example, a large pilot prediction error or a large absolute score. Compared with the existing sampling schemes of Fithian and Hastie (2014) and Ai et al. (2018), the proposed scheme has several advantages. It adaptively yields the optimal form for a variety of objectives, including the LCC and Ai et al. (2018) samplings as special cases. Under the same model specifications, the proposed estimator performs no worse than those in the literature. The estimation procedure is valid even if the model is misspecified and/or the pilot estimator is inconsistent or dependent on the full data. We present theoretical justifications of the claimed advantages and of the optimality of the estimation and the sampling design. Unlike Ai et al. (2018), our large-sample theory is population-wise rather than data-wise. Moreover, the proposed approach can be applied to unsupervised learning, since it essentially requires only a specific loss function and no response-covariate structure of the data. Numerical studies are carried out, and evidence in support of the theory is presented.
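For reference, the LCC acceptance step of Fithian and Hastie (2014) that this work generalizes can be sketched in a few lines of numpy (toy data and names are ours; the intercept correction and refitting steps are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def lcc_subsample(X, y, pilot_prob, rng):
    """Local case-control acceptance step: keep point i with probability
    |y_i - p_tilde(x_i)|, so points the pilot model finds 'surprising'
    are kept more often."""
    accept = np.abs(y - pilot_prob)        # acceptance probabilities
    keep = rng.random(len(y)) < accept
    return X[keep], y[keep], keep

# toy imbalanced data with a constant pilot p_tilde = 0.1
X = rng.normal(size=(10_000, 2))
y = (rng.random(10_000) < 0.1).astype(int)
pilot = np.full(10_000, 0.1)
Xs, ys, keep = lcc_subsample(X, y, pilot, rng)
# positives are accepted with probability 0.9, negatives with 0.1,
# so the subsample is far more balanced than the full data
```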
Xinwei Shen, Kani Chen, Wen Yu. "Surprise sampling: Improving and extending the local case-control sampling." arXiv: Methodology, 2020-07-06. doi:10.1214/21-EJS1844
The multivariate Fay-Herriot model is quite effective in combining information through correlations among small area survey estimates of related variables, historical survey estimates of the same variable, or both. Though the literature on small area estimation is already very rich, the construction of second-order efficient confidence intervals from multivariate models has so far received very little attention. In this paper, we develop a parametric bootstrap method for constructing a second-order efficient confidence interval for a general linear combination of small area means under the multivariate Fay-Herriot normal model. The proposed parametric bootstrap method replaces difficult and tedious analytical derivations with efficient algorithms and high-speed computing. Moreover, the proposed method is more versatile than the analytical method because the parametric bootstrap can easily be applied to any method of model parameter estimation and any specific structure of the variance-covariance matrix of the multivariate Fay-Herriot model, avoiding the cumbersome and time-consuming calculations required by the analytical method. We apply the proposed methodology to construct confidence intervals for the median income of four-person families for the fifty states and the District of Columbia in the United States. Our data analysis demonstrates that the proposed parametric bootstrap method generally provides much shorter confidence intervals than the corresponding traditional direct method. Moreover, the confidence intervals obtained from the multivariate model are generally shorter than those from the corresponding univariate model, indicating the potential advantage of exploiting the correlations of the median income of four-person families with the median incomes of three- and five-person families.
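The generic parametric bootstrap recipe (fit the model, redraw synthetic samples from the fit, take bootstrap quantiles of the re-estimated pivot) can be illustrated on a toy normal-mean problem; this is a sketch of the general idea only, not the paper's EBLUP-based procedure for the multivariate Fay-Herriot model:

```python
import numpy as np

def parametric_bootstrap_ci(y, B=2000, alpha=0.05, rng=None):
    """Parametric bootstrap CI for a normal mean: resample from the
    fitted N(mu_hat, sigma_hat^2) and invert the bootstrap quantiles
    of the studentized pivot."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    mu_hat, sigma_hat = y.mean(), y.std(ddof=1)
    boot = np.empty(B)
    for b in range(B):
        y_b = rng.normal(mu_hat, sigma_hat, size=n)   # draw from the fitted model
        boot[b] = (y_b.mean() - mu_hat) / (y_b.std(ddof=1) / np.sqrt(n))
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    se = sigma_hat / np.sqrt(n)
    return mu_hat - hi * se, mu_hat - lo * se

y = np.random.default_rng(1).normal(5.0, 2.0, size=50)
ci = parametric_bootstrap_ci(y)
```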
Takumi Saegusa, S. Sugasawa, P. Lahiri. "Parametric Bootstrap Confidence Intervals for the Multivariate Fay–Herriot Model." arXiv: Methodology, 2020-06-26. doi:10.1093/jssam/smaa038
Eduardo García-Portugués, J. Álvarez-Liébana, G. Álvarez-Pérez, W. González-Manteiga. "Goodness-of-fit Tests for Functional Linear Models Based on Integrated Projections." arXiv: Methodology, 2020-06-24. doi:10.1007/978-3-030-47756-1_15
Conventional likelihood-based information criteria for model selection rely on distributional assumptions about the data. However, for the complex data increasingly available in many scientific fields, specifying the underlying distribution is challenging, and the existing criteria may be too limited to handle a variety of model selection problems. Here, we propose a robust and consistent model selection criterion based on the empirical likelihood function, which is data-driven. In particular, this framework adopts plug-in estimators that can be obtained by solving external estimating equations, not limited to the empirical likelihood, which avoids potential computational convergence issues and allows versatile applications such as generalized linear models, generalized estimating equations, and penalized regressions. The formulation of the proposed criterion is initially derived from the asymptotic expansion of the marginal likelihood in the variable selection framework, but, more importantly, its consistent model selection property is established in a general context. Extensive simulation studies confirm that the proposal outperforms traditional model selection criteria. Finally, an application to the Atherosclerosis Risk in Communities Study illustrates the practical value of the proposed framework.
Chixiang Chen, Ming Wang, R. Wu, Runze Li. "A Robust Consistent Information Criterion for Model Selection based on Empirical Likelihood." arXiv: Methodology, 2020-06-23. doi:10.5705/ss.202020.0254
Christoph Schultheiss, Claude Renaux, Peter Bühlmann
We consider post-selection inference for high-dimensional (generalized) linear models. Data carving (Fithian et al., 2014) is a promising technique for this task. However, it suffers from the instability of the model selector and hence may lead to poor replicability, especially in high-dimensional settings. We propose the multicarve method, inspired by multisplitting, to improve stability and replicability. Furthermore, we extend existing concepts to group inference and illustrate the applicability of the methodology to generalized linear models as well.
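A single sample split conveys the basic idea that selection and inference can use disjoint data (a hedged sketch only: marginal correlation screening stands in for a generic selector, and none of the paper's selection-conditional carving machinery is reproduced):

```python
import numpy as np

def split_and_fit(X, y, rng):
    """Single-split sketch: select variables on one half (marginal
    correlation screening here), then fit OLS on the held-out half,
    so the inference half never saw the selection."""
    n = len(y)
    perm = rng.permutation(n)
    s1, s2 = perm[: n // 2], perm[n // 2 :]
    # selection half: keep the 2 predictors most correlated with y
    cors = np.abs(X[s1].T @ (y[s1] - y[s1].mean()))
    sel = np.sort(np.argsort(cors)[-2:])
    # inference half: OLS on the selected columns only
    beta, *_ = np.linalg.lstsq(X[s2][:, sel], y[s2], rcond=None)
    return sel, beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 3.0 * X[:, 4] + rng.normal(size=200)   # column 4 is the true signal
sel, beta = split_and_fit(X, y, rng)
```

Multisplitting, and multicarving in turn, repeats such splits many times and aggregates the resulting per-split inferences.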
Christoph Schultheiss, Claude Renaux, Peter Bühlmann. "Multicarving for high-dimensional post-selection inference." arXiv: Methodology, 2020-06-08. doi:10.1214/21-EJS1825
Synthetic control methods have gained popularity in causal studies with observational data, particularly for estimating the impacts of interventions applied to a small number of large units. Implementing synthetic control methods faces two major challenges: (a) estimating the weights for each control unit that define the synthetic control, and (b) providing statistical inference. To overcome these challenges, we propose a Bayesian framework that implements the synthetic control method with a parallelly shiftable convex hull and provides useful Bayesian inference, drawn from the duality between a penalized least squares method and a Bayesian maximum a posteriori (MAP) approach. Simulation results indicate that the proposed method yields smaller biases than the alternatives. We apply our Bayesian method to the real-data example of Abadie and Gardeazabal (2003) and find that the treatment effects are statistically significant during a subset of the post-treatment period.
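The first challenge, fitting convex-hull weights, reduces to least squares over the probability simplex; here is a numpy sketch via projected gradient descent (a plain convex-hull version only: the paper's parallelly shiftable hull and MAP formulation are not reproduced):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0)

def synthetic_control_weights(y1, Y0, n_iter=5000, lr=None):
    """Minimize ||y1 - Y0 w||^2 over the simplex by projected gradient
    descent, with step size 1/L for L the largest eigenvalue of Y0'Y0."""
    n_ctrl = Y0.shape[1]
    lr = lr or 1.0 / np.linalg.norm(Y0, 2) ** 2
    w = np.full(n_ctrl, 1.0 / n_ctrl)
    for _ in range(n_iter):
        grad = Y0.T @ (Y0 @ w - y1)
        w = project_simplex(w - lr * grad)
    return w

# toy data: treated unit sits 3/4 of the way between two control units,
# so the optimal weights are (0.25, 0.75)
Y0 = np.array([[0.0, 2.0], [0.0, 2.0], [0.0, 2.0]])
y1 = np.array([1.5, 1.5, 1.5])
w = synthetic_control_weights(y1, Y0)
```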
Gyuhyeong Goh, Jisang Yu. "Synthetic control method with convex hull restrictions: a Bayesian maximum a posteriori approach." arXiv: Methodology, 2020-05-28. doi:10.1093/ECTJ/UTAB015
C. Armero. "Bayesian Joint Models for Longitudinal and Survival Data." 2020-05-26. doi:10.1002/9781118445112.STAT08129

This paper takes a quick look at Bayesian joint models (BJMs) for longitudinal and survival data. A general formulation of BJMs is examined in terms of the sampling distribution of the longitudinal and survival processes, the conditional distribution of the random effects, and the prior distribution. Next, a basic BJM defined in terms of a linear mixed model and a Cox survival regression model is discussed, and some extensions and other Bayesian topics are briefly outlined.
Public-use survey data are an important source of information for researchers in social science and health studies who build statistical models and make inferences about the target finite population. This paper presents two general inferential tools based on the pseudo empirical likelihood and the sample empirical likelihood methods. Theoretical results on point estimation and on linear or nonlinear hypothesis tests involving parameters defined through estimating equations are established, and practical issues in implementing the proposed methods are discussed. Results from simulation studies and an application to the 2016 General Social Survey dataset of Statistics Canada show that the proposed methods work well under different scenarios. The inferential procedures and theoretical results presented in the paper make the empirical likelihood a practically useful tool for users of complex survey data.
Puying Zhao, J. Rao, Changbao Wu. "Empirical likelihood inference with public-use survey data." arXiv: Methodology, 2020-05-25. doi:10.1214/20-ejs1726
Assessing the homogeneity of distributions is an old problem that has received considerable attention, especially in the nonparametric Bayesian literature. To this end, we propose the semi-hierarchical Dirichlet process, a novel hierarchical prior that extends the hierarchical Dirichlet process of Teh et al. (2006) and avoids the degeneracy issues of nested processes recently described by Camerlenghi et al. (2019a). We go beyond a simple yes/no answer to the homogeneity question and embed the proposed prior in a random partition model; this allows us to give a more comprehensive response and, in fact, to find groups of populations that are internally homogeneous when I ≥ 2 such populations are considered. We study theoretical properties of the semi-hierarchical Dirichlet process and of the Bayes factor for the homogeneity test when I = 2. Extensive simulation studies and applications to educational data are also discussed.
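The common building block of such hierarchical Dirichlet process constructions is the stick-breaking representation; a minimal numpy sketch of truncated DP weights (the semi-hierarchical layers themselves are not reproduced here):

```python
import numpy as np

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated stick-breaking weights of a Dirichlet process:
    w_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha).
    Larger alpha spreads mass over more atoms."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w

rng = np.random.default_rng(0)
w = stick_breaking_weights(alpha=2.0, n_atoms=500, rng=rng)
# with 500 atoms the leftover stick mass is negligible, so w sums to ~1
```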
Mario Beraha, A. Guglielmi, F. Quintana. "The Semi-Hierarchical Dirichlet Process and Its Application to Clustering Homogeneous Distributions." arXiv: Methodology, 2020-05-20. doi:10.1214/21-ba1278