Finding the Optimal Number of Persons (N) and Time Points (T) for Maximal Power in Dynamic Longitudinal Models Given a Fixed Budget
Martin Hecht, Julia-Kim Walther, Manuel Arnold, Steffen Zitzmann
Pub Date: 2023-08-22 | DOI: 10.1080/10705511.2023.2230520
Abstract
Planning longitudinal studies can be challenging, as various design decisions need to be made. Often, researchers are in search of the optimal design that maximizes statistical power to test certain parameters of the employed model. We provide a user-friendly Shiny app, OptDynMo, available at https://shiny.psychologie.hu-berlin.de/optdynmo, that helps to find the optimal number of persons (N) and the optimal number of time points (T) for which the power of the likelihood ratio test (LRT) for a model parameter is maximal given a fixed budget for conducting the study. The total cost of the study is computed from two components: the cost of including one person in the study and the cost of measuring one person at one time point. Currently supported models are the cross-lagged panel model (CLPM), the factor CLPM, the random intercept cross-lagged panel model (RI-CLPM), the stable trait autoregressive trait and state (STARTS) model, the latent curve model with structured residuals (LCM-SR), the autoregressive latent trajectory (ALT) model, and the latent change score (LCS) model.
{"title":"Finding the Optimal Number of Persons (N) and Time Points (T) for Maximal Power in Dynamic Longitudinal Models Given a Fixed Budget","authors":"Martin Hecht, Julia-Kim Walther, Manuel Arnold, Steffen Zitzmann","doi":"10.1080/10705511.2023.2230520","DOIUrl":"https://doi.org/10.1080/10705511.2023.2230520","url":null,"abstract":"<p><b>Abstract</b></p><p>Planning longitudinal studies can be challenging as various design decisions need to be made. Often, researchers are in search for the optimal design that maximizes statistical power to test certain parameters of the employed model. We provide a user-friendly Shiny app OptDynMo available at https://shiny.psychologie.hu-berlin.de/optdynmo that helps to find the optimal number of persons (<i>N</i>) and the optimal number of time points (<i>T</i>) for which the power of the likelihood ratio test (LRT) for a model parameter is maximal given a fixed budget for conducting the study. The total cost of the study is computed from two components: the cost to include one person in the study and the cost for measuring one person at one time point. Currently supported models are the cross-lagged panel model (CLPM), factor CLPM, random intercepts cross-lagged panel model (RI-CLPM), stable trait autoregressive trait and state model (STARTS), latent curve model with structured residuals (LCM-SR), autoregressive latent trajectory model (ALT), and the latent change score model (LCS).</p>","PeriodicalId":21964,"journal":{"name":"Structural Equation Modeling: A Multidisciplinary Journal","volume":"17 4","pages":""},"PeriodicalIF":6.0,"publicationDate":"2023-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50165106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Label Switching in Latent Class Analysis: Accuracy of Classification, Parameter Estimates, and Confidence Intervals
Meng Qiu, Ke-Hai Yuan
Pub Date: 2023-08-14 | DOI: 10.1080/10705511.2023.2213842
Abstract
Latent class analysis (LCA) is a widely used technique for detecting unobserved population heterogeneity in cross-sectional data. Despite its popularity, the performance of LCA is not well understood. In this study, we evaluate the performance of LCA with binary data by examining classification accuracy, parameter estimation accuracy, and coverage rates of confidence intervals (CIs) through Monte Carlo simulation studies. We address the issue of label switching with a distance-based relabeling approach and introduce an index to measure separation among latent classes. Our results show that classification accuracy, parameter estimation accuracy, and CI coverage rates are primarily influenced by class separation and the number of indicators used for LCA. We recommend using a large sample size to mitigate the effects of very small class sizes. Additionally, the study finds that parametric bootstrap CIs perform comparably to, or better than, CIs based on the standard maximum likelihood method.
{"title":"Label Switching in Latent Class Analysis: Accuracy of Classification, Parameter Estimates, and Confidence Intervals","authors":"Meng Qiu, Ke-Hai Yuan","doi":"10.1080/10705511.2023.2213842","DOIUrl":"https://doi.org/10.1080/10705511.2023.2213842","url":null,"abstract":"<p><b>Abstract</b></p><p>Latent class analysis (LCA) is a widely used technique for detecting unobserved population heterogeneity in cross-sectional data. Despite its popularity, the performance of LCA is not well understood. In this study, we evaluate the performance of LCA with binary data by examining classification accuracy, parameter estimation accuracy, and coverage rates of confidence intervals (CIs) through Monte Carlo simulation studies. We address the issue of label switching with a distance-based relabeling approach and introduce an index to measure separation among latent classes. Our results show that classification accuracy, parameter estimation accuracy, and CI coverage rates are primarily influenced by class separation and the number of indicators used for LCA. We recommend using a large sample size to mitigate the effects of tiny class sizes. Additionally, the study finds that the parametric bootstrap CIs perform comparably well or better when compared with the CIs based on the standard maximum likelihood method.</p>","PeriodicalId":21964,"journal":{"name":"Structural Equation Modeling: A Multidisciplinary Journal","volume":"91 1","pages":""},"PeriodicalIF":6.0,"publicationDate":"2023-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50165357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Observation Timing Variability to Understand Intervention Effects in Panel Studies: An Empirical Illustration and Simulation Study
Andrea Hasl, Manuel Voelkle, Charles Driver, Julia Kretschmann, Martin Brunner
Pub Date: 2023-07-28 | DOI: 10.1080/10705511.2023.2224515
Abstract
To examine developmental processes, intervention effects, or both, longitudinal studies often aim to include measurement intervals that are equally spaced for all participants. In reality, however, this goal is hardly ever met. Although different approaches have been proposed to deal with this issue, few studies have investigated the potential benefits of individual variation in time intervals. In the present paper, we examine how continuous-time dynamic models can be used to study nonexperimental intervention effects in longitudinal studies where measurement intervals vary between and within participants. We empirically illustrate this method using panel data (N = 2,877) to study the effect of the transition from primary to secondary school on students’ motivation. Results of a simulation study also show that the precision and recovery of the effect estimate improve with individual variation in time intervals.
{"title":"Leveraging Observation Timing Variability to Understand Intervention Effects in Panel Studies: An Empirical Illustration and Simulation Study","authors":"Andrea Hasl, Manuel Voelkle, Charles Driver, Julia Kretschmann, Martin Brunner","doi":"10.1080/10705511.2023.2224515","DOIUrl":"https://doi.org/10.1080/10705511.2023.2224515","url":null,"abstract":"<p><b>Abstract</b></p><p>To examine developmental processes, intervention effects, or both, longitudinal studies often aim to include measurement intervals that are equally spaced for all participants. In reality, however, this goal is hardly ever met. Although different approaches have been proposed to deal with this issue, few studies have investigated the potential benefits of individual variation in time intervals. In the present paper, we examine how continuous time dynamic models can be used to study nonexperimental intervention effects in longitudinal studies where measurement intervals vary between and within participants. We empirically illustrate this method by using panel data (<i>N</i> = 2,877) to study the effect of the transition from primary to secondary school on students’ motivation. Results of a simulation study also show that the precision and recovery of the estimate of the effect improves with individual variation in time intervals.</p>","PeriodicalId":21964,"journal":{"name":"Structural Equation Modeling: A Multidisciplinary Journal","volume":"86 3","pages":""},"PeriodicalIF":6.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50165386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Fit Index Cutoffs for Hierarchical and Second-Order Factor Models
Daniel McNeish, Patrick D. Manapat
Pub Date: 2023-07-28 | DOI: 10.1080/10705511.2023.2225132
Abstract
A recent review found that 11% of published factor models are hierarchical models with second-order factors. However, dedicated recommendations for evaluating hierarchical model fit have yet to emerge. Traditional benchmarks like RMSEA <0.06 or CFI >0.95 are often consulted, but they were never intended to generalize to hierarchical models. Through simulation, we show that traditional benchmarks perform poorly at identifying misspecification in hierarchical models. This corroborates previous studies showing that traditional benchmarks do not maintain optimal sensitivity to misspecification as model characteristics deviate from those used to derive the benchmarks. Instead, we propose a hierarchical extension to the dynamic fit index (DFI) framework, which automates custom simulations to derive cutoffs with optimal sensitivity for specific model characteristics. In simulations to evaluate performance, results showed that the hierarchical DFI extension routinely exceeded 95% classification accuracy and 90% sensitivity to misspecification, whereas traditional benchmarks applied to hierarchical models rarely exceeded 50% classification accuracy and 20% sensitivity.
{"title":"Dynamic Fit Index Cutoffs for Hierarchical and Second-Order Factor Models","authors":"Daniel McNeish, Patrick D. Manapat","doi":"10.1080/10705511.2023.2225132","DOIUrl":"https://doi.org/10.1080/10705511.2023.2225132","url":null,"abstract":"<p><b>Abstract</b></p><p>A recent review found that 11% of published factor models are hierarchical models with second-order factors. However, dedicated recommendations for evaluating hierarchical model fit have yet to emerge. Traditional benchmarks like RMSEA <0.06 or CFI >0.95 are often consulted, but they were never intended to generalize to hierarchical models. Through simulation, we show that traditional benchmarks perform poorly at identifying misspecification in hierarchical models. This corroborates previous studies showing that traditional benchmarks do not maintain optimal sensitivity to misspecification as model characteristics deviate from those used to derive the benchmarks. Instead, we propose a hierarchical extension to the dynamic fit index (DFI) framework, which automates custom simulations to derive cutoffs with optimal sensitivity for specific model characteristics. In simulations to evaluate performance, results showed that the hierarchical DFI extension routinely exceeded 95% classification accuracy and 90% sensitivity to misspecification whereas traditional benchmarks applied to hierarchical models rarely exceeded 50% classification accuracy and 20% sensitivity.</p>","PeriodicalId":21964,"journal":{"name":"Structural Equation Modeling: A Multidisciplinary Journal","volume":"87 4","pages":""},"PeriodicalIF":6.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50165383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian Inference of Dynamic Mediation Models for Longitudinal Data
Saijun Zhao, Zhiyong Zhang, Hong Zhang
Pub Date: 2023-07-28 | DOI: 10.1080/10705511.2023.2230519
Abstract
Mediation analysis is widely applied in various fields of science, such as psychology, epidemiology, and sociology. In practice, many psychological and behavioral phenomena are dynamic, and the corresponding mediation effects are expected to change over time. However, most existing mediation methods assume a mediation effect that is static over time, which overlooks its dynamic nature. To address this issue, we propose dynamic mediation models that can capture the dynamic nature of the mediation effect. Specifically, we model the path parameters of mediation models as autoregressive (AR) processes, allowing them to vary over time. Additionally, we define the mediation effect under the potential outcome framework and examine its identification and causal interpretation. Bayesian methods utilizing Gibbs sampling are adopted to estimate the unknown parameters of the proposed dynamic mediation models. We further evaluate the proposed models and methods through extensive simulations and illustrate them with a real data example.
{"title":"Bayesian Inference of Dynamic Mediation Models for Longitudinal Data","authors":"Saijun Zhao, Zhiyong Zhang, Hong Zhang","doi":"10.1080/10705511.2023.2230519","DOIUrl":"https://doi.org/10.1080/10705511.2023.2230519","url":null,"abstract":"<p><b>Abstract</b></p><p>Mediation analysis is widely applied in various fields of science, such as psychology, epidemiology, and sociology. In practice, many psychological and behavioral phenomena are dynamic, and the corresponding mediation effects are expected to change over time. However, most existing mediation methods assume a static mediation effect over time, which overlooks the dynamic nature of mediation effect. To address this issue, we propose dynamic mediation models that can capture the dynamic nature of the mediation effect. Specifically, we model the path parameters of mediation models as auto-regressive (AR) processes of time that can vary over time. Additionally, we define the mediation effect under the potential outcome framework, and examine its identification and causal interpretation. Bayesian methods utilizing Gibbs sampling are adopted to estimate unknown parameters in the proposed dynamic mediation models. We further evaluate our proposed models and methods through extensive simulations and illustrate their application through a real data application.</p>","PeriodicalId":21964,"journal":{"name":"Structural Equation Modeling: A Multidisciplinary Journal","volume":"87 3","pages":""},"PeriodicalIF":6.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50165384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Ignoring Cross-loadings on the Sensitivity of Fit Measures in Measurement Invariance Testing
Chunhua Cao, Xinya Liang
Pub Date: 2023-07-28 | DOI: 10.1080/10705511.2023.2223360
Abstract
Cross-loadings are common in multiple-factor confirmatory factor analysis (CFA) but are often ignored in measurement invariance testing. This study examined the impact of ignoring cross-loadings on the sensitivity of fit measures (CFI, RMSEA, SRMR, SRMRu, AIC, BIC, SaBIC, LRT) to measurement noninvariance. The manipulated design factors included the magnitude and percentage of cross-loadings, the magnitude and percentage of noninvariance, the location of measurement noninvariance, model size, and sample size. Results suggested that ignored cross-loadings affected the sensitivity of all fit measures except the LRT to metric noninvariance to varying degrees, whereas they did not affect the sensitivity of fit measures to scalar noninvariance, except for RMSEA. RMSEA was affected by the magnitude of cross-loadings in both metric and scalar invariance testing. In the largest model size, CFI failed to detect metric noninvariance when there were no cross-loadings in the population model but detected metric noninvariance of .30 when cross-loadings were ignored.
{"title":"The Impact of Ignoring Cross-loadings on the Sensitivity of Fit Measures in Measurement Invariance Testing","authors":"Chunhua Cao, Xinya Liang","doi":"10.1080/10705511.2023.2223360","DOIUrl":"https://doi.org/10.1080/10705511.2023.2223360","url":null,"abstract":"<p><b>Abstract</b></p><p>Cross-loadings are common in multiple-factor confirmatory factor analysis (CFA) but often ignored in measurement invariance testing. This study examined the impact of ignoring cross-loadings on the sensitivity of fit measures (CFI, RMSEA, SRMR, SRMRu, AIC, BIC, SaBIC, LRT) to measurement noninvariance . The manipulated design factors included the magnitude and percentage of cross-loadings, the magnitude and percentage of noninvariance, location of measurement noninvariance, model size, and sample size. Results suggested that the ignored cross-loadings affected the sensitivity of all fit measures but LRT to metric noninvariance to varying degrees, whereas they did not affect the sensitivity of fit measures to scalar noninvariance except for RMSEA. RMSEA was impacted by the magnitude of cross-loadings in both metric and scalar invariance testing. In the largest model size, CFI failed to detect metric noninvariance when there were no cross-loadings in the population model but detected the metric noninvariance of .30 with ignored cross-loadings.</p>","PeriodicalId":21964,"journal":{"name":"Structural Equation Modeling: A Multidisciplinary Journal","volume":"87 2","pages":""},"PeriodicalIF":6.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50165385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Revisiting Savalei’s (2011) Research on Remediating Zero-Frequency Cells in Estimating Polychoric Correlations: A Data Distribution Perspective
Tong-Rong Yang, Li-Jen Weng
Pub Date: 2023-07-14 | DOI: 10.1080/10705511.2023.2220919
Abstract
In Savalei’s (2011; https://doi.org/10.1080/10705511.2011.557339) simulation evaluating the performance of polychoric correlation estimates in small samples, two methods for treating zero-frequency cells were compared: adding 0.5 (ADD) and doing nothing (NONE). Savalei tentatively suggested using ADD for binary data and NONE for data with three or more categories. Yet Savalei’s suggestion could also be explained by the skewness of the data distribution, which was severe for the binary data and slight for the three-category data. To rule out this alternative explanation, we extended Savalei’s design by incorporating the degree of skewness into our simulation. With slightly skewed data, NONE is recommended due to its high-quality estimates. With severely skewed data, ADD is recommended only for binary data, when the skewness of the two variables has the same sign and the underlying correlation is expected to be strong. Methods for improving polychoric correlation estimates with severely skewed data merit further study.
The SEM Reliability Paradox in a Bayesian Framework
Timothy R. Konold, Elizabeth A. Sanders
Pub Date: 2023-07-14 | DOI: 10.1080/10705511.2023.2220915
Abstract
Within the frequentist structural equation modeling (SEM) framework, adjudicating model quality through measures of fit has been an active area of methodological research. Complicating this conversation is research revealing that, given the same structural misspecifications, a higher-quality measurement portion of an SEM can result in poorer estimates of overall model fit than a lower-quality measurement model. Through population analysis and Monte Carlo simulation, we extend this earlier research to recently developed Bayesian SEM measures of fit to evaluate whether these indices are susceptible to the same reliability paradox, in the context of both uninformative and informative priors. Our results show that the reliability paradox occurs for RMSEA and, to some extent, gamma-hat and PPP (measures of absolute fit), but not for CFI or TLI (measures of relative fit), across Bayesian (MCMC) and frequentist (maximum likelihood) SEM frameworks alike. Taken together, these findings indicate that the behavior of these newly adapted Bayesian fit indices maps closely onto that of their frequentist analogs. Implications for their utility in identifying incorrectly specified models are discussed.
{"title":"The SEM Reliability Paradox in a Bayesian Framework","authors":"Timothy R. Konold, Elizabeth A. Sanders","doi":"10.1080/10705511.2023.2220915","DOIUrl":"https://doi.org/10.1080/10705511.2023.2220915","url":null,"abstract":"<p><b>Abstract</b></p><p>Within the frequentist structural equation modeling (SEM) framework, adjudicating model quality through measures of fit has been an active area of methodological research. Complicating this conversation is research revealing that a higher quality measurement portion of a SEM can result in poorer estimates of overall model fit than lower quality measurement models, given the same structural misspecifications. Through population analysis and Monte Carlo simulation, we extend the earlier research to recently developed Bayesian SEM measures of fit to evaluate whether these indices are susceptible to the same reliability paradox, in the context of using both uninformative and informative priors. Our results show that the reliability paradox occurs for RMSEA, and to some extent, gamma-hat and PPP (measures of absolute fit); but not CFI or TLI (measures of relative fit), across Bayesian (MCMC) and frequentist (maximum likelihood) SEM frameworks alike. Taken together, these findings indicate that the behavior of these newly adapted Bayesian fit indices map closely to their frequentist analogs. Implications for their utility in identifying incorrectly specified models are discussed.</p>","PeriodicalId":21964,"journal":{"name":"Structural Equation Modeling: A Multidisciplinary Journal","volume":"23 19","pages":""},"PeriodicalIF":6.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50165608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}