Is the relationship between socioeconomic status (SES) and student achievement causal? Considering student and parent abilities
G. Marks
Pub Date: 2020-11-16 | DOI: 10.1080/13803611.2021.1968442
Educational Research and Evaluation, 26(1), 344–367

ABSTRACT Most studies on the relationship between students’ socioeconomic status (SES) and student achievement assume that its effects are sizable and causal. A large variety of theoretical explanations have been proposed. However, the SES–achievement association may reflect, to some extent, the inter-relationships of parents’ abilities, SES, children’s abilities, and student achievement. The purpose of this study is to quantify the role of SES vis-à-vis child and parent abilities and prior achievement. Analyses of a covariance matrix that includes supplementary correlations for fathers’ and mothers’ abilities derived from the literature indicate that more than half of the SES–achievement association can be accounted for by parents’ abilities. SES coefficients decline further with the addition of the child’s abilities. With the addition of prior achievement, the SES coefficients are trivial, implying that SES has little or no contemporaneous effects. These findings are not compatible with standard theoretical explanations for SES inequalities in achievement.
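The attenuation pattern the abstract describes can be illustrated with standardised regression coefficients computed directly from a correlation matrix. The matrix below is hypothetical, invented for illustration only (it is not the paper's matrix), but the mechanics — the SES coefficient shrinking as parent and child abilities are added as predictors — are the same:

```python
import numpy as np

# Hypothetical correlations among SES, parents' ability (PA), child
# ability (CA), and achievement (ACH) -- illustrative values only,
# NOT the covariance matrix analysed in the study.
names = ["SES", "PA", "CA", "ACH"]
R = np.array([
    [1.00, 0.45, 0.30, 0.35],   # SES
    [0.45, 1.00, 0.50, 0.45],   # PA
    [0.30, 0.50, 1.00, 0.60],   # CA
    [0.35, 0.45, 0.60, 1.00],   # ACH
])

def std_betas(R, predictors, outcome, names=names):
    """Standardised regression coefficients from a correlation matrix:
    beta = Rxx^{-1} rxy, the usual normal-equations solution."""
    ix = [names.index(p) for p in predictors]
    iy = names.index(outcome)
    Rxx = R[np.ix_(ix, ix)]     # predictor intercorrelations
    rxy = R[ix, iy]             # predictor-outcome correlations
    return dict(zip(predictors, np.linalg.solve(Rxx, rxy)))

print(std_betas(R, ["SES"], "ACH"))              # bivariate: beta equals r
print(std_betas(R, ["SES", "PA"], "ACH"))        # SES coefficient shrinks
print(std_betas(R, ["SES", "PA", "CA"], "ACH"))  # shrinks further
```

With these toy numbers the SES coefficient drops from .35 to roughly half that once parents' ability enters, mirroring the study's finding that much of the bivariate association is absorbed by ability measures.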
Methods, understandings, and expressions of causality in educational research
K. Morrison, G. P. van der Werf
Pub Date: 2020-11-16 | DOI: 10.1080/13803611.2021.1991643
Educational Research and Evaluation, 26(1), 339–343

The opening pages of Pearl and Mackenzie’s volume The Book of Why: The New Science of Cause and Effect (2018) herald their captivating romp through causality by referring to a “ladder of causation” (p. 28) that starts from association (by seeing and observing), moves up to intervention (by doing and intervening), and thence to counterfactuals (by imagining, retrospection, and understanding). Each rung of the ladder establishes causality more certainly. Humans think causally. Causality can be studied by many methods. Here, Pearl and Mackenzie (2018) state that statistical analysis does not simply concern data and their methods of analysis; rather, there is a need for an “understanding of the process that produces the data” (p. 85). Such “understanding” comes from introducing causality, as causality yields something additional to the original data. “Methods” of data analysis are informed by an “understanding” of causality, as this Editorial shows. Pearl and Mackenzie write that if we remove the understanding of causation from statistical analysis, all that we are left with is data reduction, which does not tell us much. The papers in this issue move forward from “methods” to “understanding” data with regard to causality. Further, the Editorial indicates how easy it is to find expressions of causality in articles; this should caution researchers to take care in the wording that they use. The Editorial below draws attention to wording by deliberately italicising causal words when quoting from the articles in this issue. For example, is causality really being demonstrated, or, like Pearl and Mackenzie’s lowest rung of the ladder, is there merely association?

Causality, be it post hoc or ante hoc, is self-evidently important in education. However, how we adduce causality is far from straightforward, and the papers in this issue yield insights into, and cautions concerning, claims for, and demonstrations of, causality. The papers here indicate methods, challenges, outcomes, and benefits of studying causality. The challenges of “methods” and “understanding” when investigating causality are legion. Witness, for example, in the perennial search for causality, its differences from association, prediction, explanation, inference, influence, correlation, accounting for, correspondence to, purposiveness, and a whole armoury of other words. Look at the dangers of working with mediating, confounding, and moderating variables, transitivity, or controlling out almost everything such that what remains is very little. Wrestle with underdetermination, overdetermination, supervenience, and the difficulties of mereology. Consider the challenges of probabilistic causation and Bayesian approaches, leavened by multilevel causal modelling. Add to these the context-dense, variable-rich, causally complex world of education, and the attraction of Pearl and Mackenzie’s (2018) “childlike simplicity” (p. 39) of a causal diagram evaporates in front of our eyes. Little wonder it i…
Understanding and using challenging educational theories
Y. Tan
Pub Date: 2020-11-16 | DOI: 10.1080/13803611.2021.1969810
Educational Research and Evaluation, 26(1), 460–462
Does test-taking motivation predict test results in a high-stakes testing context?
G. Silm, O. Must, Karin Täht, M. Pedaste
Pub Date: 2020-11-16 | DOI: 10.1080/13803611.2021.1949355
Educational Research and Evaluation, 26(1), 387–413

ABSTRACT Test-taking motivation (TTM) has been associated with test performance in low-stakes testing contexts. However, there have been few studies about TTM in high-stakes testing contexts, and these have contradictory results. Our aim was to explore the relationship between test-taking effort and test performance in a real-life high-stakes testing context (n = 1,515). We collected time-based and self-reported data about test-taking effort and used a structural equation model (SEM) to predict test performance. We found that the motivational indicators added about 15% of predictive power to the SEM model, where gender and previous performance had been controlled for. Altogether, the SEM model predicted 69% of the variance in test results. We compared the findings to previous studies and concluded that the possible effect of TTM should be considered in various testing contexts, whether low-stakes or high-stakes.
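The "added predictive power" logic in the abstract — compare variance explained before and after adding motivational indicators to a model that already controls for gender and prior performance — can be sketched with a simplified regression analogue. This is not the paper's SEM; the data are simulated and all coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1515  # sample size matching the study; the data below are simulated

prior  = rng.normal(size=n)              # previous performance (standardised)
gender = rng.integers(0, 2, size=n)      # 0/1 indicator
effort = 0.4 * prior + rng.normal(size=n)            # effort, correlated with prior
score  = 0.6 * prior + 0.1 * gender + 0.5 * effort + rng.normal(size=n)

def r_squared(predictors, y):
    """R-squared of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([prior, gender], score)          # controls only
r2_full = r_squared([prior, gender, effort], score)  # + motivational indicator
print(f"controls: {r2_base:.2f}  + effort: {r2_full:.2f}  gain: {r2_full - r2_base:.2f}")
```

The gain `r2_full - r2_base` plays the role of the roughly 15% of additional predictive power the study attributes to the motivational indicators.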
The impact of students’ test-taking effort on growth estimates in low-stakes educational assessments
S. Yildirim-Erbasli, O. Bulut
Pub Date: 2020-11-16 | DOI: 10.1080/13803611.2021.1977152
Educational Research and Evaluation, 26(1), 368–386

ABSTRACT This study investigated the impact of students’ test-taking effort on their growth estimates in reading. The sample consisted of 7,602 students (Grades 1 to 4) in the United States who participated in the fall and spring administrations of a computer-based reading assessment. First, a new response dataset was created by flagging both rapid-guessing and slow-responding behaviours and recoding these non-effortful responses as missing. Second, students’ academic growth (i.e., daily increase in ability levels) from fall to spring was calculated based on their original responses and responses in the new dataset excluding non-effortful responses. The results indicated that students’ growth estimates changed significantly after recoding non-effortful responses as missing. Also, the difference in the growth estimates varied depending on the grade level. Overall, students’ test-taking effort appeared to be influential in the estimation of students’ reading growth. Implications for practice were discussed.
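The two preprocessing steps the abstract describes — flag rapid-guessing and slow-responding behaviour, recode those responses as missing, then compute growth as a daily increase in ability — can be sketched as below. The response-time thresholds and helper names are assumptions for illustration; the study derived its flagging rules from the actual response-time distributions:

```python
import numpy as np

def flag_non_effortful(rt, fast=3.0, slow=120.0):
    """Flag rapid guesses (rt < fast seconds) and slow responding
    (rt > slow seconds). Thresholds here are illustrative placeholders."""
    rt = np.asarray(rt, dtype=float)
    return (rt < fast) | (rt > slow)

def clean_responses(responses, rts):
    """Recode non-effortful responses as missing (NaN)."""
    out = np.asarray(responses, dtype=float).copy()
    out[flag_non_effortful(rts)] = np.nan
    return out

def daily_growth(theta_fall, theta_spring, days_between):
    """Growth as the daily increase in estimated ability."""
    return (theta_spring - theta_fall) / days_between

resp = [1, 0, 1, 1, 0]
rts  = [12.0, 1.5, 45.0, 200.0, 30.0]    # seconds per item
print(clean_responses(resp, rts))         # items 2 and 4 become NaN
print(daily_growth(-0.20, 0.34, 180))     # about 0.003 ability units per day
```

Growth would then be re-estimated from the cleaned response matrix, and the two sets of growth estimates compared, as in the study.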
Differential rapid responding across language and cultural groups
Hongwen Guo, Kadriye Ercikan
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963941
Educational Research and Evaluation, 26(1), 302–327

ABSTRACT Rapid response behaviour, a type of test disengagement, cannot be interpreted as a true indicator of the targeted constructs and may compromise score accuracy as well as score validity for interpretation. Rapid responding may be due to multiple factors for diverse populations. In this study, using Programme for International Student Assessment (PISA) 2018 Science data, we examined the comparability of rapid response behaviours for nine different language and cultural groups. Statistical methods were developed to flag rapid responses on different item types and to test differential rapid responding rates between groups. Results showed that rapid response rates and their association with performance were different among the studied groups. However, regardless of students’ group membership, a rapid response led to a chance correct rate, and it had a negative impact on performance. The results also indicated limited impact of different test engagement on performance ranking for the studied groups.
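The core operations — flagging rapid responses with item-type-specific thresholds and comparing flag rates between groups — can be sketched as follows. The threshold values, item types, and data are all invented for illustration; the study developed its flagging methods from the PISA 2018 response-time data:

```python
import numpy as np

def rapid_flags(rt, item_type, thresholds):
    """Flag responses faster than an item-type-specific threshold.
    `thresholds` maps item type -> seconds; values are illustrative."""
    rt = np.asarray(rt, dtype=float)
    cut = np.array([thresholds[t] for t in item_type])
    return rt < cut

rts = np.array([2.0, 15.0, 4.0, 40.0, 1.0, 25.0])        # seconds per response
types = ["mc", "mc", "cr", "cr", "mc", "cr"]             # multiple-choice / constructed-response
thresholds = {"mc": 3.0, "cr": 5.0}                      # hypothetical cut-offs

flags = rapid_flags(rts, types, thresholds)
groups = np.array(["A", "A", "A", "B", "B", "B"])        # toy language/cultural groups
for g in ("A", "B"):
    print(g, flags[groups == g].mean())                  # per-group rapid-response rate
```

Comparing these per-group rates (with an appropriate test for differences in proportions) is the "differential rapid responding" question the paper addresses.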
Six insights regarding test-taking disengagement
S. Wise
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963942
Educational Research and Evaluation, 26(1), 328–338

ABSTRACT There has been increasing concern about the presence of disengaged test taking in international assessment programmes and its implications for the validity of inferences made regarding a country’s level of educational attainment. This issue has received growing research interest over the past 20 years, with notable advances in both the measurement of disengagement and our understanding of its distortive impact on both individual and aggregated scores. In this paper, I discuss six important insights yielded by this research and their implications for assessment programmes.
Cross-cultural study of test effort in PISA
Gavin T. L. Brown, Kane Meissel
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963943
Educational Research and Evaluation, 26(1), 217–220
Within-item response processes as indicators of test-taking effort and motivation
Erik Lundgren, Hanna Eklöf
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963940
Educational Research and Evaluation, 26(1), 275–301

ABSTRACT The present study used process data from a computer-based problem-solving task as indications of behavioural level of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained and characterised as high effort, medium effort, low effort, and planner. Regression modelling indicated that among students that failed to solve the task, level of effort invested before giving up positively predicted overall test performance. Among students that solved the task, level of effort was instead weakly negatively related to test performance. A low level of behavioural effort before giving up the task was also related to lower self-reported effort. Results suggest that effort invested before giving up provides information about test-takers’ motivation to spend effort on the test. We conclude that process data could augment existing methods of assessing test-taking effort.
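The extract-features-then-cluster pipeline behind the four effort profiles can be sketched with a minimal k-means pass over per-student process features. The feature set, data, and cluster seeding below are invented for illustration (the study's actual variables and clustering procedure may differ):

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Minimal k-means: alternate nearest-centre assignment and
    centre update. A stand-in for the study's clustering step."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical per-student features extracted from task log data:
# [minutes on task, number of actions, long pauses before acting]
X = np.array([
    [1.0,  3, 0], [1.2,  4, 1],      # little activity: "low effort"
    [6.0, 25, 3], [6.5, 28, 4],      # sustained activity: "high effort"
    [3.0, 12, 2], [3.4, 14, 2],      # "medium effort"
    [7.0, 10, 8], [6.8,  9, 7],      # long time, few actions: "planner"
])
# Seed one centre per expected profile to keep the toy demo deterministic.
labels = kmeans(X, X[[0, 2, 4, 6]])
print(labels)   # adjacent pairs of rows share a label
```

In the study, each student's cluster label then becomes a behavioural effort indicator that can be related to test performance and self-reported effort.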
How does the number of actions on constructed-response items relate to test-taking effort and performance?
M. Ivanova, M. Michaelides, Hanna Eklöf
Pub Date: 2020-08-17 | DOI: 10.1080/13803611.2021.1963939
Educational Research and Evaluation, 26(1), 252–274

ABSTRACT Collecting process data in computer-based assessments provides opportunities to describe examinee behaviour during a test-taking session. The number of actions taken by students while interacting with an item is in this context a variable that has been gaining attention. The present study aims to investigate how the number of actions performed on constructed-response items relates to self-reported effort, performance, and item cluster position in the test. The theory of planned behaviour was used as an interpretative framework. Data from two item clusters of the 2015 Swedish Programme for International Student Assessment (PISA) Science administration were utilised. Results showed that the number of actions was significantly related to performance on the items, self-reported test-taking effort, and cluster position. Latent variable models were examined separately for performance-level groups. Overall, the number of actions performed on constructed-response items as a behavioural indicator in testing situations may be useful in gauging test-taking engagement.