Pub Date: 2022-01-02. DOI: 10.1080/08957347.2022.2034825
Michele B. Carney, Katie Paulding, Joe Champion
ABSTRACT Teachers need ways to efficiently assess students’ cognitive understanding. One promising approach involves easily adapted and administered item types that yield quantitative scores that can be interpreted in terms of whether students likely possess key understandings. This study illustrates an approach to analyzing response process validity evidence from item types for assessing two important aspects of proportional reasoning. Data include results from an interview protocol used with 33 middle school students to compare their responses to prototypical item types with their conceptions of composed unit and multiplicative comparison. The findings provide validity evidence in support of the score interpretations for the item types but also detail important item specifications and caveats. Discussion includes recommendations for extending this line of research to examine response process validity evidence in support of claims related to cognitive interpretations of scores for other key mathematical conceptions.
{"title":"Efficient Assessment of Students’ Proportional Reasoning","authors":"Michele B. Carney, Katie Paulding, Joe Champion","doi":"10.1080/08957347.2022.2034825","DOIUrl":"https://doi.org/10.1080/08957347.2022.2034825","url":null,"abstract":"ABSTRACT Teachers need ways to efficiently assess students’ cognitive understanding. One promising approach involves easily adapted and administered item types that yield quantitative scores that can be interpreted in terms of whether or not students likely possess key understandings. This study illustrates an approach to analyzing response process validity evidence from item types for assessing two important aspects of proportional reasoning. Data include results from an interview protocol used with 33 middle school students to compare their responses to prototypical item types to their conceptions of composed unit and multiplicative comparison. The findings provide validity evidence in support of the score interpretations for the item types but also detail important item specifications and caveats. Discussion includes recommendations for extending the research for examining response process validity evidence in support of claims related to cognitive interpretations of scores for other key mathematical conceptions.","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"35 1","pages":"46 - 62"},"PeriodicalIF":1.5,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43149266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-02. DOI: 10.1080/08957347.2022.2034824
Daniel Katz, A. Huggins-Manley, Walter L. Leite
ABSTRACT According to the Standards for Educational and Psychological Testing (2014), one aspect of test fairness concerns examinees having comparable opportunities to learn prior to taking tests. Meanwhile, many researchers are developing platforms enhanced by artificial intelligence (AI) that can personalize curriculum to individual student needs. This leads to a larger overarching question: When personalized learning leads to students having differential exposure to curriculum throughout the K-12 school year, how might this affect test fairness with respect to summative, end-of-year high-stakes tests? As a first step, we traced the differences in content exposure associated with personalized learning and more traditional learning paths. To better understand the implications of differences in content coverage, we conducted a simulation study to evaluate the degree to which curriculum exposure varied across students in a particular AI-enhanced learning platform for Algebra instruction with high-school students. Results indicate that AI-enhanced personalized learning may threaten test fairness, understood as opportunity to learn, on K-12 summative high-stakes tests. We discuss the implications from different perspectives on the role of testing in education.
{"title":"Personalized Online Learning, Test Fairness, and Educational Measurement: Considering Differential Content Exposure Prior to a High Stakes End of Course Exam","authors":"Daniel Katz, A. Huggins-Manley, Walter L. Leite","doi":"10.1080/08957347.2022.2034824","DOIUrl":"https://doi.org/10.1080/08957347.2022.2034824","url":null,"abstract":"ABSTRACT According to the Standards for Educational and Psychological Testing (2014), one aspect of test fairness concerns examinees having comparable opportunities to learn prior to taking tests. Meanwhile, many researchers are developing platforms enhanced by artificial intelligence (AI) that can personalize curriculum to individual student needs. This leads to a larger overarching question: When personalized learning leads to students having differential exposure to curriculum throughout the K-12 school year, how might this affect test fairness with respect to summative, end-of-year high-stakes tests? As a first step, we traced the differences in content exposure associated with personalized learning and more traditional learning paths. To better understand the implications of differences in content coverage, we conducted a simulation study to evaluate the degree to which curriculum exposure varied across students in a particular AI-enhanced learning platform for Algebra instruction with high-school students. Results indicate that AI-enhanced personalized learning may pose threats to test fairness as opportunity-to-learn on K-12 summative high-stakes tests. We discuss the implications given different perspectives of the role of testing in education","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"35 1","pages":"1 - 16"},"PeriodicalIF":1.5,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48261491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-02. DOI: 10.1080/08957347.2022.2034822
Thijmen van Alphen, S. Jak, Joost Jansen in de Wal, J. Schuitema, T. Peetsma
ABSTRACT Intensive longitudinal data are increasingly used to study state-like processes such as changes in daily stress. Measures aimed at collecting such data require the same level of scrutiny regarding scale reliability as traditional questionnaires. The most prevalent methods used to assess the reliability of intensive longitudinal measures are based on generalizability theory or a multilevel factor analytic approach. However, recent improvements made to the factor analytic approach may not be straightforward for all researchers to apply. Therefore, this article illustrates a five-step approach for determining the reliability of daily data, which is one type of intensive longitudinal data. First, we show how the proposed reliability equations are applied. Next, we illustrate how these equations are used as part of our five-step approach with empirical data originating from a study investigating changes in the daily stress of secondary school teachers. The results are a within-level (ωw) and a between-level (ωb) reliability score. Mplus syntax for these examples is included and discussed. As such, this paper addresses the need for comprehensive guides to the analysis of daily data.
{"title":"Determining Reliability of Daily Measures: An Illustration with Data on Teacher Stress","authors":"Thijmen van Alphen, S. Jak, Joost Jansen in de Wal, J. Schuitema, T. Peetsma","doi":"10.1080/08957347.2022.2034822","DOIUrl":"https://doi.org/10.1080/08957347.2022.2034822","url":null,"abstract":"ABSTRACT Intensive longitudinal data is increasingly used to study state-like processes such as changes in daily stress. Measures aimed at collecting such data require the same level of scrutiny regarding scale reliability as traditional questionnaires. The most prevalent methods used to assess reliability of intensive longitudinal measures are based on the generalizability theory or a multilevel factor analytic approach. However, the application of recent improvements made for the factor analytic approach may not be readily applicable for all researchers. Therefore, this article illustrates a five-step approach for determining reliability of daily data, which is one type of intensive longitudinal data. First, we show how the proposed reliability equations are applied. Next, we illustrate how these equations are used as part of our five-step approach with empirical data, originating from a study investigating changes in daily stress of secondary school teachers. The results are a within-level (ωw), between-level (ωb) reliability score. Mplus syntax for these examples is included and discussed. As such, this paper anticipates on the need for comprehensive guides for the analysis of daily data.","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"35 1","pages":"63 - 79"},"PeriodicalIF":1.5,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47889298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-02. DOI: 10.1080/08957347.2021.1987905
B. Perkins, D. Pastor, S. Finney
ABSTRACT When tests are low stakes for examinees, meaning there are little to no personal consequences associated with test results, some examinees put little effort into their performance. To understand the causes and consequences of diminished effort, researchers correlate test-taking effort with other variables, such as test-taking emotions and test performance. Most studies correlate examinees’ overall level of test-taking effort with other variables, with fewer studies considering variables related to changing effort levels during testing. To understand whether fluctuations in effort during testing relate to fluctuations in emotions, we collected effort and emotions (anger, boredom, emotionality, enjoyment, pride, worry) data from 768 university students three times during a low-stakes institutional accountability test. Examinees varied greatly in their average levels of effort and emotions; relatively less within-examinee variability in these variables was observed over the course of testing. Average levels of emotions were predictive of effort, but fluctuations in emotions during testing were not. Our findings indicate that researchers should consider the proportions of intraindividual and interindividual variability in effort when considering predictors of test-taking effort.
{"title":"Between- versus Within-Examinee Variability in Test-Taking Effort and Test Emotions during a Low-Stakes Test","authors":"B. Perkins, D. Pastor, S. Finney","doi":"10.1080/08957347.2021.1987905","DOIUrl":"https://doi.org/10.1080/08957347.2021.1987905","url":null,"abstract":"ABSTRACT When tests are low stakes for examinees, meaning there are little to no personal consequences associated with test results, some examinees put little effort into their performance. To understand the causes and consequences of diminished effort, researchers correlate test-taking effort with other variables, such as test-taking emotions and test performance. Most studies correlate examinees’ overall level of test-taking effort with other variables, with fewer studies considering variables related to changing effort levels during testing. To understand if fluctuations in effort during testing relate to fluctuations in emotions, we collected effort and emotions (anger, boredom, emotionality, enjoyment, pride, worry) data from 768 university students three times during a low-stakes institutional accountability test. Examinees greatly varied in their average levels of effort and average levels of emotions during testing; relatively less variability was observed in these variables during testing. Average levels of emotions were predictive of effort, but fluctuations in emotions during testing were not. Our findings indicate that researchers should consider the proportion of intraindividual and interindividual variability in effort when considering predictors of test-taking effort.","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"34 1","pages":"285 - 300"},"PeriodicalIF":1.5,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48626074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-02. DOI: 10.1080/08957347.2021.1987906
Roghayeh Mehrazmay, B. Ghonsooly, J. de la Torre
ABSTRACT The present study aims to examine gender differential item functioning (DIF) in the reading comprehension section of a high-stakes test using cognitive diagnosis models. Based on the multiple-group generalized deterministic, noisy “and” gate (MG G-DINA) model, the Wald test and likelihood ratio test are used to detect DIF. The flagged items are further inspected to find the attributes they measure, and the probabilities of correct response are checked across latent profiles to gain insights into the potential reasons for the occurrence of DIF. In addition, attribute and latent class prevalence are examined across males and females. The three items displaying large DIF involve three attributes, namely Vocabulary, Main Idea, and Details. The results indicate that females have lower probabilities of correct response across all latent profiles, and fewer females have mastered all the attributes. Moreover, the findings show that the same attribute mastery profiles are prevalent across genders. Finally, the results of the DIF analysis are used to select models that could replace the complex MG G-DINA without significant loss of information.
{"title":"Detecting Differential Item Functioning Using Cognitive Diagnosis Models: Applications of the Wald Test and Likelihood Ratio Test in a University Entrance Examination","authors":"Roghayeh Mehrazmay, B. Ghonsooly, J. de la Torre","doi":"10.1080/08957347.2021.1987906","DOIUrl":"https://doi.org/10.1080/08957347.2021.1987906","url":null,"abstract":"ABSTRACT The present study aims to examine gender differential item functioning (DIF) in the reading comprehension section of a high stakes test using cognitive diagnosis models. Based on the multiple-group generalized deterministic, noisy “and” gate (MG G-DINA) model, the Wald test and likelihood ratio test are used to detect DIF. The flagged items are further inspected to find the attributes they measure, and the probabilities of correct response are checked across latent profiles to gain insights into the potential reasons for the occurrence of DIF. In addition, attribute and latent class prevalence are examined across males and females. The three items displaying large DIF involve three attributes, namely Vocabulary, Main Idea, and Details. The results indicate that females have lower probabilities of correct response across all latent profiles, and fewer females have mastered all the attributes. Moreover, the findings show that the same attribute mastery profiles are prevalent across genders. Finally, the results of the DIF analysis are used to select models that could replace the complex MG G-DINA without significant loss of information.","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"34 1","pages":"262 - 284"},"PeriodicalIF":1.5,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43501745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-02. DOI: 10.1080/08957347.2021.1987904
R. Feinberg, D. Jurich, S. Wise
ABSTRACT Previous research on rapid responding tends to implicitly consider examinees as either engaging in solution behavior or purely guessing. However, particularly in a high-stakes testing context, examinees perceiving that they are running out of time may consider the remaining items for less time than necessary to provide a fully informed response, but longer than a truly rapid guess. This partial consideration results in a response that misrepresents true ability, but with accuracy above the level of pure chance. To address this limitation of existing methodology, we propose an empirical approach that attempts to disentangle fully and partially informed responses to be used as a preliminary measure of the extent to which speededness may be distorting test score validity. We first illustrate and validate the approach using an experimental dataset in which the amount of time per item was manipulated. Next, applications of this approach are demonstrated using observational data in a more realistic context through four operational exams in which speededness is unknown.
{"title":"Reconceptualizing Rapid Responses as a Speededness Indicator in High-Stakes Assessments","authors":"R. Feinberg, D. Jurich, S. Wise","doi":"10.1080/08957347.2021.1987904","DOIUrl":"https://doi.org/10.1080/08957347.2021.1987904","url":null,"abstract":"ABSTRACT Previous research on rapid responding tends to implicitly consider examinees as either engaging in solution behavior or purely guessing. However, particularly in a high-stakes testing context, examinees perceiving that they are running out of time may consider the remaining items for less time than necessary to provide a fully informed response, but longer than a truly rapid guess. This partial consideration results in a response that misrepresents true ability, but with accuracy above the level of pure chance. To address this limitation of existing methodology, we propose an empirical approach that attempts to disentangle fully and partially informed responses to be used as a preliminary measure of the extent to which speededness may be distorting test score validity. We first illustrate and validate the approach using an experimental dataset in which the amount of time per item was manipulated. Next, applications of this approach are demonstrated using observational data in a more realistic context through four operational exams in which speededness is unknown.","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"34 1","pages":"312 - 326"},"PeriodicalIF":1.5,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45481814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-02. DOI: 10.1080/08957347.2021.1987900
Tuğba Karadavut
ABSTRACT Mixture IRT models address heterogeneity in a population by extracting latent classes and allowing item parameters to vary between latent classes. Once the latent classes are extracted, they need to be examined further so they can be characterized. Several approaches have been adopted in the literature for this purpose; they examine either examinee or item characteristics, conceptually or statistically. In this study, we propose a two-step procedure for characterizing the latent classes. First, a DIF analysis is conducted, using the latent class membership information, to determine the items that function differentially between the latent classes. Then, the characteristics of the items with DIF are examined further and used to characterize the latent classes. We provide an empirical example to illustrate this procedure.
{"title":"Characterizing the Latent Classes in a Mixture IRT Model Using DIF","authors":"Tuğba Karadavut","doi":"10.1080/08957347.2021.1987900","DOIUrl":"https://doi.org/10.1080/08957347.2021.1987900","url":null,"abstract":"ABSTRACT Mixture IRT models address the heterogeneity in a population by extracting latent classes and allowing item parameters to vary between latent classes. Once the latent classes are extracted, they need to be further examined to be characterized. Some approaches have been adopted in the literature for this purpose. These approaches examine either the examinee or the item characteristics conceptually or statistically. In this study, we propose a two-step procedure for characterizing the latent classes. First, a DIF analysis can be conducted to determine the items that function differentially between the latent classes using the latent class membership information. Then, the characteristics of the items with DIF can be further examined to use this information for characterizing the latent classes. We provided an empirical example to illustrate this procedure.","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"34 1","pages":"301 - 311"},"PeriodicalIF":1.5,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46295386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-02. DOI: 10.1080/08957347.2021.1987902
Samuel D Lee, Philip T. Walmsley, P. Sackett, N. Kuncel
ABSTRACT Providing assessment validity information to decision makers in a clear and useful format is an ongoing challenge for the educational and psychological measurement community. We identify issues with a previous approach to a graphical presentation, noting that it is mislabeled as presenting incremental validity, when in fact it displays the effects of using predictors in a multiple hurdle fashion. We offer a straightforward technique for displaying incremental validity among predictors in reference to a criterion measure.
{"title":"A Method for Displaying Incremental Validity with Expectancy Charts","authors":"Samuel D Lee, Philip T. Walmsley, P. Sackett, N. Kuncel","doi":"10.1080/08957347.2021.1987902","DOIUrl":"https://doi.org/10.1080/08957347.2021.1987902","url":null,"abstract":"ABSTRACT Providing assessment validity information to decision makers in a clear and useful format is an ongoing challenge for the educational and psychological measurement community. We identify issues with a previous approach to a graphical presentation, noting that it is mislabeled as presenting incremental validity, when in fact it displays the effects of using predictors in a multiple hurdle fashion. We offer a straightforward technique for displaying incremental validity among predictors in reference to a criterion measure.","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"34 1","pages":"251 - 261"},"PeriodicalIF":1.5,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59806139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-02. DOI: 10.1080/08957347.2021.1987907
K. Geisinger
Three of the papers in this issue consider college admissions testing and a fourth high-stakes testing. I am not entirely sure that there is a more controversial topic today in higher education, ev...
{"title":"The Consideration of Admissions Testing at Colleges and Universities: A Perspective","authors":"K. Geisinger","doi":"10.1080/08957347.2021.1987907","DOIUrl":"https://doi.org/10.1080/08957347.2021.1987907","url":null,"abstract":"Three of the papers in this issue consider college admissions testing and a fourth high-stakes testing. I am not entirely sure that there is a more controversial topic today in higher education, ev...","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"34 1","pages":"237 - 239"},"PeriodicalIF":1.5,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45967835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-02. DOI: 10.1080/08957347.2021.1987903
P. Sackett, M. S. Sharpe, N. Kuncel
ABSTRACT The literature is replete with references to a disproportionate reliance on admission test scores (e.g., the ACT or SAT) in the college admissions process. School-reported reliance on test scores and grades has been used to study this question, generally indicating relatively equal reliance on the two, with a slightly higher endorsement of grades. As an alternative, we develop an empirical index of relative reliance on tests and grades, and compare school-reported estimates with empirical evidence of relative reliance. Using a dataset from 174 U.S. colleges and universities, we examine the degree to which applicants and enrolled students differ on the SAT and on high school GPA in each school, and develop an index of empirical relative reliance on test scores vs. grades. We find that schools tend to select on test scores and high school grades relatively equally, with the empirical reliance index showing slightly more reliance on test scores and school-reported reliance estimates showing slightly more reliance on grades.
{"title":"Comparing School Reports and Empirical Estimates of Relative Reliance on Tests Vs Grades in College Admissions","authors":"P. Sackett, M. S. Sharpe, N. Kuncel","doi":"10.1080/08957347.2021.1987903","DOIUrl":"https://doi.org/10.1080/08957347.2021.1987903","url":null,"abstract":"ABSTRACT The literature is replete with references to a disproportionate reliance on admission test scores (e.g., the ACT or SAT) in the college admissions process. School-reported reliance on test scores and grades has been used to study this question, generally indicating relatively equal reliance on the two, with a slightly higher endorsement of grades. As an alternative, we develop an empirical index of relative reliance on tests and grades, and compare school-reported estimates with empirical evidence of relative reliance. Using a dataset from 174 U.S. colleges and universities, we examine the degree to which applicants and enrolled students differ on the SAT and on high school GPA in each school, and develop an index of empirical relative reliance on test scores vs. grades. We find that schools tend to select on test scores and high school grades relatively equally, with the empirical reliance index showing slightly more reliance on test scores and school-reported reliance estimates showing slightly more reliance on grades.","PeriodicalId":51609,"journal":{"name":"Applied Measurement in Education","volume":"34 1","pages":"240 - 250"},"PeriodicalIF":1.5,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43893869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}