Pub Date: 2020-06-04 | DOI: 10.1080/10627197.2020.1766957
Christine J. Lyon, Leslie Nabors Oláh, Meghan W. Brenneman
ABSTRACT There is currently a lack of instrumentation of sufficient technical quality focused on the implementation of classroom formative assessment at a grain size appropriate for providing feedback to teachers and program developers. This paper details the development of a validity argument for the High-Impact Classroom Assessment Practices observation protocol (HI-CAP) and examines how to begin evaluating one inference in the interpretive argument. We present a conceptual framework for the HI-CAP and then articulate the interpretive argument. Finally, we present evidence to evaluate one part of this argument, the scoring inference, drawing on independent ratings from pairs of observers across 65 lessons in ninth-grade ELA and mathematics; the ratings suggest modest evidence of appropriateness and consistency and preliminary support for the scoring model. We conclude with a discussion of the strengths and limitations of the current protocol and training procedures, along with implications for developing validity arguments for other similar protocols.
{"title":"A Formative Assessment Observation Protocol to Measure Implementation: Evaluating the Scoring Inference","authors":"Christine J. Lyon, Leslie Nabors Oláh, Meghan W. Brenneman","doi":"10.1080/10627197.2020.1766957","DOIUrl":"https://doi.org/10.1080/10627197.2020.1766957","url":null,"abstract":"ABSTRACT There is currently a lack of instrumentation with sufficient technical quality focused on the implementation of classroom formative assessment at a grain-size appropriate for the provision of feedback to teachers and program developers. This paper details the development of a validity argument for the High-Impact Classroom Assessment Practices observation protocol (HI-CAP), and examines how to begin evaluating one inference in the interpretative argument. We present a conceptual framework for the HI-CAP and then articulate the interpretive argument. Finally we present evidence to evaluate one part of this argument, the scoring inference, using independent ratings of lessons from pairs of observers across 65 lessons in ninth-grade ELA and mathematics which suggest modest evidence for appropriateness, consistency, and preliminary evidence supporting the scoring model. We conclude with a discussion of the strengths and limitations of the current protocol and training procedures and implications for developing a validity argument for other similar protocols.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"288 - 313"},"PeriodicalIF":1.5,"publicationDate":"2020-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2020.1766957","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48127363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-05-24 | DOI: 10.1080/10627197.2020.1766960
L. Cagasan, E. Care, Pamela Robertson, R. Luo
ABSTRACT This paper explores ways of capturing teachers’ formative assessment behaviors in Philippine classrooms through an observation tool. Early versions of the tool were structured using the ‘Elicit-Student response-Recognize-Use’ (ESRU) model. To account for the practices actually observed in classrooms, the tool was revised to focus on the Elicit (E) and Use (U) components. Both cultural and physical factors that characterize the Philippine classroom were considered to help ensure that the observation tool would reflect current classroom practices. Data from the tool are envisioned to inform the Philippines’ Department of Education as it embarks on the development of teacher competencies in formative assessment. The final version of the tool captures basic practices in a reliable way and provides a model of increasing competency in formative assessment implementation that can be used to design teacher training modules and professional development.
{"title":"Developing a Formative Assessment Protocol to Examine Formative Assessment Practices in the Philippines","authors":"L. Cagasan, E. Care, Pamela Robertson, R. Luo","doi":"10.1080/10627197.2020.1766960","DOIUrl":"https://doi.org/10.1080/10627197.2020.1766960","url":null,"abstract":"ABSTRACT This paper explores ways of capturing teachers’ formative assessment behaviors in Philippine classrooms through an observation tool. Early versions of the tool were structured using the ‘Elicit-Student response-Recognize-Use’ ESRU model. To account for the practices observed in the classroom, the observation tool was resituated to focus on Elicit (E) and Use (U) components. Both cultural and physical factors that characterize the Philippine classroom were considered to help ensure that the observation tool would reflect current practices in classrooms. Data from the tool are envisioned to inform the Philippines’ Department of Education as they embark on the development of teacher competencies in formative assessment. The final version of the tool captures the basic practices in a reliable way. The tool provides a model of increasing competency in formative assessment implementation that can be used to design teacher training modules and for professional development.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"259 - 275"},"PeriodicalIF":1.5,"publicationDate":"2020-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2020.1766960","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43227732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-05-24 | DOI: 10.1080/10627197.2020.1766958
J. Figa, W. Tarekegne, Mekuria Abebe Kebede
ABSTRACT The purpose of this paper is to examine the practice of formative assessment in West Arsi zone secondary schools in Ethiopia. A descriptive cross-sectional survey design was employed. The study participants were secondary school supervisors, principals, teachers, and students. Questionnaires, interviews, observations, and document analysis were used to gather data. The results revealed that secondary school teachers only sometimes communicate learning objectives to students, integrate formative assessment strategies, or provide formative feedback, with considerable variation in practice. The results further showed that a lack of instructional materials, the absence of laboratory equipment and/or technicians, limited teacher competencies, large class sizes, and a shortage of instructional time were the major challenges to implementing formative assessment.
{"title":"The Practice of Formative Assessment in Ethiopian Secondary School Curriculum Implementation: The Case of West Arsi Zone Secondary Schools","authors":"J. Figa, W. Tarekegne, Mekuria Abebe Kebede","doi":"10.1080/10627197.2020.1766958","DOIUrl":"https://doi.org/10.1080/10627197.2020.1766958","url":null,"abstract":"ABSTRACT The purpose of this paper is to examine the practices of formative assessment in the West Arsi zone secondary schools in Ethiopia. A descriptive cross-sectional survey research design was employed. The study participants were secondary school supervisors, principals, teachers, and students. Questionnaires, interviews, observations, and document analysis were used to gather data. The results revealed that secondary school teachers sometimes communicate learning objectives for students, sometimes integrate formative assessment strategies, and sometimes provide formative feedback, with considerable variation of practices. The results have further shown that lack of instructional materials, absence of laboratory equipment and/or technicians, lack of teachers’ competencies, large class size, and shortage of instructional time were the major challenges to formative assessment implementation.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":" 50","pages":"276 - 287"},"PeriodicalIF":1.5,"publicationDate":"2020-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2020.1766958","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41252115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-02 | DOI: 10.1080/10627197.2020.1756255
Matthew Wilsey, Matthew Kloser, H. Borko, Stephanie Rafanelli
ABSTRACT Classroom assessment and the use of student performance data to inform instructional decisions have significant potential to help students meet the learning goals of science education. Research has shown that process-oriented assessment practices are challenging and sometimes neglected aspects of teaching, partly because teachers’ conceptions of assessment practice do not reflect a cycle of assessment that continually informs instruction. This study explores middle school science teachers’ conceptions of assessment practice based on drawn conceptual models and interviews gathered as part of a year-long professional development (PD) intervention. Results indicate that participants initially conceived of assessment practice in terms of tangible elements. Across the PD, however, several teachers developed more iterative conceptions in which frequent assessment was used to inform future instruction. These findings raise important questions about how PD can most effectively support teachers’ adoption of research-based conceptions of quality science assessment practice.
{"title":"Middle School Science Teachers’ Conceptions of Assessment Practice Throughout a Year-long Professional Development Experience","authors":"Matthew Wilsey, Matthew Kloser, H. Borko, Stephanie Rafanelli","doi":"10.1080/10627197.2020.1756255","DOIUrl":"https://doi.org/10.1080/10627197.2020.1756255","url":null,"abstract":"ABSTRACT Classroom assessment and the use of student performance data to inform instructional decisions have significant potential to help students meet the learning goals of science education. Research has shown that process-oriented assessment practices are challenging and sometimes ignored aspects of teaching, partly because teachers’ conceptions of assessment practice do not reflect a cycle of assessment that continually informs instruction. This study explores middle school science teachers’ conceptions of assessment practice based on drawn conceptual models and interviews that were gathered as part of a year-long professional development intervention. Results indicate that participants initially conceived of assessment practice in terms of tangible elements. Changes were seen, however, across the PD as several teachers developed conceptions that were more iterative, in which frequent assessment was used to inform future instruction. These findings raise important questions for how PD can most effectively support teachers’ adoption of research-based conceptions of quality science assessment practices.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"136 - 158"},"PeriodicalIF":1.5,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2020.1756255","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42254757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-02 | DOI: 10.1080/10627197.2020.1756256
Geoffrey Phelps, D. Gitomer, C. Iaconangelo, E. Etkina, L. Seeley, S. Vokos
ABSTRACT Assessments of teacher content knowledge are increasingly designed to provide evidence of the content knowledge needed to carry out the moment-to-moment work of teaching. Often these assessments focus on content knowledge used only in teaching, with the goal of testing distinctly professional forms of content knowledge. In this paper, we argue that while this general approach has produced powerful exemplars of new types of assessment tasks, it has been less successful in producing tests that provide more general evidence of the range of content knowledge associated with particular teaching practices. To illustrate a more systematic approach, we describe the use of evidence-centered design (ECD) to develop an assessment of content knowledge for teaching (CKT) in the energy domain of secondary physics.
{"title":"Developing Assessments of Content Knowledge for Teaching Using Evidence-centered Design","authors":"Geoffrey Phelps, D. Gitomer, C. Iaconangelo, E. Etkina, L. Seeley, S. Vokos","doi":"10.1080/10627197.2020.1756256","DOIUrl":"https://doi.org/10.1080/10627197.2020.1756256","url":null,"abstract":"ABSTRACT Assessments of teacher content knowledge are increasingly designed to provide evidence of the content knowledge needed to carry out the moment-to-moment work of teaching. Often these assessments focus on content knowledge only used in teaching with the goal of testing types of professional content knowledge. In this paper, we argue that while this general approach has produced powerful exemplars of new types of assessment tasks, it has been less successful in developing tests that provide more general evidence of the range of content knowledge associated with particular teaching practices. To illustrate a more systematic approach, we describe the use of evidence-centered design (ECD) to develop an assessment of content knowledge for teaching (CKT) in the area of secondary physics-energy.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"111 - 91"},"PeriodicalIF":1.5,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2020.1756256","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44661437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-02 | DOI: 10.1080/10627197.2020.1756254
S. Finney, Paulius Satkus, B. Perkins
ABSTRACT Test-taking effort relates to performance on low-stakes tests; thus, researchers and assessment practitioners have investigated what influences students to put forth effort when completing these tests. Using a longitudinal design, we evaluated the often-cited effect of perceived test importance on test-taking effort. More specifically, a 29-item low-stakes institutional accountability test was split into three subtests. College students completed measures of perceived test importance and test-taking effort after each subtest, in addition to measures of test emotions (anger, pride). Emotions were assessed and modeled to provide a rigorous test of the unique relation between perceived test importance and effort. Using a panel model with autoregressive effects, we found perceived test importance had no significant direct or indirect effects on effort during the test. Emotions, however, were predictive of subsequent effort. These results can inform interventions to increase test-taking effort by calling attention to variables other than perceived test importance.
{"title":"The Effect of Perceived Test Importance and Examinee Emotions on Expended Effort during A Low-Stakes Test: A Longitudinal Panel Model","authors":"S. Finney, Paulius Satkus, B. Perkins","doi":"10.1080/10627197.2020.1756254","DOIUrl":"https://doi.org/10.1080/10627197.2020.1756254","url":null,"abstract":"ABSTRACT Test-taking effort relates to performance on low-stakes tests; thus, researchers and assessment practitioners have investigated what influences students to put forth effort when completing these tests. Using a longitudinal design, we evaluated the often-cited effect of perceived test importance on test-taking effort. More specifically, a 29-item low-stakes institutional accountability test was split into three subtests. College students completed measures of perceived test importance and test-taking effort after each subtest, in addition to measures of test emotions (anger, pride). Emotions were assessed and modeled to provide a rigorous test of the unique relation between perceived test importance and effort. Using a panel model with autoregressive effects, we found perceived test importance had no significant direct or indirect effects on effort during the test. Emotions, however, were predictive of subsequent effort. These results can inform interventions to increase test-taking effort by calling attention to variables other than perceived test importance.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"159 - 177"},"PeriodicalIF":1.5,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2020.1756254","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47622478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/10627197.2019.1702463
Amy K. Clark, Meagan Karvonen
ABSTRACT Alternate assessments based on alternate achievement standards (AA-AAS) have historically lacked broad validity evidence and an overall evaluation of the extent to which evidence supports intended uses of results. An expanding body of validation literature, the funding of two AA-AAS consortia, and advances in computer-based assessment have supported improvements in AA-AAS validation. This paper describes the validation approach used with the Dynamic Learning Maps® alternate assessment system, including development of the theory of action, claims, and interpretive argument; examples of evidence collected; and evaluation of the evidence in light of the maturity of the assessment system. We focus on the claims and sources of evidence unique to AA-AAS in general and to the Dynamic Learning Maps system design in particular. We synthesize the evidence to evaluate the degree to which it supports the intended uses of assessment results for the targeted population. Considerations for subsequent data collection efforts are also presented.
{"title":"Constructing and Evaluating a Validation Argument for a Next-Generation Alternate Assessment","authors":"Amy K. Clark, Meagan Karvonen","doi":"10.1080/10627197.2019.1702463","DOIUrl":"https://doi.org/10.1080/10627197.2019.1702463","url":null,"abstract":"ABSTRACT Alternate assessments based on alternate achievement standards (AA-AAS) have historically lacked broad validity evidence and an overall evaluation of the extent to which evidence supports intended uses of results. An expanding body of validation literature, the funding of two AA-AAS consortia, and advances in computer-based assessment have supported improvements in AA-AAS validation. This paper describes the validation approach used with the Dynamic Learning Maps® alternate assessment system, including development of the theory of action, claims, and interpretive argument; examples of evidence collected; and evaluation of the evidence in light of the maturity of the assessment system. We focus especially on claims and sources of evidence unique to AA-AAS and especially the Dynamic Learning Maps system design. We synthesize the evidence to evaluate the degree to which it supports the intended uses of assessment results for the targeted population. Considerations are presented for subsequent data collection efforts.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"47 - 64"},"PeriodicalIF":1.5,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1702463","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42789720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/10627197.2019.1702466
J. Herman, Michael C. Kane
ABSTRACT This introduction to the Special Issue on Validity Studies lays out the rationale for the issue and provides an overview of the four articles that comprise it. Each article focuses on the validation of a specific instrument for a given purpose. The set represents a range of assessment types and uses: an assessment of reading strategy use, an observation measure of special education teachers, a next-generation alternate assessment for students with the most significant cognitive disabilities, and a student self-report survey for assessing and responding to individual risks and needs that may lead to truancy and/or school failure. Each article presents a unique case of using evidence to support the argument that scores from a given instrument support the intended inferences and provide valid information for a given purpose. The concluding commentary summarizes lessons learned and the challenges remaining for the field.
{"title":"Introduction","authors":"J. Herman, Michael C. Kane","doi":"10.1080/10627197.2019.1702466","DOIUrl":"https://doi.org/10.1080/10627197.2019.1702466","url":null,"abstract":"ABSTRACT The introduction to the Special Issue on Validity Studies lays out the rationale and provides an overview of the four articles comprising the issue. Each of the articles focuses on the validation of a specific instrument for a given purpose. The set represents a range of assessment types and uses, including the assessment of reading strategy use, an observation measure of special education teachers, a next generation alternative assessment for students with the most significant cognitive disabilities, and a student self report survey to assess and respond to individual risks and needs that may lead to truancy and/or school failure, Each article presents a unique case of using evidence to support the argument that scores from a given instrument provide both intended inferences and valid information for a given purpose. The concluding commentary summarizes lessons learned and challenges remaining for the field.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"1 - 4"},"PeriodicalIF":1.5,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1702466","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48865090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/10627197.2019.1702465
M. Kane
ABSTRACT As noted in the introduction, we undertook this effort for two reasons: first, to encourage assessment developers to undertake relatively comprehensive validity evaluations, and second, to provide some examples of reasonably comprehensive validity studies.
{"title":"Validity Studies Commentary","authors":"M. Kane","doi":"10.1080/10627197.2019.1702465","DOIUrl":"https://doi.org/10.1080/10627197.2019.1702465","url":null,"abstract":"ABSTRACT As noted in the introduction, we undertook this effort for two reasons, first to encourage assessment developers to undertake relatively comprehensive validity evaluations, and second, to provide some examples of reasonably comprehensive validity studies.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"83 - 89"},"PeriodicalIF":1.5,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1702465","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49421197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-02 | DOI: 10.1080/10627197.2019.1702462
Chad M. Gotch, B. French
ABSTRACT The State of Washington requires school districts to file court petitions on students with excessive unexcused absences. The Washington Assessment of Risks and Needs of Students (WARNS), a self-report screening instrument developed for use by high school and juvenile court personnel in such situations, purports to measure six facets of risks and needs of youth relevant to improving school and life outcomes. Many psychometric analyses have been documented either in the technical manual or through peer-reviewed publications. We review this body of evidence in the context of a central claim about score interpretation/use and the inferences that underlie that claim. Such evidence is strong for inferences related to the target domain, scoring, generalization, and extrapolation. Evidence for an implication inference, however, is pending. We propose a validation trajectory partnership for the WARNS to build evidence through a collaborative research program.
{"title":"A Validation Trajectory for the Washington Assessment of Risks and Needs of Students","authors":"Chad M. Gotch, B. French","doi":"10.1080/10627197.2019.1702462","DOIUrl":"https://doi.org/10.1080/10627197.2019.1702462","url":null,"abstract":"ABSTRACT The State of Washington requires school districts to file court petitions on students with excessive unexcused absences. The Washington Assessment of Risks and Needs of Students (WARNS), a self-report screening instrument developed for use by high school and juvenile court personnel in such situations, purports to measure six facets of risks and needs of youth relevant to improving school and life outcomes. Many psychometric analyses have been documented either in the technical manual or through peer-reviewed publications. We review this body of evidence in the context of a central claim about score interpretation/use and the inferences that underlie that claim. Such evidence is strong for inferences related to the target domain, scoring, generalization, and extrapolation. Evidence for an implication inference, however, is pending. We propose a validation trajectory partnership for the WARNS to build evidence through a collaborative research program.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"25 1","pages":"65 - 82"},"PeriodicalIF":1.5,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10627197.2019.1702462","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42911109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}