Health Education Teachers’ Assessment Conceptions and Practices: Identifying Assessment Profiles
O. Paakkari, L. Paakkari, Henna L Haapala, M. Hirvensalo
Educational Assessment, 27(1), 285–299. Pub Date: 2022-04-21. DOI: 10.1080/10627197.2022.2063832
ABSTRACT The study explored the latent construct underlying the assessment conceptions and practices of Finnish Health Education teachers (n = 165) in the context of curricula, seeking thereby to identify the teachers’ assessment profiles. Six underlying factors were found to encompass their assessment conceptions and practices, namely Assessment supporting learning, Assessment of working, Self and peer assessment as part of grading, Common assessment criteria, Questionable assessment practices, and Norm-referenced assessment. Via cluster analysis, three distinct assessment profiles were identified, labeled as Problematic assessors, Learning supportive assessors, and Norm-based assessors. These findings can be used to develop Health Education teacher training and facilitate teachers’ assessment literacy.
{"title":"Health Education Teachers’ Assessment Conceptions and Practices: Identifying Assessment Profiles","authors":"O. Paakkari, L. Paakkari, Henna L Haapala, M. Hirvensalo","doi":"10.1080/10627197.2022.2063832","DOIUrl":"https://doi.org/10.1080/10627197.2022.2063832","url":null,"abstract":"ABSTRACT The study explored the latent construct underlying the assessment conceptions and practices of Finnish Health Education teachers (n = 165) in the context of curricula, seeking thereby to identify the teachers’ assessment profiles. Six underlying factors were found to encompass their assessment conceptions and practices, namely Assessment supporting learning, Assessment of working, Self and peer assessment as part of grading, Common assessment criteria, Questionable assessment practices, and Norm-referenced assessment. Via cluster analysis, three distinct assessment profiles were identified, labeled as Problematic assessors, Learning supportive assessors, and Norm-based assessors. These findings can be used to develop Health Education teacher training and facilitate teachers’ assessment literacy.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"285 - 299"},"PeriodicalIF":1.5,"publicationDate":"2022-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46357157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an Antiracist Classroom Formative Assessment Framework
J. Lee
Educational Assessment, 27(1), 179–186. Pub Date: 2022-04-03. DOI: 10.1080/10627197.2022.2087625
ABSTRACT The purpose of this paper is to theoretically explore how Hip Hop pedagogy can be utilized and implemented in K-12 classroom formative assessment practices. This conceptual paper comprises five sections. The first section will explore classroom formative assessment definitions and highlight theoretical frameworks used to conceptualize the pedagogical practice. The next section will begin to formulate aspects of an antiracist classroom formative assessment theoretical framework. The third section will explore how Hip Hop pedagogy can be used as an antiracist classroom formative assessment approach, while the fourth section will address considerations and objections related to such usage. The final section will end with a call to action for educational researchers, professionals, and practitioners who want to interrogate and investigate how to overcome systematic racism and oppression in our classroom assessment research and practice.
{"title":"Towards an Antiracist Classroom Formative Assessment Framework","authors":"J. Lee","doi":"10.1080/10627197.2022.2087625","DOIUrl":"https://doi.org/10.1080/10627197.2022.2087625","url":null,"abstract":"ABSTRACT The purpose of this paper is to theoretically explore how Hip Hop pedagogy can be utilized and implemented in K-12 classroom formative assessment practices. As a conceptual paper, there will be five sections. The first section will explore classroom formative assessment definitions and highlight theoretical frameworks used to conceptualize the pedagogical practice. The next section will begin to formulate aspects of an antiracist classroom formative assessment theoretical framework. The third section will explore how Hip Hop pedagogy can be used as an antiracist classroom formative assessment approach, while the fourth section will address considerations and objections related to such usage. The final section will end with a call to action for educational researchers, professionals, and practitioners who want to interrogate and investigate how to overcome systematic racism and oppression in our classroom assessment research and practice.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"179 - 186"},"PeriodicalIF":1.5,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46399290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two Communities’ Views on Test Fairness
D. Gitomer, Emi Iwatani
Educational Assessment, 27(1), 197–203. Pub Date: 2022-04-03. DOI: 10.1080/10627197.2022.2087624
ABSTRACT The education measurement community has centered the idea of test fairness in both theory and practice. Yet racial justice advocates in education research and practice (the racial justice community) have consistently argued that assessments are hardly fair and that they play a critical and outsized role in contributing to racial and social inequities in the educational system and the larger society. We attempt to unpack the two communities’ different perspectives and different conclusions about fairness and assessments. We argue that these differences are rooted in the historical makeup of these communities, how they bound the issue of fairness, how they evaluate fairness, and how they consider the consequences of assessment both contemporaneously and historically. We conclude by contending that progress with respect to equity and justice will require an appreciation of and grappling with the nature of these differences, and attention to boundary spanners who have long identified with both communities.
{"title":"Two Communities’ Views on Test Fairness","authors":"D. Gitomer, Emi Iwatani","doi":"10.1080/10627197.2022.2087624","DOIUrl":"https://doi.org/10.1080/10627197.2022.2087624","url":null,"abstract":"ABSTRACT The education measurement community has centered the idea of test fairness in both theory and practice. Yet, racial justice advocates in education research and practice (the racial justice community) have consistently critiqued that assessments are hardly fair and play a critical and outsized role in contributing to racial and social inequities in the educational system and larger society. We attempt to unpack two communities’ different perspectives and different conclusions about fairness and assessments. We argue that these differences are rooted in the historical makeup of these communities, how they bound the issue of fairness, how they evaluate fairness, and how they consider the consequences of assessment both contemporaneously and historically. We conclude by contending that progress with respect to equity and justice will require an appreciation of and grappling with the nature of these differences and attention to boundaryspanners who have long identified with both communities.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"197 - 203"},"PeriodicalIF":1.5,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42175097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction to Twin Pandemics: How a Global Health Crisis and Persistent Racial Injustices are Impacting Educational Assessment
A. Bailey, J. Martínez, Andreas Oranje, Molly Faulkner-Bond
Educational Assessment, 27(1), 93–97. Pub Date: 2022-04-03. DOI: 10.1080/10627197.2022.2097782
At the end of 2020 we put out a call for full-length empirical papers and short conceptual contributions examining how the coronavirus (COVID-19) pandemic and racial inequities were affecting the educational assessment of students, either separately or in combination; the health crisis was viewed as a factor intersecting with and exacerbating existing racial inequities in educational systems. The papers in this special issue attend to such issues as how challenges in implementing virtual standardized testing during the coronavirus pandemic impacted academic performance, how educational and assessment experiences may differ for diverse groups of school-age students, and how traditional assessment approaches are being reconsidered in response to mounting research evidence and growing concerns around enduring social and racial inequities faced by Black, Latinx, Asian, Indigenous, and other nonwhite citizens and communities. The papers offer needed empirical evidence, innovative methodological approaches, and theoretical and substantive examinations of the effects of the twin pandemics.
{"title":"Introduction to Twin Pandemics: How a Global Health Crisis and Persistent Racial Injustices are Impacting Educational Assessment","authors":"A. Bailey, J. Martínez, Andreas Oranje, Molly Faulkner-Bond","doi":"10.1080/10627197.2022.2097782","DOIUrl":"https://doi.org/10.1080/10627197.2022.2097782","url":null,"abstract":"At the end of 2020 we put out a call for full-length empirical papers and short conceptual contributions examining how the coronavirus or COVID-19 pandemic and racial inequities were affecting the educational assessment of students, either separately or in combination, as the health crisis was viewed as a factor intersecting with and exacerbating existing racial inequities in educational systems. The papers in this special issue attend to such issues as how challenges implementing virtual standardized testing during the coronavirus pandemic impacted academic performance, how educational and assessment experiences may differ for diverse groups of school-age students, and how traditional assessment approaches are being reconsidered in response to mounting research evidence and growing concerns around enduring social and racial inequities faced by Black, Latinx, Asian, Indigenous, and other nonwhite citizens and communities. The papers offer needed empirical evidence, innovative methodological approaches, and theoretical and substantive examinations of the effects of the twin pandemics.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"93 - 97"},"PeriodicalIF":1.5,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41735831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment in the Time of COVID-19: Understanding Patterns of Student Disengagement during Remote Low-Stakes Testing
S. Wise, Megan Kuhfeld, John Cronin
Educational Assessment, 27(1), 136–151. Pub Date: 2022-04-03. DOI: 10.1080/10627197.2022.2087621
ABSTRACT The arrival of the COVID-19 pandemic had a profound effect on K-12 education. Most schools transitioned to remote instruction, and some used remote testing to assess student learning. Remote testing, however, is less controlled than in-school testing, leading to concerns regarding test-taking engagement. This study compared the disengagement of students remotely administered an adaptive interim assessment in spring 2020 with their disengagement on the assessment administered in-school during fall 2019. Results showed that disengagement gradually increased across grade level. This pattern was not meaningfully different between the two testing contexts, with the exception of results for American Indian/Alaska Native students, who showed higher disengagement under remote testing. In addition, the test’s engagement feature – which automatically paused the test event of a disengaged student and notified the test proctor – had a consistently positive impact whether the proctor was in the same room as the student or proctoring was done remotely.
{"title":"Assessment in the Time of COVID-19: Understanding Patterns of Student Disengagement during Remote Low-Stakes Testing","authors":"S. Wise, Megan Kuhfeld, John Cronin","doi":"10.1080/10627197.2022.2087621","DOIUrl":"https://doi.org/10.1080/10627197.2022.2087621","url":null,"abstract":"ABSTRACT The arrival of the COVID-19 pandemic had a profound effect on K-12 education. Most schools transitioned to remote instruction, and some used remote testing to assess student learning. Remote testing, however, is less controlled than in-school testing, leading to concerns regarding test-taking engagement. This study compared the disengagement of students remotely administered an adaptive interim assessment in spring 2020 with their disengagement on the assessment administered in-school during fall 2019. Results showed that disengagement gradually increased across grade level. This pattern was not meaningfully different between the two testing contexts, with the exception of results for American Indian/Alaska Native students, who showed higher disengagement under remote testing. In addition, the test’s engagement feature – which automatically paused the test event of a disengaged student and notified the test proctor – had a consistently positive impact whether the proctor was in the same room as the student or proctoring was done remotely.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"136 - 151"},"PeriodicalIF":1.5,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43824500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indicators of Equitable Civic Learning in U.S. Public Schools
L. Hamilton, J. Kaufman
Educational Assessment, 27(1), 187–196. Pub Date: 2022-04-03. DOI: 10.1080/10627197.2022.2087623
ABSTRACT The “twin pandemics” of racial injustice and COVID-19 have underscored the importance of promoting civic knowledge, skills, dispositions, and engagement among the nation’s young people. Although some evidence has demonstrated that civic-learning opportunities are inequitably distributed across U.S. schools and communities, we currently have limited data that could help inform efforts to promote more equitable access to these opportunities. In this article, we draw on a recent National Academies of Sciences, Engineering, and Medicine report on equity indicators to explore the potential value of a large-scale system of such indicators for civic learning. We highlight the need for indicators both of opportunities to learn and of learning outcomes. We describe key features of this system, propose some example indicators, and discuss a research agenda that could contribute to the development of high-quality measures of equity in civic learning.
{"title":"Indicators of Equitable Civic Learning in U.S. Public Schools","authors":"L. Hamilton, J. Kaufman","doi":"10.1080/10627197.2022.2087623","DOIUrl":"https://doi.org/10.1080/10627197.2022.2087623","url":null,"abstract":"ABSTRACT The “twin pandemics” of racial injustice and COVID-19 have underscored the importance of promoting civic knowledge, skills, dispositions, and engagement among the nation’s young people. Although some evidence has demonstrated that civic-learning opportunities are inequitably distributed across U.S. schools and communities, we currently have limited data that could help inform efforts to promote more equitable access to these opportunities. In this article, we draw on a recent National Academies of Science, Engineering, and Medicine report on equity indicators to explore the potential value of a large-scale system of such indicators for civic learning. We highlight the need for indicators of both opportunities to learn and of learning outcomes. We describe key features of this system, propose some example indicators, and discuss a research agenda that could contribute to the development of high-quality measures of equity in civic learning.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"187 - 196"},"PeriodicalIF":1.5,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46818739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Intersectional Approach to DIF: Comparing Outcomes across Methods
Michaeline Russell, Olivia Szendey, Zhushan Li
Educational Assessment, 27(1), 115–135. Pub Date: 2022-04-03. DOI: 10.1080/10627197.2022.2094757
ABSTRACT Recent research provides evidence that an intersectional approach to defining reference and focal groups results in a higher percentage of comparisons flagged for potential differential item functioning (DIF). The study presented here examined the generalizability of this pattern across methods for examining DIF. While the level of DIF detection differed among the four methods examined, the pattern in which the intersectional approach yielded a substantially larger percentage of flagged comparisons than the traditional approach was consistent across three of the four methods. The study explores the implications that an intersectional approach to examining DIF has for large-scale test development programs and identifies further research needed to support the adoption of an intersectional approach to DIF analyses.
{"title":"An Intersectional Approach to DIF: Comparing Outcomes across Methods","authors":"Michaeline Russell, Olivia Szendey, Zhushan Li","doi":"10.1080/10627197.2022.2094757","DOIUrl":"https://doi.org/10.1080/10627197.2022.2094757","url":null,"abstract":"ABSTRACT Recent research provides evidence that an intersectional approach to defining reference and focal groups results in a higher percentage of comparisons flagged for potential DIF. The study presented here examined the generalizability of this pattern across methods for examining DIF. While the level of DIF detection differed among the four methods examined, the pattern in which the intersectional approach yielded a substantially larger percentage of flagged comparisons compared to the traditional approach was consistent across three of the four methods. The study explores implications that an intersectional approach to examining differential item functioning has for use by large-scale test development programs and identifies further research needed to support the adoption of an intersectional approach to DIF analyses.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"115 - 135"},"PeriodicalIF":1.5,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43519599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Early Literacy, Equity, and Test Score Comparability during the Pandemic
J. Soland, A. McGinty, A. Gray, E. Solari, Walter A. Herring, Rujun Xu
Educational Assessment, 27(1), 98–114. Pub Date: 2022-04-03. DOI: 10.1080/10627197.2022.2087622
ABSTRACT Kindergarten entry assessments (KEAs) are frequently used to understand students’ early literacy skills. Amidst COVID-19, such assessments will be vital in understanding how the pandemic has affected early literacy, including how it has contributed to inequities in the educational system. However, the pandemic has also created challenges for comparing scores from KEAs across years and modes of administration. In this study, we examine these issues using a KEA administered to most Kindergarten students in Virginia. This screener was rapidly converted to an online platform to ensure students could continue taking it during the pandemic. Results indicate that the sample of students taking the test shifted substantially pre- and post-pandemic, complicating comparisons of performance. While we do not find evidence of noninvariance by mode at the test level, we do see signs that more subtle forms of item-level bias may be at play. Implications for equity, fairness, and inclusion are discussed.
{"title":"Early Literacy, Equity, and Test Score Comparability during the Pandemic","authors":"J. Soland, A. McGinty, A. Gray, E. Solari, Walter A. Herring, Rujun Xu","doi":"10.1080/10627197.2022.2087622","DOIUrl":"https://doi.org/10.1080/10627197.2022.2087622","url":null,"abstract":"ABSTRACT Kindergarten entry assessments (KEAs) are frequently used to understand students’ early literacy skills. Amidst COVID-19, such assessments will be vital in understanding how the pandemic has affected early literacy, including how it has contributed to inequities in the educational system. However, the pandemic has also created challenges for comparing scores from KEAs across years and modes of administration. In this study, we examine these issues using a KEA administered to most Kindergarten students in Virginia. This screener was rapidly converted to an online platform to ensure students could continue taking it during the pandemic. Results indicate that the sample of students taking the test shifted substantially pre- and post-pandemic, complicating comparisons of performance. While we do not find evidence of noninvariance by mode at the test level, we do see signs that more subtle forms of item-level bias may be at play. Implications for equity, fairness, and inclusion are discussed.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"98 - 114"},"PeriodicalIF":1.5,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48069327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Allocating Resources for COVID-19 Recovery: A Comparison of Three Indicators of School Need
J. Schweig, A. McEachin, Megan Kuhfeld, Louis T. Mariano, M. Diliberti
Educational Assessment, 27(1), 152–169. Pub Date: 2022-04-03. DOI: 10.1080/10627197.2022.2087626
ABSTRACT As students return to in-person instruction in the 2021–2022 school year, local education agencies (LEAs) must develop resource allocation strategies to support schools in need. Federal programs have provided resources to support restart and recovery. However, there is little consensus on how LEAs can target resources to support those schools most in need. This study investigates the relationship between three school need indicators (i.e., pre-COVID student performance and progress, school and community poverty, and pandemic vulnerability) and measures of student performance and progress throughout the pandemic to determine which indicators support valid school need inferences. We find that school poverty more strongly predicts performance and progress during the pandemic than pre-COVID academic measures. In elementary schools, we find that pandemic vulnerability independently predicts achievement even when conditioning on poverty and pre-pandemic achievement. Of the indicators of poverty we investigated, the percentage of free and reduced-price lunch-eligible students is the strongest predictor.
{"title":"Allocating Resources for COVID-19 Recovery: A Comparison of Three Indicators of School Need","authors":"J. Schweig, A. McEachin, Megan Kuhfeld, Louis T. Mariano, M. Diliberti","doi":"10.1080/10627197.2022.2087626","DOIUrl":"https://doi.org/10.1080/10627197.2022.2087626","url":null,"abstract":"ABSTRACT As students return to in-person instruction in the 2021–2022 school year, local education agencies (LEAs) must develop resource allocation strategies to support schools in need. Federal programs have provided resources to support restart and recovery. However, there is little consensus on how LEAs can target resources to support those schools most in need. This study investigates the relationship between three school need indicators (i.e., pre-COVID student performance and progress, school and community poverty, and pandemic vulnerability) and measures of student performance and progress throughout the pandemic to determine which indicators support valid school need inferences. We find that school poverty more strongly predicts performance and progress during the pandemic than pre-COVID academic measures. In elementary schools, we find that pandemic vulnerability independently predicts achievement even when conditioning on poverty and pre-pandemic achievement. Of the indicators of poverty we investigated, the percentage of free and reduced-price lunch-eligible students is the strongest predictor.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"152 - 169"},"PeriodicalIF":1.5,"publicationDate":"2022-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43224441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing Test Performance Communication Solutions in a Teacher-Researcher Partnership
Chad M. Gotch, M. Roduta Roberts
Educational Assessment, 27(1), 247–268. Pub Date: 2022-03-17. DOI: 10.1080/10627197.2022.2052723
ABSTRACT Individual-level score reports represent a common artifact in teacher-parent communication about standardized tests. Previous research has documented challenges in communicating student achievement. Researchers have also leveraged teachers in the process of score report design. Little is known, however, about teachers’ experiences with using score reports in authentic settings. In this study, we used a participatory action research approach in a year-long clinical partnership with four elementary teachers to iteratively propose and assess tools and strategies to support the communication of student test performance. Teachers achieved some success in their efforts, but experienced challenges of sustainability and anticipated peer buy-in. Findings from this study also illustrated a strong presence of tensions in the teachers’ work related to testing and communicating test performance. Overall, involving teachers in participatory research inquiry yielded novel insights for extending score report research and improving operational practice in test companies.
{"title":"Developing Test Performance Communication Solutions in a Teacher-Researcher Partnership","authors":"Chad M. Gotch, M. Roduta Roberts","doi":"10.1080/10627197.2022.2052723","DOIUrl":"https://doi.org/10.1080/10627197.2022.2052723","url":null,"abstract":"ABSTRACT Individual-level score reports represent a common artifact in teacher-parent communication about standardized tests. Previous research has documented challenges in communicating student achievement. Researchers have also leveraged teachers in the process of score report design. Little is known, however, about teachers’ experiences with using score reports in authentic settings. In this study, we used a participatory action research approach in a year-long clinical partnership with four elementary teachers to iteratively propose and assess tools and strategies to support the communication of student test performance. Teachers achieved some success in their efforts, but experienced challenges of sustainability and anticipated peer buy-in. Findings from this study also illustrated a strong presence of tensions in the teachers’ work related to testing and communicating test performance. Overall, involving teachers in participatory research inquiry yielded novel insights for extending score report research and improving operational practice in test companies.","PeriodicalId":46209,"journal":{"name":"Educational Assessment","volume":"27 1","pages":"247 - 268"},"PeriodicalIF":1.5,"publicationDate":"2022-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41919491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}