Pub Date: 2022-11-02 | DOI: 10.1080/0969594X.2022.2147483
Henrik Galligani Ræder, Björn Andersson, R. Olsen
ABSTRACT Enabling comparable scores across grades is of interest to policymakers evaluating educational systems, to researchers investigating substantive questions, and to teachers inferring student growth. This study applied a vertical scaling design to the numeracy tests given in grades 5 and 8 as part of the Norwegian national testing system. Our design bridges the gap between grades 5 and 8 with a linking test tailored for grades 6 and 7, without the need for new item development. The design combines the existing administration for all grade 5 and 8 students with additional tests for samples of grade 6 and 7 students. The findings indicate that vertically scaling existing tests is possible through a cost-effective design and that numeracy, as measured by the Norwegian national tests, is comparable across four grades. We discuss the implications of our study for creating vertical scales in the context of national assessment systems.
Title: Numeracy across grades – vertically scaling the Norwegian national numeracy tests
Journal: Assessment in Education: Principles, Policy & Practice, pp. 653–673
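The common-item linking at the heart of a vertical scaling design like the one described above can be sketched with a mean-sigma transformation: estimate the same items' difficulties in two adjacent grades, then derive the slope and intercept that place one grade's scale onto the other's. The difficulty values below are invented for illustration and are not from the Norwegian national tests or the authors' procedure.

```python
# Mean-sigma common-item linking: a standard, simple way to place two
# separately calibrated test forms on one vertical scale.
import statistics

def mean_sigma_constants(difficulties_old, difficulties_new):
    """Slope A and intercept B mapping the new form's scale onto the old's,
    computed from difficulty estimates of the items common to both forms."""
    a = statistics.stdev(difficulties_old) / statistics.stdev(difficulties_new)
    b = statistics.mean(difficulties_old) - a * statistics.mean(difficulties_new)
    return a, b

def rescale_theta(theta_new, a, b):
    """Place an ability estimate from the new form on the old form's scale."""
    return a * theta_new + b

# Hypothetical common items calibrated separately in each grade:
grade5_b = [-0.8, -0.2, 0.3, 0.9]   # grade 5 calibration
grade6_b = [-1.5, -0.9, -0.4, 0.2]  # same items, grade 6 calibration

a, b = mean_sigma_constants(grade5_b, grade6_b)
print(rescale_theta(0.0, a, b))  # an average grade 6 student on the grade 5 scale
```

Here the spreads happen to match, so the transformation is a pure shift: a grade 6 ability of 0 lands at 0.70 on the grade 5 scale, reflecting the expected grade-to-grade growth built into the invented numbers.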
Pub Date: 2022-11-02 | DOI: 10.1080/0969594X.2022.2156980
Victoria Crisp, J. Greatorex
ABSTRACT As part of GCSE (General Certificate of Secondary Education) reforms in England, requirements for assessing application in science increased. Setting examination questions in context facilitates testing application as students need to apply what they know and understand to a particular situation. This research explored the nature of the contexts used in reformed GCSE combined science examinations and compared contexts used in a specification which specifically emphasises contextualised learning to those in other specifications. Eight combined science specimen examination papers were selected, including four from GCSE Twenty First Century Science (21C). A qualitative coding frame was used to code each contextualised item. Various strategies for testing in context were present. Contextual features that might risk introducing construct-irrelevant variance were infrequent but may suggest areas for attention in setter training. 21C papers included a higher proportion of items with detailed contexts and a higher proportion of items set in science-related adult/professional settings.
Title: The appliance of science: exploring the use of context in reformed GCSE science examinations
Journal: Assessment in Education: Principles, Policy & Practice, pp. 689–710
Pub Date: 2022-11-02 | DOI: 10.1080/0969594X.2022.2162481
Kaitlin Riegel, T. Evans, J. Stephens
ABSTRACT Self-efficacy is a significant construct in education due to its predictive relationship with achievement. Existing measures of assessment-related self-efficacy concentrate on students’ beliefs about content-specific tasks but omit beliefs around assessment-taking. This research aimed to develop and test the Measure of Assessment Self-Efficacy (MASE), designed to assess two types of efficacy beliefs related to assessment (i.e. ‘comprehension and execution’ and ‘emotional regulation’) in two scenarios (i.e. a low-stakes online quiz and a high-stakes final exam). Results from confirmatory factor analysis in Study 1 (N = 301) supported the hypothesised two-factor measurement models for both assessment scenarios. In Study 2, results from MGCFA (N = 277) confirmed these models were invariant over time and provided evidence for the scales’ validity. Study 3 demonstrated the exam-related MASE was invariant across cohorts of students (Ns = 277; 329). Potential uses of the developed scales in educational research are discussed.
Title: Development of the measure of assessment self-efficacy (MASE) for quizzes and exams
Journal: Assessment in Education: Principles, Policy & Practice, pp. 729–745
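Scale development of this kind typically begins with basic internal-consistency screening before confirmatory factor analysis. The sketch below computes Cronbach's alpha for a hypothetical subscale; the responses are invented and the `cronbach_alpha` helper is an illustration adjacent to, not part of, the MASE study's CFA-based approach.

```python
# Cronbach's alpha for a set of Likert items: a quick internal-consistency
# check often run before formal measurement modelling.
def cronbach_alpha(items):
    """items: one list per item, each holding the same respondents' scores."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Five hypothetical respondents, three Likert items:
responses = [
    [4, 5, 3, 2, 4],
    [4, 4, 3, 2, 5],
    [5, 5, 2, 1, 4],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 3))
```

With these invented responses the items co-vary strongly, so alpha comes out around 0.92; in practice a value that high on only three items would itself warrant scrutiny for item redundancy.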
Pub Date: 2022-11-02 | DOI: 10.1080/0969594X.2022.2158304
Yuyang Cai, Min Yang, Juanjuan Yao
ABSTRACT This study investigated the relationship between formative assessment and reading achievement in Hong Kong, a Confucian Heritage Culture (CHC) society. 4,837 Hong Kong students completed a nine-item questionnaire used as the indicator of formative assessment strategies. The study used multi-group structural equation modelling (MG-SEM) to examine the effects of formative assessment strategies on reading achievement across low-, medium-, and high-achievers, controlling for gender and socio-economic status (SES). The results showed that, after controlling for SES and gender, formative assessment strategies had a significant effect for low- and medium-achieving readers but not for high-achieving readers. Implications are drawn to inform formative assessment research and practice relevant to students’ reading achievement in CHC societies and other educational contexts.
Title: More is not always better: the nonlinear relationship between formative assessment strategies and reading achievement
Journal: Assessment in Education: Principles, Policy & Practice, pp. 711–728
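The multi-group logic can be shown in miniature: fit the same simple regression of reading score on a formative-assessment index separately in each achievement group, then compare the slopes. The data are invented, and this plain OLS sketch does not reproduce the paper's latent-variable MG-SEM; it only illustrates why a pooled single-group slope can hide group differences.

```python
# Per-group OLS slopes: the crude analogue of letting a structural path
# differ across groups in a multi-group model.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical formative-assessment index (x) and reading scores (y):
groups = {
    "low":    ([1, 2, 3, 4], [400, 420, 445, 460]),  # steep slope
    "medium": ([1, 2, 3, 4], [480, 492, 505, 515]),  # moderate slope
    "high":   ([1, 2, 3, 4], [560, 561, 559, 562]),  # essentially flat
}
slopes = {g: ols_slope(x, y) for g, (x, y) in groups.items()}
print({g: round(s, 1) for g, s in slopes.items()})
```

The invented data mirror the paper's finding in shape only: the association shrinks as achievement rises, which a single pooled slope would average away.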
Pub Date: 2022-11-02 | DOI: 10.1080/0969594X.2022.2147901
Katelin Kelly, M. Richardson, T. Isaacs
ABSTRACT Comparative judgment is gaining popularity as an assessment tool, including for high-stakes testing purposes, despite relatively little research on the use of the technique. Advocates claim two main rationales for its use: that comparative judgment is valid because humans are better at comparative than absolute judgment, and because it distils the aggregate view of expert judges. We explore these contentions. We argue that the psychological underpinnings used to justify the method are superficially treated in the literature. We conceptualise and critique the notion that comparative judgment is ‘intrinsically valid’ due to its use of expert judges. We conclude that the rationales as presented by the comparative judgment literature are incomplete and inconsistent. We recommend that future work should clarify its position regarding the psychological underpinnings of comparative judgment, and if necessary present a more compelling case; for example, by integrating the comparative judgment literature with evidence from other fields.
Title: Critiquing the rationales for using comparative judgement: a call for clarity
Journal: Assessment in Education: Principles, Policy & Practice, pp. 674–688
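Operationally, comparative judgement aggregates many pairwise "script A beats script B" decisions into a single scale, most commonly with a Bradley-Terry model. The sketch below fits such a model with the MM (Zermelo) algorithm on invented judgements; it illustrates the generic technique, not the procedure of any study the article critiques.

```python
# Minimal Bradley-Terry fit via the MM (minorise-maximise) algorithm:
# each item gets a "strength"; P(i beats j) = p_i / (p_i + p_j).
def bradley_terry(wins, n_items, iters=200):
    """wins[(i, j)] = number of times item i was preferred over item j."""
    p = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            w_i = sum(w for (a, _), w in wins.items() if a == i)  # total wins
            denom = 0.0
            for j in range(n_items):
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (p[i] + p[j])
            new.append(w_i / denom if denom else p[i])
        total = sum(new)
        p = [v * n_items / total for v in new]  # fix the scale (identifiability)
    return p

# Three scripts, ten judgements per pair (hypothetical):
judgements = {(0, 1): 8, (1, 0): 2, (0, 2): 9, (2, 0): 1, (1, 2): 7, (2, 1): 3}
strengths = bradley_terry(judgements, 3)
print([round(s, 2) for s in strengths])
```

The fitted strengths recover the intended ordering (script 0 strongest, script 2 weakest). Note that the model quantifies judge agreement; it is silent on the validity questions the article raises about what the judges' shared preferences actually measure.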
Pub Date: 2022-09-24 | DOI: 10.1080/0969594X.2022.2118665
Laura Zieger, John Jerrim, Jake Anders, N. Shure
ABSTRACT The OECD’s Programme for International Student Assessment (PISA) has become one of the key studies for evidence-based education policymaking across the globe. PISA has, however, received considerable methodological criticism, including of how the test scores are created. The aim of this paper is to investigate the so-called ‘conditioning model’, where background variables are used to derive student achievement scores, and the impact it has upon the PISA results. This includes varying the background variables used within the conditioning model and analysing the impact upon countries’ relative positions in the PISA rankings. Our key finding is that the exact specification of the conditioning model matters; cross-country comparisons of PISA scores can change depending upon the statistical methodology used.
Title: Conditioning: how background variables can influence PISA scores
Journal: Assessment in Education: Principles, Policy & Practice, pp. 632–652
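The intuition behind a conditioning model can be shown in a toy normal-normal form: each student's plausible values are random draws from a posterior that combines the test-based estimate with a prediction from background variables. All numbers below are invented, and PISA's operational latent regression is far more elaborate than this precision-weighted sketch; the point is only that the background prediction pulls the draws, which is why its specification matters.

```python
# Toy "conditioning" step: blend a noisy measurement with a background-
# variable prediction, then draw plausible values from the posterior.
import random

def plausible_values(theta_hat, se, background_pred, resid_sd, n_draws=5, seed=1):
    """Precision-weighted normal posterior of ability, then n_draws samples.
    theta_hat/se: test-based estimate; background_pred/resid_sd: prediction
    from the background-variable regression and its residual spread."""
    prec = 1 / se**2 + 1 / resid_sd**2
    post_mean = (theta_hat / se**2 + background_pred / resid_sd**2) / prec
    post_sd = prec ** -0.5
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    return [rng.gauss(post_mean, post_sd) for _ in range(n_draws)]

# A student with a noisy, low test estimate but background variables that
# predict higher achievement (all values hypothetical):
pvs = plausible_values(theta_hat=0.2, se=0.8, background_pred=0.9, resid_sd=0.5)
print([round(v, 2) for v in pvs])
```

Because the background prediction here is more precise than the test estimate, the posterior mean (about 0.70) sits much closer to 0.9 than to 0.2; change the background model and the plausible values move, exactly the sensitivity the paper investigates at country level.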
Pub Date: 2022-09-03 | DOI: 10.1080/0969594X.2022.2162480
K. Tan
ABSTRACT Assessment systems reward certainty and thrive on predictability. However, the COVID-19 pandemic has punished our assessment systems severely for their over-reliance on controlled premises for high-stakes assessment, and this should compel us to re-examine the reliance on certainty and control in our assessment policies and reforms. Singapore offers a useful context for undertaking such re-examination of assessment imperatives on a national scale. The city-state typically orchestrates its major policy reforms with great discipline, standardisation, and detailed co-ordination. However, such qualities may not be ideal for its assessment reform needs in the post-pandemic future. Three recent assessment reforms are examined as examples of pre-pandemic assessment policies predicated on certainty. This paper discusses whether such reforms are fit for post-pandemic purpose(s), and argues for shifting the emphasis of assessment from securing examination reliability to developing learners’ assessment resilience.
Title: Lessons from a disciplined response to COVID 19 disruption to education: beginning the journey from reliability to resilience
Journal: Assessment in Education: Principles, Policy & Practice, pp. 596–611
Pub Date: 2022-09-03 | DOI: 10.1080/0969594X.2022.2140889
Jenny Poskitt
ABSTRACT New Zealand’s defined coastal boundaries, isolation and small population were favourable factors in minimising the spread of COVID-19. Decisive governmental leadership and a public willing to comply with high-level lockdown in the first phase resulted in minimal disruption to assessment. But as the pandemic progressed through the Delta and Omicron variants, concerns grew about equitable access to assessments, declining school attendance, and inequitable educational outcomes for students, especially those of Māori and Pacific heritage. School and educational agency experiences of high-stakes assessment in a period of uncertainty were examined through document analysis and research interviews. Analysis drawing on Gewirtz’s contextual account of the multi-dimensional and complex nature of justice, and on Rogoff’s framework of three planes of socio-cultural analysis (the personal or learner, the inter-personal or school, and the institutional or educational agencies), revealed that although collaborative adaptations minimised the impact of assessment disruption on wellbeing and equity of access, they did not transform high-stakes assessment.
Title: COVID-19 impact on high stakes assessment: a New Zealand journey of collaborative adaptation amidst disruption
Journal: Assessment in Education: Principles, Policy & Practice, pp. 575–595
Pub Date: 2022-09-03 | DOI: 10.1080/0969594X.2022.2139339
L. Hayward, M. O’Leary
Editing a Special Edition is always interesting but, on the whole, it follows a fairly similar pattern. The Editors identify a theme, debate how that theme might be explored through different lenses in different contexts, receive papers that reflect a variety of positions, write an Editorial that is largely consistent with the original plan, and publish. The construction of this Special Edition was nothing like that. When, as Editors, we began thinking through what articles in this Special Edition on the impact of COVID on high-stakes assessment internationally might report, our starting point was to see COVID as a disruptor. Coming from Ireland and Scotland, where in both countries the impact of COVID on our high-stakes assessment systems had been significant, we assumed that this would be an international phenomenon. On reflection, the title of the call we put out when commissioning papers, High Stakes Assessment in the Era of COVID: Interruption, Transformation or Regression?, is a good indication of the beliefs we held. Internationally, press coverage spoke of the COVID challenges and the multiple ways in which societies, and education within them, were being disrupted. We anticipated that the nature of that disruption would vary: in all societies, COVID would interrupt normal practices, but in some, that disruption would lead to transformation, to the creation of new practices in high-stakes assessment environments that are traditionally regarded as risk averse; whereas, in other societies, COVID interruption might drive practices back to territory that was regarded as safer ground, regressing to assessment approaches that were perceived to be more secure whether or not these practices were educationally desirable (IEAN, 2021). As Editors, we did not and do not challenge the position that COVID globally interrupted societies, in some societies in devastating ways.
There is nothing positive about a global pandemic and we do not underestimate the impact that COVID has had physically and emotionally on educational experiences and the mental health and wellbeing of everyone in the education system: young people, their parents, teachers and lecturers. Those whose work was in the specific area of high-stakes assessment were
Title: High stakes assessment in the era of COVID-19: interruption, transformation, regression or business as usual?
Journal: Assessment in Education: Principles, Policy & Practice, pp. 505–517
Pub Date: 2022-09-03 | DOI: 10.1080/0969594X.2023.2166462
Christian Ydesen
ABSTRACT This article explores the struggles over the development of new national tests for the Danish public school system before, during, and after the lockdown of society due to the COVID-19 pandemic. The article throws light on the stakeholder positions, arguments, and discourses involved in assessment policy formation with particular attention on the national tests as the key component of the new assessment system. Drawing on policy documents, media news stories, and interviews with teachers, school leaders, politicians, and civil servants at the municipal and national levels, the article adds to our understanding of how assessment policies come into existence and offers reflections on the implications and conditions for how assessment systems may evolve.
Title: New national tests for the Danish public school system – Tensions between renewal and orthodoxy before, during, and after the COVID-19 pandemic
Journal: Assessment in Education: Principles, Policy & Practice, pp. 612–628