Pub Date: 2023-03-04  DOI: 10.1080/0969594X.2023.2191161
Exploring the roles of academic self-concept and perseverance of effort in self-assessment practices
Lana T. Yang, Zi Yan, Di Zhang, D. Boud, J. A. Datu
Assessment in Education: Principles, Policy & Practice, pp. 104–129

ABSTRACT Based on the self-system processes model of motivation, we explored the mediating role of academic self-concept in the relationship between perseverance of effort and self-assessment. The results showed that perseverance of effort has a positive but not statistically significant association with self-assessment when controlling for academic self-concept. The results supported our hypotheses that academic self-concept, whether at the domain-specific or component-specific level, significantly mediated the effect of perseverance of effort on self-assessment, lending empirical support to the close conceptual link between self-perceptions and self-assessment practices in learning. The results contribute to the literature on three research lines (grit, academic self-concept, and self-assessment) and suggest that academic self-concept enhancement interventions are beneficial not only to academic achievement, based on the reciprocal relationship well documented in the self-concept literature, but also to self-assessment, in light of the self-system processes model of motivation.
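The mediation claim in the abstract above (academic self-concept carrying the effect of perseverance on self-assessment) follows standard product-of-coefficients logic. Below is a minimal sketch with simulated data; all variable names and path values are invented for illustration, and none of this reproduces the study's actual dataset or modelling.

```python
import numpy as np

# Illustrative mediation sketch with simulated data -- NOT the study's data.
# X = perseverance of effort, M = academic self-concept (hypothesised
# mediator), Y = self-assessment practice. Path values below are assumptions.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)            # a-path (X -> M) set to 0.5
Y = 0.6 * M + 0.1 * X + rng.normal(size=n)  # b-path 0.6, direct effect 0.1

def ols(y, *cols):
    """OLS coefficients for y regressed on an intercept plus cols."""
    A = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a = ols(M, X)[1]              # effect of X on M
_, c_prime, b = ols(Y, X, M)  # direct effect of X, and effect of M, on Y
total = ols(Y, X)[1]          # total effect of X on Y
indirect = a * b              # mediated (indirect) effect

# For nested OLS regressions the decomposition is an exact identity:
# total = c_prime + a * b, so a small c_prime relative to indirect
# corresponds to the "mediation" pattern the abstract describes.
print(round(indirect, 2), round(c_prime, 2))
```

In this simulated setup the indirect effect dominates the direct effect, mirroring the pattern the authors report (a positive but non-significant direct association once the mediator is controlled for).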
Pub Date: 2023-03-04  DOI: 10.1080/0969594X.2023.2193332
Examining the process of developing evaluative judgement in Japanese elementary schools—utilising the co-regulation and evaluative judgement model
Hideaki Yoshida, Kohei Nishizuka, Masahiro Arimoto
Assessment in Education: Principles, Policy & Practice, pp. 151–176

ABSTRACT To enhance the effectiveness of formative assessment and self-regulated learning, this study focused on evaluative judgement. A process for developing evaluative judgement and co-regulation has previously been proposed; however, this co-regulation and evaluative judgement model lacks validation in classroom settings, and the process of developing evaluative judgement remains unclear. Thus, this study examined the processes of co-regulation and the development of evaluative judgement by applying the co-regulation and evaluative judgement model in a Japanese elementary school. We confirmed that evaluative judgements are shared with students through co-regulation and that both evaluative judgements and learning outcomes are enhanced. The results support the co-regulation and evaluative judgement model and partially reveal the process of the development of evaluative judgement. Evaluative judgements are expected to expand the effectiveness of formative assessment and self-regulation.
Pub Date: 2023-03-04  DOI: 10.1080/0969594X.2023.2202836
Pre-service and in-service assessment training: impacts on elementary teachers’ self-efficacy, attitudes, and data-driven decision making practice
Natalie Schelling, L. Rubenstein
Assessment in Education: Principles, Policy & Practice, pp. 177–202

ABSTRACT The purpose of this mixed-methods study is to examine the outcomes of assessment training for elementary teachers. Only 56% of the surveyed teachers (n = 283) had assessment training in their university courses, compared to 84% who received in-service training. The quantitative results indicate that the frequency of assessment training is positively related to assessment self-efficacy, attitudes about assessment, and data-driven decision making practices. Within the qualitative data, teachers (n = 9) explained the conflicts within assessment training: idealism v. realism; pressure v. support; and technical competence v. transferrable understandings. This study demonstrates the importance of assessment training while providing several recommendations for enhancing its efficacy.
Pub Date: 2023-01-02  DOI: 10.1080/0969594X.2023.2182737
Data literacy assessments: a systematic literature review
Yingqi Cui, Fu Chen, A. Lutsyk, Jacqueline P. Leighton, M. Cutumisu
Assessment in Education: Principles, Policy & Practice, pp. 76–96

ABSTRACT With the exponential increase in the volume of data available in the 21st century, data literacy skills have become vitally important in workplaces and everyday life. This paper provides a systematic review of available data literacy assessments targeted at different audiences and educational levels. The results can help researchers and practitioners better understand the current state of data literacy assessments in terms of issues related to (1) educational levels and audiences; (2) data literacy definitions and competencies; (3) assessment types and item formats; and (4) reliability and validity evidence. The review led us to conclude that teaching and assessing data literacy is still an emerging field in education. High-quality assessment tools are therefore greatly needed to provide valuable insights for students and instructors, to monitor progress, and to facilitate and support teaching and learning.
Pub Date: 2023-01-02  DOI: 10.1080/0969594X.2023.2166461
Principals’ implementation of teacher evaluation and its relationship to intended purpose, perceived benefits, training and background variables
Barbara Fresko, Irit Levy-Feldman
Assessment in Education: Principles, Policy & Practice, pp. 18–32

ABSTRACT Teacher evaluation has evolved from a task used for administrative decisions to an activity whose main goal is the enhancement of student learning and well-being through the improvement of instruction. The actual implementation of teacher evaluation by school principals will greatly determine the extent to which it can achieve this goal. We examined how principals’ implementation of teacher evaluation was related to their reasons for evaluating (formative or summative), their perceptions of its benefits, their preparation for the evaluator role, and several background variables. Data were gathered by questionnaire from 219 school principals in Israel. Findings indicated that evaluating for improvement rather than for administrative reasons, believing teacher evaluation to benefit school functioning, and feeling adequately trained for the task significantly predicted fuller implementation of the teacher evaluation model. Implications of the findings for preparing and supporting school principals in their role as evaluators are discussed.
Pub Date: 2023-01-02  DOI: 10.1080/0969594X.2023.2194706
Feedback practices and transparency in data analysis
Therese N. Hopfenbeck
Assessment in Education: Principles, Policy & Practice, pp. 1–3

It has been well documented in the literature that feedback processes, when timely and of high quality, can enhance students’ learning (Hattie & Timperley, 2007; Van Der Kleij & Lipnevich, 2021). Unfortunately, despite decades of research on feedback and formative assessment processes, we have few empirical studies investigating such feedback processes. We lack knowledge of how students act upon the feedback they receive, and even fewer studies apply experimental designs. The first article in this issue offers an important exception. Lipnevich et al. (2023) examined the influence of feedback comments and praise on student motivation and whether they had any impact on performance. A total of 147 university students wrote an essay draft and received feedback (detailed comments; detailed comments and praise; or control) before revising their essays to address the feedback they had received. The study confirmed previous findings, documenting that students who received detailed feedback comments demonstrated higher motivation than students in the control group, as well as greater improvement in their academic work. Further, students who received praise in addition to detailed comments reported lower motivation and reduced improvement compared to students who received detailed comments alone. The research team discuss the paradoxical effects of praise and provide recommendations on how to handle praise wisely in higher education.

The second article, by Fresko and Levy-Feldman (2023), addresses teacher evaluation, an area which continues to be controversial across countries. In the current study, the researchers collected data from 219 school principals in Israel to investigate the purposes for which teacher evaluations are used. Analysis of the data indicated that teacher evaluations were mainly used for improvement rather than for administrative reasons. Further, principals believe that adequate training for the task improves the processes through which teacher evaluation can benefit schools. The research team discuss the implications of the findings and how to better support school principals in their role as evaluators.

The third article in this issue tackles a controversial issue concerning sampling in the OECD’s international assessment study, the Programme for International Student Assessment (PISA). Andersson and Sandgren Massih (2023) used data from PISA 2018 to investigate whether the student exclusions from PISA 2018 in Sweden followed the criteria set by the OECD. Since the inception of PISA in 2000, each participating country has had to follow regulations on which students may be excluded (OECD, 2019a, 2019b), and each country must report its student exclusion rate. Some countries have reported higher exclusion rates than others. Using both qualitative and quantitative analyses, the authors investigated what happened in Sweden when data were collected in 2018, and they conclude that the Swedish exclusion rate did not follow the OECD criteria.
Pub Date: 2023-01-02  DOI: 10.1080/0969594X.2023.2179956
Anchored in praise? Potential manifestation of the anchoring bias in feedback reception
A. Lipnevich, F. J. Eßer, M. Park, N. Winstone
Assessment in Education: Principles, Policy & Practice, pp. 4–17

ABSTRACT Although feedback is one of the most important instructional techniques, strong empirical research on receiving feedback is scarce in comparison to research on feedback provision. In this experimental study, we examined the influence of detailed comments and praise on student motivation and change in performance. A total of 147 university students wrote an essay draft, received feedback (detailed comments; detailed comments and praise; or control) and revised their essay based on the feedback. First, we found that students who received detailed comments showed higher motivation and greater improvement compared to their counterparts in the control group. Second, we showed that students who received praise in addition to detailed comments demonstrated lower motivation and reduced improvement compared to students who did not receive praise. These paradoxical effects of praise in higher education are explained in the context of the anchoring bias, suggesting that praise should be used wisely.
Pub Date: 2023-01-02  DOI: 10.1080/0969594X.2023.2189566
PISA 2018: did Sweden exclude students according to the rules?
C. Andersson, Sofia Sandgren Massih
Assessment in Education: Principles, Policy & Practice, pp. 33–52

ABSTRACT This study assesses whether the student exclusions from PISA 2018 in Sweden followed the criteria set by the OECD, using both qualitative and quantitative methods. Our conclusion is that the exclusions made in PISA 2018 in Sweden did not follow the OECD criteria and were much too high. Furthermore, interviews with school coordinators indicate that many of them misunderstood the OECD criteria. We also conclude that the National Agency for Education did not sufficiently follow up on exclusions, and that a review of the Swedish exclusion rate conducted by the OECD accepted the results without presenting credible evidence. A recalculation of PISA 2018 scores for Sweden, in which we assume non-participating students to be low performers, shows that the results are significantly affected.
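The recalculation described in the abstract above (assuming non-participating students to be low performers) amounts to re-averaging after imputing low scores for the excluded group. A back-of-envelope sketch with invented numbers follows; the sample sizes, exclusion rate, and score distributions are assumptions for illustration, not Sweden's actual figures or the authors' procedure.

```python
import numpy as np

# Toy sketch of the low-performer imputation idea -- all numbers invented.
rng = np.random.default_rng(1)
tested = rng.normal(500, 100, size=4500)       # scores of assessed students
excluded_low = rng.normal(400, 80, size=500)   # assumed scores for excluded
                                               # students (10% exclusion rate)

official_mean = tested.mean()                  # mean ignoring exclusions
adjusted_mean = np.concatenate([tested, excluded_low]).mean()
drop = official_mean - adjusted_mean
print(round(drop, 1))  # with these assumptions, roughly ten score points
```

The size of the drop scales with both the exclusion rate and the gap between the assessed and imputed means, which is why high exclusion rates can materially shift a country's reported results.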
Pub Date: 2023-01-02  DOI: 10.1080/0969594X.2023.2189565
Teacher-made tests: why they matter and a framework for analysing mathematics exams
Sarah Wellberg
Assessment in Education: Principles, Policy & Practice, pp. 53–75

ABSTRACT Classroom assessment research in the United States has shifted away from the examination of teacher-made tests, but such tests are still widely used and have an enormous impact on students’ educational experiences. Given the major shifts in educational policy in the United States, including the widespread adoption of the Common Core State Standards, I argue that researchers should examine the tests and quizzes that teachers create and administer in order to determine whether those policies have had the intended impact on teachers’ assessment practices. Furthermore, these investigations should be grounded in discipline-specific conventions for developing and demonstrating knowledge. I then propose a research-based framework for analysing mathematics exams that focuses on alignment with learning goals, cognitive complexity, variety of task formats, attentiveness to culture and language, and clarity of expectations. This framework is meant to be used formatively, helping researchers, administrators, and teachers identify strengths and areas for growth.
Pub Date : 2022-11-02DOI: 10.1080/0969594X.2022.2178602
Therese N. Hopfenbeck
As the global education community adapts to life in a post-pandemic world, controversies in educational assessment continue to challenge researchers across countries and regions. Some of these controversies are linked to inequalities in education systems and to the fact that students worldwide do not have access to the same resources, which continues to affect them unfairly in how they are assessed. Perhaps the most dramatic development in this respect is countries that continue to deny girls an education, with Afghanistan as a recent example. It demonstrates how important it is to work even harder to reach the UN Sustainable Development Goals, with their aspiration for a world of peace, prosperity, and dignity in which girls and women can live free from discrimination, take an active part in education, and sit the exams that open the way to higher education and careers. One of the OECD’s ambitions is to provide evidence-based knowledge to policy makers about their education systems and to enhance equality for all students through large-scale assessment studies such as PISA. Such an ambition depends on trust in the assessment itself and demands transparency in how concepts are measured and reported.
In the first paper of this issue, Zieger et al. (2022) discuss the so-called ‘conditioning model’, which is part of the OECD’s Programme for International Student Assessment (PISA). The aim of the paper is to examine this practice, the use of the model, and its impact on PISA results. PISA is widely used and cited globally after eight cycles of data collection in almost 100 countries, all within the first quarter of this century (Jerrim, 2023). Despite PISA’s prominence as the world’s largest and best-known comparative international education study, how student background variables are used when deriving students’ achievement scores is far less well known. More specifically, Zieger et al. (this issue) demonstrate that the conditioning model is sensitive to which background variables are included. In fact, changes to how background variables are used lead to changes in countries’ rankings and in how countries are compared in PISA. This was particularly the case for variables related to socioeconomic background, the measures used to gauge inequality in education. The authors understandably suggest that this issue needs to be addressed further, both within and outside the OECD, and that comparisons based on certain measures must be treated with caution. Debates around PISA and other international large-scale studies are not new, and the calculation of scores and rankings has been contested since the introduction of these studies (Goldstein, 2004). Nevertheless, the call for more openness about the use of different models and their impact on rankings must be addressed, as such studies depend on the public’s trust.
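The sensitivity Zieger et al. describe can be illustrated with a deliberately simplified sketch. The snippet below is not the OECD's implementation; it is a toy latent-regression example with invented numbers, in which "plausible values" are formed by shrinking noisy test scores toward the prediction of a linear conditioning model. Whether a socioeconomic covariate is included in that model visibly changes the estimated gap between two hypothetical countries.

```python
# Toy illustration (NOT the OECD conditioning model) of why plausible-value
# estimates can be sensitive to which background variables are conditioned on.
# All distributions and coefficients here are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two hypothetical countries with different SES distributions.
country = rng.integers(0, 2, n)
ses = rng.normal(loc=np.where(country == 1, 0.5, -0.5), scale=1.0)

# True latent ability depends partly on SES; the observed test score
# adds further measurement noise.
ability = 0.6 * ses + rng.normal(0.0, 1.0, n)
score = ability + rng.normal(0.0, 1.0, n)

def plausible_values(covariates):
    """Shrink each noisy score halfway toward the prediction of a
    linear conditioning model built from the given covariates."""
    X = np.column_stack([np.ones(n)] + covariates)
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    return 0.5 * score + 0.5 * (X @ beta)

pv_with_ses = plausible_values([ses])   # SES included in conditioning
pv_without = plausible_values([])       # intercept-only conditioning

gap_with = pv_with_ses[country == 1].mean() - pv_with_ses[country == 0].mean()
gap_without = pv_without[country == 1].mean() - pv_without[country == 0].mean()
print(f"country gap, SES conditioned on: {gap_with:.3f}")
print(f"country gap, SES omitted:        {gap_without:.3f}")
```

Because the intercept-only model pulls every student toward the same grand mean, omitting SES shrinks the between-country gap; including it preserves the SES-driven component. The real PISA machinery is far more elaborate, but the direction of the effect mirrors the paper's point: the choice of conditioning variables is not neutral with respect to country comparisons.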
{"title":"Current controversies in educational assessment","authors":"Therese N. Hopfenbeck","doi":"10.1080/0969594X.2022.2178602","DOIUrl":"https://doi.org/10.1080/0969594X.2022.2178602","url":null,"abstract":"As the global education community is adapting to life in a post-pandemic world, controversies in educational assessment continue to challenge researchers across countries and regions. Some of the controversies in educational assessment are linked to inequalities in the education system, and the fact that students do not have access to the same resources globally, which continues to impact them unfairly with respect to how they are assessed. Perhaps the most dramatic development in this respect is countries which continue to deny girls education, with Afghanistan as a recent example. It demonstrates how important it is to work even harder to reach the UN sustainable development goals, with aspiration for a world of peace, prosperity, and dignity where girls and women can live free from discrimination, and actively take part in education and sit exams for future higher education and careers. One of OECD’s ambitions is to provide evidence-based knowledge to policy makers about their education systems and to enhance equality for all students through their large-scale assessment studies such as PISA. Such ambition is thus dependent upon trust in the actual assessment and demands transparency in how concepts are measured and reported. In the first paper of this issue, Zieger et al. (2022) discusses the so-called ‘conditioning model’, which is part of the OECD’s Programme for International Student Assessment (PISA). The aim of the paper is to discuss this practice and use of the model, and what impact it has on the PISA results. PISA is widely used and cited globally after eight cycles of data collection in almost 100 countries, just during the first quarter of the century (Jerrim, 2023). 
Despite this prominence as the world’s largest and most known comparative international education study, the knowledge around how student background variables are used when deriving students’ achievement scores are less known. More specifically, in their paper, Zieger et al. (this issue) demonstrate that the conditioning model is sensitive to which background variables are included. In fact, changes to how background variables are used lead to changes in the ranking of countries and how they are compared in PISA. This was particularly the case with the variables around socioeconomic background, measures used to measure inequality on education. The authors understandably suggest this issue needs to be further addressed, both within and outside OECD, and results around comparisons of certain measures must be treated with caution. Debates around PISA and other international large-scale studies are not new, and controversial topics around calculations of scores and rankings have been an ongoing debate since the introduction of these studies (Goldstein, 2004). Nevertheless, the call for more openness around the use of different models and the impact it has on the rankings must be addressed, as such studies are dependent upon the public’s trust. ASSESSMENT IN EDUCATION: PRIN","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"57 1","pages":"629 - 631"},"PeriodicalIF":3.2,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80916475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}