Samuel P. León, Ernesto Panadero, Inmaculada García-Martínez
{"title":"我们的学生有多准确?自评评分准确性的元分析系统评价","authors":"Samuel P. León, Ernesto Panadero, Inmaculada García-Martínez","doi":"10.1007/s10648-023-09819-0","DOIUrl":null,"url":null,"abstract":"<p>Developing the ability to self-assess is a crucial skill for students, as it impacts their academic performance and learning strategies, amongst other areas. Most existing research in this field has concentrated on the exploration of the students’ capacity to accurately assign a score to their work that closely mirrors an expert’s evaluation, typically a teacher’s. Though this process is commonly referred to as self-assessment, a more precise term would be self-assessment scoring accuracy. Our aim is to review what is the average accuracy and what moderators might influence this accuracy. Following PRISMA recommendations, we reviewed 160 articles, including data from 29,352 participants. We analysed 9 factors as possible moderators: (1) assessment criteria; (2) use of rubric; (3) self-assessment experience; (4) feedback; (5) content knowledge; (6) incentive; (7) formative assessment; (8) field of knowledge; and (9) educational level. The results showed an overall effect of students’ overestimation (<i>g</i> = 0.206) with an average relationship of <i>z</i> = 0.472 between students’ estimation and the expert’s measure. The overestimation diminishes when students receive feedback, possess greater self-assessment experience and content knowledge, when the assessment does not have formative purposes, and in younger students (primary and secondary education). Importantly, the studies analysed exhibited significant heterogeneity and lacked crucial methodological information. </p>","PeriodicalId":48344,"journal":{"name":"Educational Psychology Review","volume":"73 20","pages":""},"PeriodicalIF":10.1000,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"How Accurate Are Our Students? A Meta-analytic Systematic Review on Self-assessment Scoring Accuracy\",\"authors\":\"Samuel P. León, Ernesto Panadero, Inmaculada García-Martínez\",\"doi\":\"10.1007/s10648-023-09819-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Developing the ability to self-assess is a crucial skill for students, as it impacts their academic performance and learning strategies, amongst other areas. Most existing research in this field has concentrated on the exploration of the students’ capacity to accurately assign a score to their work that closely mirrors an expert’s evaluation, typically a teacher’s. Though this process is commonly referred to as self-assessment, a more precise term would be self-assessment scoring accuracy. Our aim is to review what is the average accuracy and what moderators might influence this accuracy. Following PRISMA recommendations, we reviewed 160 articles, including data from 29,352 participants. We analysed 9 factors as possible moderators: (1) assessment criteria; (2) use of rubric; (3) self-assessment experience; (4) feedback; (5) content knowledge; (6) incentive; (7) formative assessment; (8) field of knowledge; and (9) educational level. The results showed an overall effect of students’ overestimation (<i>g</i> = 0.206) with an average relationship of <i>z</i> = 0.472 between students’ estimation and the expert’s measure. 
The overestimation diminishes when students receive feedback, possess greater self-assessment experience and content knowledge, when the assessment does not have formative purposes, and in younger students (primary and secondary education). Importantly, the studies analysed exhibited significant heterogeneity and lacked crucial methodological information. </p>\",\"PeriodicalId\":48344,\"journal\":{\"name\":\"Educational Psychology Review\",\"volume\":\"73 20\",\"pages\":\"\"},\"PeriodicalIF\":10.1000,\"publicationDate\":\"2023-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Educational Psychology Review\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1007/s10648-023-09819-0\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EDUCATIONAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Educational Psychology Review","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1007/s10648-023-09819-0","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EDUCATIONAL","Score":null,"Total":0}
How Accurate Are Our Students? A Meta-analytic Systematic Review on Self-assessment Scoring Accuracy
Developing the ability to self-assess is a crucial skill for students, as it impacts their academic performance and learning strategies, amongst other areas. Most existing research in this field has concentrated on students’ capacity to assign their work a score that closely mirrors an expert’s evaluation, typically a teacher’s. Though this process is commonly referred to as self-assessment, a more precise term would be self-assessment scoring accuracy. Our aim is to review the average accuracy and the moderators that might influence it. Following PRISMA recommendations, we reviewed 160 articles, including data from 29,352 participants. We analysed nine factors as possible moderators: (1) assessment criteria; (2) use of rubric; (3) self-assessment experience; (4) feedback; (5) content knowledge; (6) incentive; (7) formative assessment; (8) field of knowledge; and (9) educational level. The results showed an overall effect of student overestimation (g = 0.206), with an average relationship of z = 0.472 between students’ estimations and the expert’s measure. The overestimation diminishes when students receive feedback, possess greater self-assessment experience and content knowledge, when the assessment does not have formative purposes, and in younger students (primary and secondary education). Importantly, the studies analysed exhibited significant heterogeneity and lacked crucial methodological information.
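To make the reported statistics concrete, the sketch below illustrates how the two effect sizes mentioned in the abstract, Hedges' g (the over-/underestimation of students' scores relative to an expert's) and Fisher's z (the transformed correlation between student and expert scores), are commonly computed in this kind of meta-analysis. The example data, variable names, and the independent-samples formulation are illustrative assumptions, not the authors' actual procedure; many primary studies use paired designs instead.

```python
# Illustrative only: hypothetical scores, not data from the article.
import numpy as np

student_scores = np.array([7.5, 8.0, 6.0, 9.0, 7.0])   # self-assigned scores
expert_scores = np.array([7.0, 7.5, 6.5, 8.0, 6.5])    # teacher/expert scores

# Standardised mean difference (Hedges' g): positive values mean students,
# on average, rate their work higher than the expert does (overestimation).
n1, n2 = len(student_scores), len(expert_scores)
pooled_sd = np.sqrt(((n1 - 1) * student_scores.var(ddof=1) +
                     (n2 - 1) * expert_scores.var(ddof=1)) / (n1 + n2 - 2))
d = (student_scores.mean() - expert_scores.mean()) / pooled_sd
g = d * (1 - 3 / (4 * (n1 + n2) - 9))   # small-sample correction of Cohen's d

# Correlation between student and expert scores, Fisher z-transformed so
# that effects can be averaged across studies.
r = np.corrcoef(student_scores, expert_scores)[0, 1]
z = np.arctanh(r)   # Fisher's z

print(f"Hedges' g = {g:.3f}, Fisher's z = {z:.3f}")
```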
Journal introduction:
Educational Psychology Review aims to disseminate knowledge and promote dialogue within the field of educational psychology. It serves as a platform for the publication of various types of articles, including peer-reviewed integrative reviews, special thematic issues, reflections on previous research or new research directions, interviews, and research-based advice for practitioners. The journal caters to a diverse readership, ranging from generalists in educational psychology to experts in specific areas of the discipline. Its content offers comprehensive coverage of topics and provides in-depth information to meet the needs of both specialised researchers and practitioners.