Same yardstick, different results: efficacy of rubrics in science education assessment

Erasmos Charamba, Nkhululeko Dlamini-Nxumalo
{"title":"Same yardstick, different results: efficacy of rubrics in science education assessment","authors":"Erasmos Charamba, Nkhululeko Dlamini-Nxumalo","doi":"10.21303/2504-5571.2022.002455","DOIUrl":null,"url":null,"abstract":"Assessments have become integral to today's teaching and learning. Within the world of assessments, there are two paramount ideologies at work: assessments for learning and assessments of learning. The latter are typically administered at the end of a unit or grading period and evaluate a student’s understanding by comparing their achievement against a class, nationwide benchmark, or standard. The former assesses a student’s understanding of a skill or lesson during the learning and teaching process. Assessment for learning enables teachers to collect data that will help them adjust their teaching strategies, and students to adjust their learning strategies. In order to achieve this goal, teachers can make use of several assessment tools, such as concept maps, oral presentations, peer review, portfolios, examinations, written reports, and rubrics. The use of rubrics not only makes the teacher’s standards and result grading explicit but can give students a clear sense of what the expectations are for a high level of performance on a given science assignment. In this study, quantitative data were collected from tasks, assessed by 10 teachers who were purposefully sampled; while qualitative data were collected from interview responses of the same teachers to explore the extent of uniformity in the use of rubrics. The researchers compared and analyzed the different scores, allocated by the respective participants, and analyzed the qualitative data using qualitative data analysis. The study suggests that if interpreted and used well, rubrics support learning by enabling an efficient, consistent, objective, and quick way of assessing students’ work thereby facilitating learning.","PeriodicalId":33606,"journal":{"name":"EUREKA Social and Humanities","volume":"8 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"EUREKA Social and Humanities","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21303/2504-5571.2022.002455","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Assessments have become integral to today's teaching and learning. Within the world of assessments, two paramount ideologies are at work: assessment for learning and assessment of learning. The latter is typically administered at the end of a unit or grading period and evaluates a student's understanding by comparing their achievement against a class average, a nationwide benchmark, or a standard. The former assesses a student's understanding of a skill or lesson during the learning and teaching process. Assessment for learning enables teachers to collect data that help them adjust their teaching strategies, and students to adjust their learning strategies. To achieve this goal, teachers can make use of several assessment tools, such as concept maps, oral presentations, peer review, portfolios, examinations, written reports, and rubrics. The use of rubrics not only makes the teacher's standards and grading explicit but also gives students a clear sense of what is expected for a high level of performance on a given science assignment. In this study, quantitative data were collected from tasks assessed by 10 purposively sampled teachers, while qualitative data were collected from interviews with the same teachers to explore the extent of uniformity in their use of rubrics. The researchers compared and analyzed the scores allocated by the respective participants and analyzed the interview responses qualitatively. The study suggests that, if interpreted and used well, rubrics support learning by providing an efficient, consistent, objective, and quick way of assessing students' work.
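The abstract does not specify the statistics behind the score comparison, but one minimal way to gauge how uniformly ten markers apply the same rubric is to examine the spread of their scores on each task. The sketch below uses entirely hypothetical scores and computes the mean, standard deviation, and coefficient of variation per task; a low coefficient of variation would indicate close agreement among markers. This is an illustrative assumption, not the authors' reported method.

```python
# Illustrative sketch only: the abstract does not describe the actual analysis,
# and all task names and scores below are hypothetical.
from statistics import mean, stdev

# Hypothetical rubric scores (out of 20) that ten teachers might assign
# to the same three learner tasks.
scores_by_task = {
    "task_1": [14, 15, 13, 16, 14, 12, 15, 14, 13, 15],
    "task_2": [9, 12, 8, 11, 10, 13, 9, 10, 12, 11],
    "task_3": [18, 17, 18, 19, 16, 18, 17, 18, 19, 17],
}

for task, scores in scores_by_task.items():
    m = mean(scores)
    sd = stdev(scores)
    cv = sd / m  # coefficient of variation: relative spread across raters
    print(f"{task}: mean={m:.1f}, sd={sd:.2f}, cv={cv:.2f}")
```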