Developing & Implementing a System of Rubrics for Assessing Interaction Design Students

E. Oliver, Daniel Hatch
{"title":"Developing & Implementing a System of Rubrics for Assessing Interaction Design Students","authors":"E. Oliver, Daniel Hatch","doi":"10.1109/ietc54973.2022.9796983","DOIUrl":null,"url":null,"abstract":"This paper is the continuation and conclusion of a study evaluating students’ creative works by full-time and adjunct faculty (previous study). The study addressed an interaction design program that previously lacked a uniform method for grading student assignments across courses. The first two phases of the study identified and validated issues that encompassed unclear expectations for completing and grading student assignments. Instructors also reported that significant time was required to grade student assignments. An intervention introduced a uniform system of rubrics designed to assess students in first-year interaction design courses. The rubrics integrated course competencies and program outcomes as the assessment criteria. Rubrics provided students and instructors a framework to determine the purpose and requirements of a specific assignment. The data collected included student assignment scores and the participants’ experiences through pre-and post-study surveys and interviews. The data helped answer if uniform grading rubrics can improve student performance based on scores, reduce the time spent grading by instructors, and improve inter-rater reliability amongst faculty members. Students and instructors who used the rubrics noted increased instructor feedback and overall academic achievement. The data revealed a degree of inter-rater reliability across two courses using the rubrics. 
This study increased the probability that using the same grading system would result in similar assignment scores across different instructors.","PeriodicalId":251518,"journal":{"name":"2022 Intermountain Engineering, Technology and Computing (IETC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Intermountain Engineering, Technology and Computing (IETC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ietc54973.2022.9796983","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

This paper is the continuation and conclusion of a previous study in which full-time and adjunct faculty evaluated students’ creative works. The study addressed an interaction design program that previously lacked a uniform method for grading student assignments across courses. The first two phases of the study identified and validated issues that encompassed unclear expectations for completing and grading student assignments. Instructors also reported that significant time was required to grade student assignments. An intervention introduced a uniform system of rubrics designed to assess students in first-year interaction design courses. The rubrics integrated course competencies and program outcomes as the assessment criteria. Rubrics provided students and instructors with a framework to determine the purpose and requirements of a specific assignment. The data collected included student assignment scores and the participants’ experiences, gathered through pre- and post-study surveys and interviews. The data helped answer whether uniform grading rubrics can improve student performance based on scores, reduce the time instructors spend grading, and improve inter-rater reliability among faculty members. Students and instructors who used the rubrics noted increased instructor feedback and overall academic achievement. The data revealed a degree of inter-rater reliability across the two courses using the rubrics. This study increased the probability that using the same grading system would result in similar assignment scores across different instructors.
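The abstract reports "a degree of inter-rater reliability" across instructors using the shared rubrics. The paper does not state which statistic the authors computed, but a common measure of agreement between two raters assigning categorical rubric scores is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch, assuming two instructors score the same set of assignments (the instructor names and scores below are hypothetical, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of assignments where the two raters agree exactly.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected if each rater assigned scores independently
    # according to their own marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores from two instructors grading the same
# ten assignments (illustrative only).
instructor_1 = ["A", "A", "B", "B", "C", "A", "B", "C", "A", "B"]
instructor_2 = ["A", "B", "B", "B", "C", "A", "B", "C", "A", "A"]

print(round(cohens_kappa(instructor_1, instructor_2), 3))  # → 0.688
```

A kappa near 1 indicates strong agreement beyond chance, near 0 indicates chance-level agreement; values around 0.6–0.8 are conventionally read as substantial agreement, which is the kind of outcome a uniform rubric system aims for.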