Using Adaptive Comparative Judgment in Writing Assessment: An Investigation of Reliability Among Interdisciplinary Evaluators

Sweta Baniya, N. Mentzer, S. Bartholomew, Amelia Chesley, Cameron Moon, Derek Sherman

DOI: 10.21061/jots.v45i1.a.3
Abstract
Adaptive Comparative Judgment (ACJ) is an assessment method that facilitates holistic, flexible judgments of student work in place of more quantitative or rubric-based methods. This method “requires little training, and has proved very popular with assessors and teachers in several subjects, and in several countries” (Pollitt, 2012, p. 281). This research explores ACJ as a holistic, flexible, interdisciplinary assessment and research tool in the context of an integrated program that combines Design, English Composition, and Communications courses. All technology students at the Polytechnic Institute at Purdue University are required to take each of these three core courses. Considering the interdisciplinary nature of the program’s curriculum, this research first explored whether three judges from differing backgrounds could reach an acceptable level of reliability in assessment using only ACJ, without the prerequisites of similar disciplinary backgrounds or significant assessment experience, and without extensive negotiation or other calibration efforts. After establishing acceptable reliability among interdisciplinary judges, analysis was also conducted to investigate differences in student learning between integrated (i.e., interdisciplinary) and non-integrated learning environments. These results suggest that evaluators from various backgrounds can establish acceptable levels of reliability using ACJ as an alternative assessment tool to more traditional measures of student learning. This research also suggests that technology students in the integrated/interdisciplinary environment may have demonstrated higher learning gains than their peers, and that further research should control for student differences to add confidence to these findings.
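The abstract treats ACJ as a given, so for readers unfamiliar with the method, the following is a minimal sketch (in Python) of how a set of pairwise "this piece of work is better than that one" judgments can be turned into a rank order using a Bradley-Terry style model, the kind of pairwise model commonly associated with ACJ. The essay names, judgments, and fitting settings are invented for illustration and are not drawn from the study; this is not the software the authors used.

# Minimal sketch, not the authors' implementation: turning pairwise
# judgments into a rank order with a Bradley-Terry style model.
# All item names and judgments below are hypothetical.

import math
from collections import defaultdict

# Each tuple records one judgment: (winner, loser).
judgments = [
    ("essay_A", "essay_B"),
    ("essay_A", "essay_C"),
    ("essay_B", "essay_C"),
    ("essay_C", "essay_D"),
    ("essay_B", "essay_D"),
]

items = sorted({i for pair in judgments for i in pair})
theta = {i: 0.0 for i in items}  # latent quality parameter per item

def win_probability(a, b):
    """Bradley-Terry probability that item a is judged better than item b."""
    return 1.0 / (1.0 + math.exp(theta[b] - theta[a]))

# Simple gradient ascent on the log-likelihood of the observed judgments.
learning_rate = 0.1
for _ in range(500):
    grad = defaultdict(float)
    for winner, loser in judgments:
        p = win_probability(winner, loser)
        grad[winner] += 1.0 - p
        grad[loser] -= 1.0 - p
    for i in items:
        theta[i] += learning_rate * grad[i]

# Higher theta means the item was judged better overall.
for item, score in sorted(theta.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {score:+.2f}")

In practice, ACJ systems also choose the next pair adaptively (typically pairing items the model is least certain about) and report a reliability coefficient computed from the fitted parameters; that coefficient is the kind of inter-judge reliability the study investigates.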