Insights from Peer Reviewing in Large University Courses
Naemi Luckner, Peter Purgathofer
DOI: 10.1145/3507923.3507955
Proceedings of the 10th Computer Science Education Research Conference
Published: 2021-11-22
Citations: 0
Abstract
Teaching a mandatory course for undergraduate computer science students with up to 750 students per semester, we have been making extensive use of peer reviewing. During the semester, each student has to work on a set of assignments. After finishing an assignment, the student has to write three peer reviews for three pieces of work by different, anonymous peers. One of the problems in the use of (student) peer reviewing in large university courses is the quality of written reviews. To address this problem, we devised various provisions to maintain or increase reviewing quality. In this article, we describe one of these provisions, namely the use of three different types of reviews instead of the same review type three times in a row: guided reviews, open reviews, and checkbox reviews. Our aim in this article is to research the impact of these different review types on the students' experience and acceptance of the reviewing process, in order to inform the design of a reviewing process that better fits the students' needs. To gain such insights, we gathered feedback using a survey, which was completed by 101 students. Using qualitative analysis, we identified room for improvement and discuss possible changes to our current peer review system and process. Our findings offer insights into the types of reviews students prefer, and hint at some advantages and pitfalls of peer reviewing that can have a substantial impact on the design and application of such a system in large university courses.