Evaluation of Programming Skills via Peer Assessment and IRT Estimation Techniques

M. Nakayama, F. Sciarrone, M. Temperini, Masaki Uto

2022 20th International Conference on Information Technology Based Higher Education and Training (ITHET), 7 November 2022
DOI: 10.1109/ithet56107.2022.10031766

Abstract
In any study program where Computer Science is taught, whether as a supporting subject or as the main one, analysing students' programming skills is a complex and crucial endeavour. Peer assessment (PA) exposes students (peers) to a highly effective educational methodology, spurs competence, and supports the evaluation of skills across a wide range of subjects, including Computer Science and programming. An important feature is that the data gathered in PA sessions can be used to model the students and to support automated grading. In this paper we analyse data from experiments comprising several PA sessions, in which students had to produce programs and then evaluate their peers' programs. The main aim is to investigate how methods of Item Response Theory (IRT) can be applied within the PA framework to model the students effectively. The results seem encouraging, suggesting that more traditional automated grading techniques could be enriched by IRT methods.
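To make the general idea of applying IRT to peer-assessment data concrete, the following is a minimal sketch, not the authors' implementation: it assumes each peer rating has been dichotomised (approve / not approve) and fits a Rasch-style model in which the probability that rater j approves ratee i's program is sigmoid(theta_i - b_j), with theta_i the ratee's latent programming ability and b_j the rater's severity. The paper's actual IRT models may be richer (e.g. polytomous or rater-effect models); all function names and data below are illustrative.

```python
# Minimal illustrative sketch: joint maximum-likelihood estimation of ratee
# abilities (theta) and rater severities (b) from a dichotomised peer-rating
# matrix, using plain gradient ascent. Hypothetical example, not the paper's code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_rasch_pa(ratings, n_iter=2000, lr=0.05):
    """ratings: (n, n) array; entry [i, j] is rater j's 0/1 judgement of
    ratee i's program, or np.nan if that peer rating was not given."""
    n = ratings.shape[0]
    theta = np.zeros(n)            # latent ability of each ratee
    b = np.zeros(n)                # severity of each rater
    mask = ~np.isnan(ratings)      # which peer ratings exist
    x = np.nan_to_num(ratings)
    for _ in range(n_iter):
        p = sigmoid(theta[:, None] - b[None, :])   # model probabilities P(x_ij = 1)
        resid = (x - p) * mask                     # log-likelihood gradient terms
        theta += lr * resid.sum(axis=1)            # update ratee abilities
        b -= lr * resid.sum(axis=0)                # update rater severities
        theta -= theta.mean()                      # fix the scale (identifiability)
    return theta, b

# Toy peer-assessment matrix: 4 students, missing ratings marked with np.nan.
ratings = np.array([
    [np.nan, 1.0,    1.0,    0.0],
    [1.0,    np.nan, 0.0,    0.0],
    [1.0,    1.0,    np.nan, 1.0],
    [0.0,    0.0,    1.0,    np.nan],
])
theta, b = fit_rasch_pa(ratings)
print("estimated abilities:", np.round(theta, 2))
print("estimated rater severities:", np.round(b, 2))
```

The estimated theta values can then serve as model-based grades that discount rater severity, which is the kind of enrichment of automated grading that the abstract alludes to.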