{"title":"PowerGrader: Automating Code Assessment Based on PowerShell for Programming Courses","authors":"Fei Zuo, J. Rhee, M. Park, Gang Qian","doi":"10.1109/SERA57763.2023.10197671","DOIUrl":null,"url":null,"abstract":"Programming courses in colleges often involve a myriad of coding assignments, which brings heavy grading workloads for instructors. To alleviate this problem, automatic programming evaluation tools are becoming more of a requirement than an option. However, after considering the actual requirements in our teaching practice, we have noticed that the current solutions still suffer from shortcomings and limitations. In the process of addressing the challenges, we propose and implement a brand new code assessment application based on PowerShell, which shows both extendibility and configurability. In particular, we integrate both black-box testing and the lexical analysis into the system, thus achieving a customized solution to meet specific requirements. This paper presents the architecture and design of our automatic code assessment application. Furthermore, we conduct empirical evaluations on the proposed system following the Technology Acceptance Model, and also investigate the drawbacks of manual assessment of coding assignments in terms of reliability and fairness. Finally, the evaluations demonstrate the effectiveness of our proposed auto-grader in facilitating the code assessment targeting college-level programming courses.","PeriodicalId":211080,"journal":{"name":"2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications (SERA)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications (SERA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SERA57763.2023.10197671","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Programming courses in colleges often involve a myriad of coding assignments, which imposes a heavy grading workload on instructors. To alleviate this problem, automatic programming evaluation tools are becoming more of a requirement than an option. However, after considering the actual requirements of our teaching practice, we have found that current solutions still suffer from shortcomings and limitations. To address these challenges, we propose and implement a new code assessment application based on PowerShell that offers both extensibility and configurability. In particular, we integrate both black-box testing and lexical analysis into the system, achieving a customized solution that meets specific requirements. This paper presents the architecture and design of our automatic code assessment application. Furthermore, we conduct empirical evaluations of the proposed system following the Technology Acceptance Model, and also investigate the drawbacks of manually assessing coding assignments in terms of reliability and fairness. Finally, the evaluations demonstrate the effectiveness of our proposed auto-grader in facilitating code assessment for college-level programming courses.
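To make the black-box-testing component concrete, below is a minimal PowerShell sketch of how an auto-grader might run a submission against paired input/expected-output test cases and count passes. The file names, directory layout, gcc toolchain, and scoring logic are illustrative assumptions, not the configuration described in the paper.

```powershell
# Minimal black-box test pass, assuming each submission is a standalone C
# program and each test case is a pair of files: caseN.in / caseN.out.
# Paths and the gcc toolchain are assumptions for illustration only.

$Submission = "student1\main.c"
$TestDir    = "tests"
$Exe        = "student1\main.exe"

# Compile the submission; a failed build scores zero for this pass.
gcc $Submission -o $Exe
if ($LASTEXITCODE -ne 0) {
    Write-Output "Build failed: 0 points"
    return
}

$passed = 0
$cases  = Get-ChildItem -Path $TestDir -Filter "*.in"

foreach ($case in $cases) {
    # Expected output lives next to the input file with a .out extension.
    $expectedFile = [System.IO.Path]::ChangeExtension($case.FullName, ".out")
    $expected     = (Get-Content $expectedFile -Raw).Trim()

    # Feed the test input on stdin and capture the program's stdout.
    $actual = (Get-Content $case.FullName | & $Exe) -join "`n"

    if ($actual.Trim() -eq $expected) { $passed++ }
}

Write-Output ("Passed {0} of {1} black-box test cases" -f $passed, $cases.Count)
```

A script like this can be invoked once per student directory and its pass count fed into whatever rubric the instructor configures; the paper's actual system additionally layers lexical analysis and configurable grading policies on top of such a test-execution core.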