Personal Prof: Automatic Code Review for Java Assignments
M. Klinik, P. Koopman, Rick van der Wal. Proceedings of the 10th Computer Science Education Research Conference, 2021. DOI: 10.1145/3507923.3507930

The problem with manual code review for assignments is that students receive feedback when they are already working on the next assignment. Students have neither the chance nor the mindset to revisit their solutions. In our course on object-oriented programming, a team of teaching assistants reviews student solutions after the deadline and publishes individually tailored feedback based on a grading manual. To make the code review more effective, we implemented automatic checking of a significant part of our evaluation criteria. Students receive this automatic review instantaneously and can improve their solutions based on it. The system is not intended to eliminate manual grading. Rather, it helps students through immediate feedback, and it helps teaching assistants, who can build upon the automatically generated feedback. Our system, Personal Prof, inspects the solutions' abstract syntax tree and, more importantly, has access to a semantic database of Java-specific meta-information. This enables us to automate a significant part of the code review. We used the tool during the spring semester of the academic year 2019/2020 to check the assignments of 400 students, for a total of 3800 submissions. Students appreciate and use the automatic feedback, and the complaints about late reviews reported in previous course evaluations disappeared with Personal Prof.
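The abstract above describes review rules that run over a solution's abstract syntax tree. As a rough illustration of how such AST-based checks work, here is a minimal sketch in Python using its ast module; Personal Prof itself targets Java, and the two rules below (naming convention, builtin shadowing) are hypothetical examples, not the paper's actual criteria.

```python
import ast

# Toy student submission with two deliberate style problems.
SOURCE = """
def BadName(x):
    return x * 2

def good_name(list):
    return list
"""

def review(source: str) -> list[str]:
    """Walk the AST and emit human-readable feedback messages."""
    feedback = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # Rule 1: function names should be lowercase (snake_case).
            if not node.name.islower():
                feedback.append(
                    f"line {node.lineno}: function '{node.name}' "
                    "should use snake_case"
                )
            # Rule 2: parameters should not shadow common builtins.
            for arg in node.args.args:
                if arg.arg in {"list", "dict", "str"}:
                    feedback.append(
                        f"line {node.lineno}: parameter '{arg.arg}' "
                        "shadows a builtin"
                    )
    return feedback

for msg in review(SOURCE):
    print(msg)
```

Because the feedback is generated from the tree rather than from test output, it can point at the exact line and construct, which is the kind of message a teaching assistant could then build upon.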
What does this Python code do? An exploratory analysis of novice students' code explanations
Vivian van der Werf, Efthimia Aivaloglou, F. Hermans, M. Specht. DOI: 10.1145/3507923.3507956

Motivation. Code reading skills are important for comprehension. Explain-in-plain-English (EiPE) tasks are one type of reading exercise that shows promise in differentiating between levels of code comprehension. Code reading and explaining skills also correlate with code writing skills. Objective. This paper aims to provide insight into what novice students express in their explanations after reading a piece of code, and what these explanations can tell us about how students comprehend code. Method. We performed an exploratory analysis of four reading assignments extracted from a university-level beginners' course in Python programming. We paid specific attention to 1) the core focus of student answers, 2) elements of the code that are often included or omitted, and 3) errors and misconceptions students may present. Results. We found that students prioritize the output generated by print statements in a program, an indication that these statements may help students make sense of code. Furthermore, students appear to be selective about which elements they include in their explanations: variable assignments and input prompts were included less often, whereas control-flow elements, print statements, and function definitions were included more often. Finally, students were easily confused or distracted by lines of code that seemed to interfere with newly learned programming constructs, and domain knowledge from outside programming interfered with reading and interpreting the code, both positively and negatively. Discussion. Our results pave the way towards a better understanding of how students understand code by reading, and of how self-explanation exercises after reading may be useful, as a teaching instrument, to both teachers and students in programming education.
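To make the EiPE task format concrete, here is a hypothetical exercise of the kind the abstract describes (it is not taken from the study's materials): students read a short Python snippet and are asked to explain in plain English what it does as a whole. A relational answer would be "it counts how many words longer than four letters appear in the sentence"; a lower-level answer would narrate the loop line by line.

```python
def count_long_words(sentence):
    """Count the words in `sentence` that have more than four letters."""
    count = 0
    for word in sentence.split():
        if len(word) > 4:
            count = count + 1
    return count

# Two of the five words ("programming", "rewarding") are longer than
# four letters, so this prints 2.
sentence = "programming is fun and rewarding"
print(count_long_words(sentence))
```

Note how the snippet combines the elements the study found students treat unevenly: a variable assignment, a control-flow construct, a function definition, and a print statement producing the program's visible output.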
Portraits of Programmer Behavior in a Frame-Based Language
Joe Dillane, Ioannis Karvelas, Brett A. Becker. DOI: 10.1145/3507923.3507933

The frame-based programming language Stride has the potential to simultaneously simplify and accelerate the task of coding for novices. This is facilitated through a combination of reduced cognitive load, assistance when editing, and the elimination of certain syntax errors. Stride also offers the opportunity for comparison to Java, another programming language used by novice programmers, as Stride is integrated into the BlueJ (Java) development environment and user data is also captured in the Blackbox dataset. This paper sets out to determine whether there is evidence to support some of these claims. First, since compiler error messages are a key mechanism for user feedback, we compare lesser-studied Stride error-message data with better-understood Java data. Second, we identify groups of Stride and Java users in order to characterise their behavior and to discover differences between frame-based and more conventional text-based programming. These groups include cross-sections of random users as well as two sets of Stride and Java programmers that appear to be engaged in similar tasks. We find that the typical Stride user is primarily a Java user, and behavior patterns are similar in both languages. However, we also found a small number of Stride users whose programming time was dominated by Stride, and these users exhibit markedly different patterns for generating user-driven events. These results have implications for educators and tool designers, as well as researchers studying Stride, Java, and Blackbox.
The Industry Relevance of an IT Transition Programme
Yu-Cheng Tu, E. Tempero, Paramvir Singh, A. Meads. DOI: 10.1145/3507923.3507957

There is a worldwide shortage of qualified people in the IT industry. To address this shortage, transition programmes are being created that help people change to careers in IT. To provide useful programmes, we need to know whether the current curriculum provides value to its graduates. Moreover, as the IT industry undergoes continuous change, we need to regularly review what the industry needs and update existing programmes as appropriate. In this paper we present the results of a survey of graduates of one such programme, the PGCertInfoTech at the University of Auckland, with a view to evaluating the currency of the existing programme and gathering data on which to base decisions about updating it. Our conclusion is that our programme is largely useful to graduates, but could be improved with the addition of material on continuous integration, and with some adjustment to the time spent on testing, concurrency, and project management. Our results will be useful to other institutions that have, or are considering, IT transition programmes.
SuaCode Africa: Teaching Coding Online to Africans using Smartphones
George Boateng, Prince Steven Annor, V. Kumbol. DOI: 10.1145/3507923.3507928

There is a burgeoning trend of smartphone ownership in Africa due to the low cost of Android smartphones and the global increase in social media usage. Building upon our previous work, which introduced a smartphone-based coding course to secondary and tertiary students in Ghana via an in-person program and an online course, this work introduced Africans in 37 countries to our online smartphone-based course in 2019. Students in this 8-week course read lesson notes, submitted assignments, collaborated with peers and facilitators in an online forum, and completed open- and closed-ended surveys after the course. We performed qualitative and quantitative analyses on the data from the course. Of the 709 students who applied, 210 were officially admitted after passing the preliminary assignments, and 72% of those 210 students completed the course. Additionally, students' assignment submissions and self-reports showed an understanding of the programming concepts, with comparable performance between males and females and across educational levels. Students mentioned that the lesson notes were easy to understand and that they enjoyed the experience of writing code on their smartphones. Moreover, students adequately received help from peers and facilitators in the course forum. Lastly, a survey sent to students a year after the program showed that they had developed various applications, written online tutorials, and learned several tools and technologies. We successfully introduced coding skills to Africans using smartphones through SuaCode Africa.
AutoGrad: Automated Grading Software for Mobile Game Assignments in SuaCode Courses
Prince Steven Annor, S. Boateng, Edwin Pelpuo Kayang, George Boateng. DOI: 10.1145/3507923.3507954

Automatic grading systems have existed since the mid-twentieth century. Several systems have been developed in the literature for computer science courses using static analysis, dynamic analysis, or a hybrid of both. This paper presents AutoGrad, a novel, portable, cross-platform automatic grading system for graphical Processing programs developed on Android smartphones during an online course. AutoGrad uses Processing, a language prominent in the emerging field of interactive media arts, and pioneers the use of grading systems outside the sciences to support teaching in the arts. It also represents the first such system built and tested in an African context, in over thirty-five countries across the continent. This paper first explores the design and implementation of AutoGrad. AutoGrad uses APIs to download assignments from the course platform, performs static and dynamic analysis to evaluate the graphical output of each program, and returns a grade and feedback to the student. It then evaluates AutoGrad by analyzing data collected from the two online cohorts of 1000+ students of our SuaCode smartphone-based course. From this analysis and students' feedback, AutoGrad proved adequate for automatic assessment and feedback provision, integrates easily in both cloud and standalone settings, and reduces the time and effort required to grade the four assignments needed to complete the course.
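The hybrid pipeline described in the AutoGrad abstract (a static pass over the submission's source followed by a dynamic pass comparing graphical output against a reference) can be sketched as follows. This is a minimal illustration under stated assumptions, not AutoGrad's actual implementation: the function names, the substring-based static check, the pixel-tolerance comparison, and the scoring scheme are all hypothetical.

```python
def check_static(source: str, required_calls: list[str]) -> list[str]:
    """Static pass: report required drawing calls missing from the source."""
    return [call for call in required_calls if call not in source]

def pixel_match(rendered: list[int], reference: list[int],
                tolerance: float = 0.95) -> bool:
    """Dynamic pass: the fraction of matching pixels must meet `tolerance`."""
    matches = sum(1 for a, b in zip(rendered, reference) if a == b)
    return matches / len(reference) >= tolerance

def grade(source, rendered, reference, required_calls):
    """Combine both passes into a score and a list of feedback messages."""
    feedback = []
    for call in check_static(source, required_calls):
        feedback.append(f"missing required call: {call}")
    if not pixel_match(rendered, reference):
        feedback.append("rendered output differs from the reference")
    # Deduct a flat 25 points per issue, floored at zero.
    return max(100 - 25 * len(feedback), 0), feedback

# Toy Processing-style submission and simulated frame buffers:
# it draws an ellipse but omits the required rect() call.
submission = "void draw() { background(0); ellipse(50, 50, 20, 20); }"
score, notes = grade(
    submission,
    rendered=[0] * 95 + [1] * 5,   # 95 of 100 pixels match the reference
    reference=[0] * 100,
    required_calls=["ellipse", "rect"],
)
print(score, notes)
```

A real grader would compile and run the sketch in a sandbox and capture an actual frame buffer; the flat list of ints here simply stands in for that rendered image so the control flow of the static-plus-dynamic pipeline is visible.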