Automated Classification of Computing Education Questions using Bloom’s Taxonomy
James Zhang, C. Wong, Nasser Giacaman, Andrew Luxton-Reilly
DOI: 10.1145/3441636.3442305
Bloom’s taxonomy is a well-known and widely used method of classifying assessment tasks. However, applying Bloom’s taxonomy in computing education is often difficult, and classifications frequently suffer from poor inter-rater reliability. Automated approaches using machine learning techniques show potential, but their performance is limited by the quality and quantity of the training set. We implement a machine learning model to classify programming questions according to Bloom’s taxonomy, using Google’s BERT as the base model and the Canterbury QuestionBank as a source of questions categorised by computing education experts. Our results show that the model predicted the categories with moderate success, but was more successful at categorising questions at the lower levels of Bloom’s taxonomy. This work demonstrates the potential for machine learning to assist teachers in the analysis of assessment items.
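As a rough illustration of the kind of classifier this paper describes, the sketch below wires up a BERT sequence classifier over the six revised Bloom levels using the Hugging Face transformers library; the checkpoint, label set, and helper names are our assumptions, not the authors’ implementation.

    # Illustrative sketch only: a BERT-based question classifier in the spirit of
    # the paper. Checkpoint, label names, and training details are assumptions.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    BLOOM_LEVELS = ["remember", "understand", "apply", "analyse", "evaluate", "create"]

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(BLOOM_LEVELS)
    )

    def classify(question: str) -> str:
        """Predict a Bloom level for one question (head is untrained until fine-tuned)."""
        inputs = tokenizer(question, return_tensors="pt", truncation=True)
        logits = model(**inputs).logits
        return BLOOM_LEVELS[int(logits.argmax(dim=-1))]

    # Fine-tuning on an expert-labelled question bank (e.g. via the Trainer API)
    # is required before classify() produces meaningful predictions.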
Reflective Debugging in Spinoza V3.0
Fatima Abu Deeb, T. Hickey
DOI: 10.1145/3441636.3442313
In this paper we present an online IDE (Spinoza 3.0) for teaching Python programming in which students are (sometimes) required to reflect, in words, on their error messages and unit test failures before being allowed to modify their code. The system was designed for large synchronous in-person, remote, or hybrid classes, for either in-class problem solving or out-of-class homework problems. For each student and problem, the system makes a random choice about whether to require reflection on all debugging steps. If the student/problem pair required reflection, then each time the student ran the program and received feedback as an error message or a set of unit test results, they were required to type in a description of the bug and a plan for how to modify the program to eliminate it. The main result is that the number of debugging steps needed to reach a correct solution was statistically significantly lower for problems where students were required to reflect on each debugging step. We suggest that future developers of pedagogical IDEs consider adding features that require students to reflect frequently during the debugging process.
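A minimal sketch of the per-student, per-problem random assignment and reflection gate described above; the function and variable names are hypothetical, not Spinoza’s actual code.

    # Illustrative sketch only: randomly fix, per (student, problem) pair, whether
    # reflection is required, then gate further editing behind a typed reflection.
    import random

    _reflection_condition = {}  # (student_id, problem_id) -> bool, fixed on first use

    def requires_reflection(student_id, problem_id):
        """Decide once, at random, whether this student must reflect on this problem."""
        key = (student_id, problem_id)
        if key not in _reflection_condition:
            _reflection_condition[key] = random.random() < 0.5  # assumed 50/50 split
        return _reflection_condition[key]

    def on_run_feedback(student_id, problem_id, feedback):
        """Called after each run that produced an error message or unit test results."""
        if requires_reflection(student_id, problem_id):
            bug = input(feedback + "\nDescribe the bug: ")
            plan = input("Describe your plan to eliminate it: ")
            return bug, plan  # collected before the student may edit the code again
        return None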
Exploring the Effects of Contextualized Problem Descriptions on Problem Solving
Juho Leinonen, Paul Denny, Jacqueline L. Whalley
DOI: 10.1145/3441636.3442302
Prior research has reported conflicting results on whether the presence of a contextualized narrative in a problem statement helps or hinders students when solving problems. On the one hand, results from psychology and mathematics suggest that contextualized problems can be easier for students. On the other, a recent ITiCSE working group exploring the “problem description effect” found no such benefits for novice programmers. In this work, we study the effects of contextualized problems on problem solving in an introductory programming course. Students were divided into three groups, and each group was given two different programming problems, involving linear equations, to solve. In the first group both problem statements used the same context, while in the second group the context was switched. The third group was given problems that were mathematically similar to those given to the other two groups, but which lacked any contextualized narrative. Contrary to earlier findings in introductory programming, our results show that context does have an effect on student performance. Interestingly, depending on the problem, context either helped or hindered students. We hypothesize that these results are explained by a lack of familiarity with the context when the context was unhelpful, and by poor mathematical skills when the context was helpful. These findings contribute to our understanding of how contextualized problem statements affect novice programmers and their problem solving.
Examining the Exams: Bloom and Database Modelling and Design
A. Imbulpitiya, Jacqueline L. Whalley, Mali Senapathi
DOI: 10.1145/3441636.3442301
This paper presents the development of an initial framework for classifying and analysing questions in database modelling and design examinations. Guidelines are provided for classifying these questions using the revised Bloom’s taxonomy of educational objectives. We report the results of applying the classification scheme to 122 questions from 19 introductory database examinations. We found that there was little variation in the topics and question styles employed, and that the degree to which design and modelling are assessed in a typical introductory undergraduate database course’s examination varies widely. We also found gaps in the intellectual complexity of the questions, with the examinations failing to provide questions at the analyse and evaluate levels of the revised Bloom’s taxonomy.
Assessing Understanding of Maintainability using Code Review
E. Tempero, Yu-Cheng Tu
DOI: 10.1145/3441636.3442303
Maintainability is an important quality attribute of code, and so should be a key learning outcome for software engineering programmes. This raises the question of how to assess this learning outcome. In this practical report we describe how we exploited the code review mechanism provided by GitHub, the “pull request”, to assess students’ understanding of maintainability. The approach requires a slightly non-standard workflow from the students and a reporting tool to assemble the code review comments in a form suitable for assessment. We detail what we learned in making this work, which should allow others to conduct similar kinds of assessment.
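For readers who want to attempt something similar, review comments on a pull request can be collected through GitHub’s REST API; the sketch below is ours, not the authors’ reporting tool, and the repository names and output format are placeholders.

    # Illustrative sketch only: gather pull-request review comments for marking.
    import requests

    def review_comments(owner, repo, pull_number, token):
        """Return (file path, line, comment body) for each review comment on a PR."""
        url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pull_number}/comments"
        resp = requests.get(url, headers={"Authorization": f"token {token}"})
        resp.raise_for_status()
        return [(c["path"], c.get("line"), c["body"]) for c in resp.json()]

    # Example use for one student submission (names are placeholders):
    # for path, line, body in review_comments("course-org", "student-repo", 1, TOKEN):
    #     print(f"{path}:{line}: {body}")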
Novice Difficulties with Analyzing the Running Time of Short Pieces of Code
Ibrahim Albluwi, Haley Zeng
DOI: 10.1145/3441636.3441855
This work attempts to understand how novices approach runtime analysis tasks, in which the number of operations performed by a given program must be determined by examining the code. Such tasks are commonly used by instructors in courses on data structures and algorithms. The focus of this work is on the difficulties novices face when approaching such tasks and on how novices differ from experts in their approach. The study involved one-on-one think-aloud interviews with five instructors of an introductory data structures and algorithms course and 14 students enrolled in that course. The interviews were analyzed using a framework, introduced in this study, for describing the steps needed to perform runtime analysis of simple pieces of code. The interviewed experts clearly differentiated between formulating a summation describing the number of operations performed by the code and solving that summation to find how many operations are done. The experts also showed fluency in looking at the code from different perspectives and sometimes rewrote the code to simplify the analysis task. In contrast, several novices made mistakes because they did not explicitly trace the code and did not explicitly describe the number of operations performed as a summation. Many of the novices also seemed inclined to analyse nested loops by multiplying two running times, even when that is incorrect. The study concludes with a discussion of the implications of these results for the teaching and assessment of runtime analysis.
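As a concrete illustration of why multiplying two running times can mislead (our example, not one from the study), compare the two nested loops below: formulating the summation first gives the correct counts, while multiplying the loop bounds overestimates the second case by a logarithmic factor.

    # Illustrative examples only: count operations explicitly rather than
    # multiplying "outer iterations x inner iterations".

    def triangular(n):
        count = 0
        for i in range(1, n + 1):   # outer loop runs n times
            for j in range(i):      # inner loop runs i times on iteration i
                count += 1
        return count                # sum_{i=1}^{n} i = n(n+1)/2, i.e. Theta(n^2)

    def doubling(n):
        count = 0
        i = 1
        while i < n:                # outer loop runs about log2(n) times
            for j in range(i):      # inner loop runs i times, with i = 1, 2, 4, ...
                count += 1
            i *= 2
        return count                # 1 + 2 + 4 + ... < 2n, i.e. Theta(n),
                                    # not the Theta(n log n) a naive product suggests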
Visual Analogy for Understanding Polymorphism Types
Nathan Mills, Allen Wang, Nasser Giacaman
DOI: 10.1145/3441636.3442304
Many visualisation tools have been designed to help students learn programming concepts, often showing a positive impact on student performance. Analogies have also often been used to teach students various programming concepts, and have similarly been shown to boost student confidence in those concepts. Less work has specifically targeted polymorphism and the misconceptions students have about it. This study presents the design of a new visualisation tool along with its supporting analogy. It aims to assist in teaching the concept of polymorphism to students and in correcting the misconceptions students hold when dealing with this concept. Experiences using the tool in a CS2 OOP course are presented, including engagement logs and student feedback. The paper concludes with findings from this experience and discusses directions for future work in this area.
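For reference, the behaviour such a tool visualises is dynamic dispatch through a supertype; the tiny example below is ours and is not taken from the tool or the course.

    # Illustrative example only: the same call site runs different methods
    # depending on the runtime type, a point novices frequently miss.
    class Shape:
        def area(self):
            raise NotImplementedError

    class Circle(Shape):
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r ** 2

    class Square(Shape):
        def __init__(self, s):
            self.s = s
        def area(self):
            return self.s ** 2

    shapes = [Circle(1.0), Square(2.0)]          # both treated as Shape
    print([shape.area() for shape in shapes])    # dispatches to Circle.area, Square.area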
The Impact of Multiple Choice Question Design on Predictions of Performance
C. Wong, Paul Denny, Andrew Luxton-Reilly, Jacqueline L. Whalley
DOI: 10.1145/3441636.3442306
Multiple choice questions (MCQs) are a popular question format in introductory programming courses as they are a convenient means of providing scalable assessment. However, with typically only four or five answer options and a single correct answer, MCQs are prone to guessing and may lead students into a false sense of confidence. One approach to mitigating this problem is the use of multiple-answer MCQs (MAMCQs), where more than one answer option may be correct. This provides a larger solution space and may help students form more accurate assessments of their knowledge. We explore the use of this question format on an exam in a very large introductory programming course. The exam consisted of both MCQ and MAMCQ sections, and students were invited to predict their scores for each section. In addition, students were asked to report their preference for question format. We found that students over-predicted their scores on the MCQ section to a greater extent, and that these prediction errors were more pronounced amongst less capable students. Interestingly, we found that students did not have a strong preference for MCQs over MAMCQs, and we recommend broader adoption of the latter format.
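To make the larger solution space concrete (our arithmetic, assuming four answer options per question): a single-answer MCQ can be guessed with probability 1/4, whereas a MAMCQ in which any non-empty subset of the four options may be correct has 2^4 - 1 = 15 possible answers, so a blind guess succeeds only about 7% of the time.

    # Worked comparison under the assumption of four answer options per question.
    options = 4
    p_mcq = 1 / options                # exactly one correct option: 0.25
    p_mamcq = 1 / (2 ** options - 1)   # any non-empty subset may be correct: ~0.067
    print(p_mcq, p_mamcq)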
A Simple, Language-Independent Approach to Identifying Potentially At-Risk Introductory Programming Students
Brett A. Becker, Catherine Mooney, Amruth N. Kumar, Seán Russell
DOI: 10.1145/3441636.3442318
For decades computing educators have been trying to identify and predict at-risk students, particularly early in the first programming course. These efforts range from analyzing demographic data that pre-exists undergraduate entrance, to using instruments such as concept inventories, to analyzing data arising during education. Such efforts have had varying degrees of success, have not seen widespread adoption, and have left room for improvement. We analyse results from a two-year study with several hundred students in the first year of programming, comprising majors and non-majors. We find evidence supporting the hypothesis that engagement with extra credit assessment provides an effective method of differentiating students who are not at risk from those who may be. Further, this method can be used to predict risk early in the semester, as any engagement – not necessarily completion – is enough to make this differentiation. Additionally, we show that this approach is not dependent on any one programming language. In fact, the extra credit opportunities need not even involve programming. Our results may be of interest to educators, as well as to researchers who may want to replicate these results in other settings.
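A minimal sketch of the differentiation rule described above, flagging any engagement with extra credit regardless of completion; the data, column names, and pass/fail outcome are our assumptions, not the study’s data.

    # Illustrative sketch only: compare outcomes for students with and without
    # any extra credit engagement. Data and column names are made up.
    import pandas as pd

    grades = pd.DataFrame({
        "student": ["a", "b", "c", "d"],
        "extra_credit_attempts": [3, 0, 1, 0],
        "passed_course": [True, True, True, False],
    })

    grades["engaged"] = grades["extra_credit_attempts"] > 0    # any engagement counts
    print(grades.groupby("engaged")["passed_course"].mean())   # pass rate per group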
Analysis of a Process for Introductory Debugging
Jacqueline L. Whalley, Amber Settle, Andrew Luxton-Reilly
DOI: 10.1145/3441636.3442300
Debugging code is a complex task that requires knowledge of the mechanics of a programming language, the purpose of a given program, and an understanding of how the program achieves that purpose. It is generally accepted that prior experience with similar bugs improves the debugging process, and that a systematic process is needed to move successfully from the symptoms of a bug to its cause. Students who are learning to program may struggle with one or more aspects of debugging and, anecdotally, spend a lot of their time debugging faulty code. In this paper we analyse student answers to questions designed to focus attention on the symptoms of a bug and to use those symptoms to generate a hypothesis about its cause. To ensure students focus on the symptoms rather than the code, we use paper-based exercises that ask students to reflect on various bugs and to hypothesize about the cause. We find that, using our structured process, most students are able to generalize from a single failing test case to the likely problem in the code, but they are much less able to identify the appropriate location or an actual fix.