Toward Empirical Analysis of Pedagogical Feedback in Computer Programming Learning Environments
G. Raubenheimer, Bryn Jeffries, K. Yacef
Proceedings of the 23rd Australasian Computing Education Conference, 2021. https://doi.org/10.1145/3441636.3442321

Digital learning environments are emerging as a key part of the future of computer science education. However, there is little empirical understanding of which forms of didactic feedback are pedagogically optimal for short- and long-term learning outcomes in these new contexts. Methods for classifying feedback in this context are therefore needed to enable empirical analysis of what makes feedback effective. Whilst numerous taxonomies of feedback exist, they do not provide a classification suitable for assessing the impact of feedback approaches on student learning. We provide an empirically and theoretically meaningful framework for analysing feedback in digital learning environments. The classification places feedback along two axes: whether it is problem-centric or solution-centric, and whether it pertains to a specific instance of a student's work or is generalised to the underlying theory. We apply this framework to analyse feedback given in an online computer programming course, showing that the types of feedback provided affect attainment of short-term, goal-oriented student outcomes. This motivates its possible application to understanding longer-term acquisition and retention of knowledge, both in computer science education and beyond.
Analysis of a Process for Introductory Debugging
Jacqueline L. Whalley, Amber Settle, Andrew Luxton-Reilly
Proceedings of the 23rd Australasian Computing Education Conference, 2021. https://doi.org/10.1145/3441636.3442300

Debugging code is a complex task that requires knowledge of the mechanics of a programming language, the purpose of a given program, and an understanding of how the program achieves its intended purpose. It is generally accepted that prior experience with similar bugs improves the debugging process, and that a systematic process is needed to move successfully from the symptoms of a bug to its cause. Students who are learning to program may struggle with one or more aspects of debugging and, anecdotally, spend much of their time debugging faulty code. In this paper we analyse student answers to questions designed to focus their attention on the symptoms of a bug and to use those symptoms to generate a hypothesis about its cause. To ensure students focus on the symptoms rather than the code, we use paper-based exercises that ask students to reflect on various bugs and to hypothesise about the cause. Analysing the students' responses, we find that with our structured process most students are able to generalise from a single failing test case to the likely problem in the code, but they are much less able to identify the appropriate location or an actual fix.
Towards Assessing the Readability of Programming Error Messages
Brett A. Becker, Paul Denny, J. Prather, Raymond Pettit, Robert Nix, Catherine Mooney
Proceedings of the 23rd Australasian Computing Education Conference, 2021. https://doi.org/10.1145/3441636.3442320

Programming error messages have proven to be notoriously problematic for novices who are learning to program. Although recent efforts have focused on improving message wording, these have been criticized for attempting to improve usability without first understanding and addressing readability. To date, there has been no research dedicated to the readability of programming error messages and how it could be assessed. In this paper we examine human-based assessments of programming error message readability and make two important contributions. First, we conduct an experiment using the twenty most frequent error messages in three popular programming languages (Python, Java, and C), revealing that human notions of readability are highly subjective and depend on both programming experience and language familiarity. Novices and experts agreed more about which messages are readable, but disagreed more about which messages are not. Second, we leverage the data from this experiment to uncover several key factors that appear to affect message readability: message length, message tone, and use of jargon. We discuss how these factors can help guide future efforts to design a readability metric for programming error messages.