A Qualitative Analysis of Lecture Videos and Student Feedback on Static Code Examples and Live Coding: A Case Study
Derek Hwang, Vardhan Agarwal, Yuzi Lyu, Divyam Rana, Satya Ganesh Susarla, Adalbert Gerald Soosai Raj
Proceedings of the 23rd Australasian Computing Education Conference, 2021. DOI: 10.1145/3441636.3442317

Abstract: One of the goals of computing education research is to understand and document the effectiveness of pedagogical strategies in computing. Among the many methods available for teaching programming, two commonly used techniques for presenting code in Computer Science classes are static code examples (pre-written code snippets shown during lectures) and live coding (code written in front of the students during the lecture). Although prior research has compared the effectiveness of these two techniques on student learning and cognitive load, little is known about the structure of these code presentation techniques. In this study, we analyze the lecture recordings of a mid-level Computer Science course that uses both static code examples and live coding to teach code snippets. We analyze these recordings to understand what these pedagogical techniques for teaching and learning programming consist of. We also analyze student feedback on both pedagogical strategies to better understand these teaching methods from the students' perspective. We believe our work will shed light on the usefulness of static code examples and live coding in Computer Science courses.
Toward Empirical Analysis of Pedagogical Feedback in Computer Programming Learning Environments
G. Raubenheimer, Bryn Jeffries, K. Yacef
Proceedings of the 23rd Australasian Computing Education Conference, 2021. DOI: 10.1145/3441636.3442321

Abstract: Digital learning environments are emerging as a key part of the future of computer science education. However, there is little empirical understanding of which forms of didactic feedback are pedagogically optimal for short- and long-term learning outcomes in these new contexts. Methods for classifying feedback in this context are therefore needed to enable empirical analysis of what constitutes effective feedback. Whilst numerous taxonomies of feedback exist, they do not provide a suitable classification for assessing the impact of feedback approaches on student learning. We provide an empirically and theoretically meaningful framework for analysing feedback in digital learning environments. The classification places feedback along two axes: whether it is problem-centric or solution-centric, and whether it provides information pertaining to a specific instance of a student's work or generalised to the underlying theory. We apply this framework to analyse feedback given in an online computer programming course, showing that the types of feedback provided affect attainment of short-term, goal-oriented student outcomes. This motivates its possible application to understanding longer-term acquisition and retention of knowledge, both in computer science education and beyond.
Towards Assessing the Readability of Programming Error Messages
Brett A. Becker, Paul Denny, J. Prather, Raymond Pettit, Robert Nix, Catherine Mooney
Proceedings of the 23rd Australasian Computing Education Conference, 2021. DOI: 10.1145/3441636.3442320

Abstract: Programming error messages are notoriously problematic for novices who are learning to program. Although recent efforts have focused on improving message wording, these have been criticized for attempting to improve usability without first understanding and addressing readability. To date, there has been no research dedicated to the readability of programming error messages or how it could be assessed. In this paper we examine human assessments of programming error message readability and make two contributions. First, we conduct an experiment using the twenty most frequent error messages in three popular programming languages (Python, Java, and C), revealing that human notions of readability are highly subjective and depend on both programming experience and language familiarity. Both novices and experts agreed more about which messages are readable, and disagreed more about which messages are not. Second, we use the data from this experiment to uncover several factors that appear to affect message readability: message length, message tone, and use of jargon. We discuss how these factors can help guide future efforts to design a readability metric for programming error messages.