Title: Automatic Problem Generation for CTF-Style Assessments in IT Forensics Courses
Authors: Sepehr Minagar, A. Sakzad
DOI: 10.1145/3587102.3588788
In this experience paper, we present an automated assessment and marking generation framework for creating capture-the-flag (CTF) questions in the context of Information Technology (IT) Forensics. The framework allows educators to generate many randomised Virtual Hard Disk (VHD) and packet capture (PCAP) files with different forensic artefacts for each student, suitable for assessment tasks in disk-based and network-based forensics courses, respectively. These files are then embedded in quizzes that are constructively aligned with what students have learned in their lecture and tutorial classes. We replaced our invigilated, closed-book, end-of-semester exams with these open-book, multiple-attempt, non-invigilated, in-semester quizzes. We also surveyed students on how the designed quizzes (1) aligned with (and covered) the promised course learning outcomes, (2) were run so as to address academic integrity concerns, and (3) helped students manage their stress once the final exam was replaced by the presented quizzes.
Title: Correlating Students' Class Performance Based on GitHub Metrics: A Statistical Study
Authors: Jiali Cui, Runqiu Zhang, Ruochi Li, Yang Song, Fangtong Zhou, E. Gehringer
DOI: 10.1145/3587102.3588799
What skills does a student need to succeed in a programming class? Ostensibly, previous programming experience may affect a student's performance. Most past studies on this topic use self-report questionnaires to ask students about their programming experience. This paper presents a novel, unified, and replicable way to measure previous programming experience using students' pre-class GitHub contributions. To our knowledge, we are the first to use GitHub contributions in this way. We conducted a comprehensive statistical study of students in an object-oriented design and development class from 2017 to 2022 (n = 751) to explore the relationships between GitHub contributions (commits, comments, pull requests, etc.) and students' performance on exams, projects, designs, and other class work. Several kinds of contributions showed statistically significant correlations with performance in the class. A set of two-sample t-tests demonstrated the statistical significance of the difference between the means of some contributions from the high-performing and low-performing groups.
Title: Jinter: A Hint Generation System for Java Exercises
Authors: Jorge A. Gonçalves, André L. M. Santos
DOI: 10.1145/3587102.3588820
Programming novices often struggle when solving exercises, which slows their progress and creates a dependency on external aid such as a teacher, a more experienced peer, or online resources. We present Jinter, a tool that generates hints for solving small exercises involving Java methods. The hints are produced by taking into account the current state of an exercise and a backing model solution. They may point out errors or missing parts needed to achieve the desired outcome, while accounting for behavioral equivalences of programming constructs (e.g., loop structures, forms of assignment, boolean expressions, etc.). We evaluated the approach by surveying 8 programming instructors, finding that about two-thirds of the automated hints either match or are related to those given by the instructors.
Title: Human Centered Data Science: Ungrading in an Introductory Data Science Course
Authors: Allison S. Theobold
DOI: 10.1145/3587102.3588816
The COVID-19 pandemic caused the flaws of traditional grading systems to become even more apparent. In response, a growing number of educators are transitioning their classrooms to focus on alternative methods of assessment. These subversive methods promote more equitable assessments, as they provide a more accurate picture of what a student has learned, cultivate students' intrinsic motivation, and do not privilege students from certain backgrounds. This article details how alternative grading, specifically "ungrading," was integrated into an introductory data science course. I detail how the course components align with the principles of alternative grading, students' responses to the course structure, and the lessons I learned along the way. Finally, I close with a discussion of how infusing alternative methods of assessment into the classroom stands to cultivate the diversity continually lacking in computer science and data science.
Title: Teaching CS1 with a Mastery Learning Framework: Impact on Students' Learning and Engagement
Authors: Giulia Toti, Guoning Chen, Sebastian Gonzalez
DOI: 10.1145/3587102.3588844
Mastery Learning, a pedagogical strategy in which students are allowed multiple attempts to prove mastery of the skills taught in a course (and can use failed attempts as feedback), is becoming increasingly popular in higher education. Large introductory programming courses can use it to strengthen students' preparation for later courses, but some challenges to its adoption remain, such as how to scale the format to hundreds of students and how to ensure that students do not fall behind on the material. In Spring 2021, the instructors at the Anonymous University transformed the structure of their CS1 course using a Mastery Learning format, reorganizing the material into units focused on the different course topics. Students were allowed to prove mastery of each unit separately and multiple times, without penalties for missed or failed attempts. In this experience report, we describe the strategies adopted to cater to a large cohort of novice students. We compare the students' learning experience with that of a cohort who took the course in a more traditional format, and show that students benefited from the new format in terms of the number of skills mastered. Students also exhibited signs of increased motivation to practice and to complete tests without grade incentives. Finally, we discuss some pitfalls in our design and address some of the concerns of instructors interested in trying a Mastery Learning approach in their CS1 courses.
Title: K-12 Computing Education for the AI Era: From Data Literacy to Data Agency
Authors: M. Tedre, Henriikka Vartiainen
DOI: 10.1145/3587102.3593796
The question of how to teach classical, rule-based programming has driven much of computing education research since the 1950s. In the K--12 (school) context, a consensus has emerged over time on the paradigmatic elements of computing education, which implicitly assumes a von Neumann computer executing instruction sequences guided by imperative programs. Within this framework, many researchers have focused on how to help learners develop an accurate mental model of what the computer does when it executes a piece of code. However, the traditional programming approach in computing education is inadequate for understanding and developing machine learning (ML) driven technology. ML has already enabled significant advances in automation, ranging from speech and image recognition, autonomous cars, and deepfake videos to super-human performance in board and computer games. Many of the data-driven approaches that power today's cutting-edge services and apps diverge significantly from the central paradigmatic assumptions of traditional programming. Consequently, traditional views on computing education are increasingly being challenged to account for the changes that AI/ML brings. This keynote talk presents early results from a study on how to teach fundamental AI insights and techniques to 200 students in Grades 4--9 across 14 primary schools in Eastern Finland. It describes the learning environments, tools, and pedagogical approaches involved, and explores the paradigmatic and conceptual changes required in transitioning from teaching classical programming to teaching ML in K--12 computing education. It outlines the mindset shifts this transition requires and discusses the challenges it poses to the development of curricula, educational technology, and learning environments. It further provides examples of how AI ethics concepts, such as algorithmic bias, privacy, misinformation, diversity, and accountability, can be integrated into ML education. The talk discusses the relationship between different literacies in computing and presents an active concept, data agency, which refers to people's volition and capacity for informed actions that make a difference in their digital world. Data agency emphasizes not only the understanding of data (i.e., data literacy) but also the active control and manipulation of information flows and their ethical and wise use.
Title: Assessment of Self-Identified Learning Struggles in CS2 Programming Assignments
Authors: Matthew Zahn, Isabella Gransbury, S. Heckman, L. Battestilli
DOI: 10.1145/3587102.3588786
Students can have widely varying experiences while working on CS2 coding projects. Challenging experiences can lead to lower motivation and less success in completing these assignments. In this paper, we identify the common struggles CS2 students face while working on course projects and examine whether there is evidence of improvement in these areas of struggle between projects. While previous work has established the importance of self-regulated learning to student success, it has not been fully investigated in the scope of CS2 coursework. We share our observations from investigating student struggles with coding projects through students' self-reported responses to a project reflection form. We apply emergent coding to identify student struggles at three points during the course and compare them against student actions in the course, such as project start times and office hours participation, to determine whether students were overcoming these struggles. Through our coding and analysis, we found that while a majority of students struggle with time management and with debugging failing tests, students tend to emphasize wanting to improve their time management skills in future coding assignments.
Title: Calling Upon the Community: Gathering Data on Programmatic and Academic Opportunities in Computing Education Research
Authors: Stephanie J. Lunn, Maíra Marques Samary, A. Peterfreund
DOI: 10.1145/3587102.3588813
Although it is well established that computing education (CEd) is an emergent interdisciplinary field with scholars around the globe contributing research, it is less clear how and where this research occurs. To better understand the current state of computing education research (CEdR) and the opportunities for those engaged in the field, we employed a data collection process involving training, information gathering, and reporting with institutional representatives. We sought to develop an overview of: 1) the affiliations of graduate students and faculty conducting CEdR; 2) funding for graduate students and faculty conducting CEdR; 3) the academic degree options and the credit, coursework, and publication requirements for graduate students; and 4) institutions' current and future plans for CEd. Partnerships with contacts spanning 30 institutions across five continents provided insight into the pathways and possibilities that presently exist for researchers in the field, as well as a glimpse into future plans for expansion (or the lack thereof). The findings from this investigation offer valuable information for students and faculty seeking potential collaborations, thinking about their career trajectories, or planning CEd initiatives; for educators developing courses; and for administrators considering creating more formal tracks for those focused on CEd.
Title: Principles of Computers and the Internet - Model Lessons for Primary School Children: Experience Report
Authors: C. Brom, Tereza Hannemann, P. Jezek, A. Drobná, Kristina Volná, Katerina Kacerovská
DOI: 10.1145/3587102.3588861
Teaching the principles of how digital technologies work at the K-5 level is underexplored. As part of the newly updated Czech national computing curriculum, we created six model lessons for children covering this topic. Here, we introduce these lessons. They can be viewed through the lens of a constructivist educational framework: Evocation - Realization of Meaning - Reflection. Four lessons target younger learners (Grades ~2-4) and include topics such as storing and deleting data, data size, and computer viruses, among others. Two lessons target older children (Grades ~4-5) and focus on the structure and functioning of the internet and on digital footprints. The lessons are organized around probing children's preconceptions about the lesson topics, showing brief animated videos, and introducing new concepts through instructional analogies, discussions, and unplugged activities. The paper describes how the lessons were created and evaluated; discusses their structure along with sample activities and analogies; and presents lessons learnt and how our approach can be used by others. Altogether, the paper complements existing reports on primary computing education.
Title: An Experience Report on Introducing Explicit Strategies into Testing Checklists for Advanced Beginners
Authors: Gina R. Bai, Sandeep Sthapit, S. Heckman, T. Price, Kathryn T. Stolee
DOI: 10.1145/3587102.3588781
Software testing is a critical skill for computing students, but learning and practicing testing can be challenging, particularly for beginners. A recent study suggests that a lightweight testing checklist containing testing strategies and tutorial information can assist students in writing quality tests. However, students expressed a desire for more support in knowing how to test a given piece of code or scenario. Moreover, the potential costs and benefits of the testing checklist had not yet been examined in a classroom setting. To that end, we improved the checklist by integrating explicit testing strategies into it (the ETS Checklist), which provide step-by-step guidance on how to transfer semantic information from instructions to possible testing scenarios. In this paper, we report our experiences in designing explicit strategies for unit testing and in adapting the ETS Checklist as optional tool support in a CS1.5 course. Drawing on quantitative and qualitative analysis of survey responses and lab assignment submissions from students, we discuss students' engagement with the ETS Checklists. Our results suggest that students who used the checklist intervention wrote significantly higher-quality test code, in terms of code coverage, than those who did not, especially for assignments earlier in the course. We also observed that students were unaware of their need for help in writing high-quality tests.