Software and IT infrastructure keep changing the way we live and work, but not necessarily for the better for all of us. Considering the implications of software for our society, and the industry's expectations of computing graduates, it appears natural to address social and ethical competencies within computing curricula. However, this is not necessarily the case in German computing education. Addressing this gap, this paper examines the role of ethical guidelines and social responsibility in programming education in contrast to industry expectations. Expected competencies in programming education and ethics modules of CS study programs were identified by a secondary analysis of available data. The present work also gathered and qualitatively analyzed job advertisements with regard to expected competencies. The results (1) illustrate the lack of correspondence between what is expected in educational settings and what is expected in the profession, and (2) outline implications for a socially responsible programming education. These findings will support educators in developing competency-based pedagogical approaches to address socially responsible learning objectives in future programming courses and CS study programs.
Title: Socially Responsible Programming in Computing Education and Expectations in the Profession
Authors: Natalie Kiesler, Carsten Thorbrügge
DOI: https://doi.org/10.1145/3587102.3588839
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
Graduates distinguish themselves by being creative, innovative, and showing leadership, but such topics are usually formally emphasized only in business curricula. Computer Science students could minor in entrepreneurship or other business-related fields to foster an innovative mindset and acquire business acumen that complements the technical expertise of their majors, but such minors usually require at least five additional courses. We discuss the formal incorporation of entrepreneurship into the computer science curriculum as part of a clinic model that includes real-world experiences, at the cost of only two 1-credit hands-on clinics that can be embedded within the number of credits required for the computer science degree. After experimenting with various models and activities over several years, we present our current iteration, successfully deployed for the past four years: a course framework that combines entrepreneurial and technical aspects into a two-semester software engineering course sequence with an assigned clinic experience during the junior year. Students learn how to find and evaluate ideas, build rapid prototypes, test hypotheses, use different types of business models and financial analysis, market their software products, and understand how to start a business. Students pitch and refine their ideas with various audiences and compete in entrepreneurship competitions. From dreaming up ideas to creating and managing teams, computer science students are guided through the process by faculty, entrepreneurs, and potential investors. We present the steps and activities that complement our framework, with case studies and lessons learned.
Title: Fostering the Innovative Mindset: Entrepreneurship Clinic Model for Computer Science Students
Authors: A. Rusu, A. Rusu
DOI: https://doi.org/10.1145/3587102.3588812
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
Designing a proper exam that accurately evaluates students' knowledge and skills is one of the important tasks of every teacher. The format of exams affects the way students learn throughout the course, and a well-designed exam can enhance meaningful learning. In this paper, we address this topic in the context of Data Structures and Algorithms courses and argue that a good exam should contain questions that students have seen during the semester, and that the grading of those questions should be strict. We describe a case study which, over three semesters, supports the claim that answering these questions requires the "Understand" level of Bloom's taxonomy, and that this strategy fosters more meaningful learning and better assesses students' knowledge.
Title: Studied Questions in Data Structures and Algorithms Assessments
Authors: I. Gaber, Amir Kirsh, D. Statter
DOI: https://doi.org/10.1145/3587102.3588843
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
The recent emergence of code generation tools powered by large language models has attracted wide attention. Models such as OpenAI Codex can take natural language problem descriptions as input and generate highly accurate source code solutions, with potentially significant implications for computing education. Given the many complexities that students face when learning to write code, they may quickly become reliant on such tools without properly understanding the underlying concepts. One popular approach for scaffolding the code writing process is to use Parsons problems, which present solution lines of code in a scrambled order. These remove the complexities of low-level syntax, and allow students to focus on algorithmic and design-level problem solving. It is unclear how well code generation models can be applied to solve Parsons problems, given the mechanics of these models and prior evidence that they underperform when problems include specific restrictions. In this paper, we explore the performance of the Codex model for solving Parsons problems over various prompt variations. Using a corpus of Parsons problems we sourced from the computing education literature, we find that Codex successfully reorders the problem blocks about half of the time, a much lower rate of success when compared to prior work on more free-form programming tasks. Regarding prompts, we find that small variations in prompting have a noticeable effect on model performance, although the effect is not as pronounced as between different problems.
Title: Evaluating the Performance of Code Generation Models for Solving Parsons Problems With Small Prompt Variations
Authors: B. Reeves, Sami Sarsa, J. Prather, Paul Denny, Brett A. Becker, Arto Hellas, Bailey Kimmel, Garrett B. Powell, Juho Leinonen
DOI: https://doi.org/10.1145/3587102.3588805
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
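To make the Parsons task format concrete, here is a minimal sketch in Python. The example problem, function names, and list-of-strings representation are illustrative assumptions, not drawn from the paper's corpus.

```python
import random

# The correct solution, one line per draggable block (illustrative example,
# not taken from the paper's corpus of Parsons problems).
CORRECT_ORDER = [
    "def count_positives(numbers):",
    "    total = 0",
    "    for n in numbers:",
    "        if n > 0:",
    "            total += 1",
    "    return total",
]

def make_puzzle(lines, seed=0):
    """Return the solution lines in a scrambled order, as shown to a solver."""
    scrambled = list(lines)
    random.Random(seed).shuffle(scrambled)
    return scrambled

def is_solved(proposed, correct=CORRECT_ORDER):
    """A proposed ordering solves the puzzle iff it matches the solution exactly."""
    return list(proposed) == list(correct)

puzzle = make_puzzle(CORRECT_ORDER)
```

A solver (student or model) must recover CORRECT_ORDER from the scrambled blocks; the roughly fifty-percent success rate reported above corresponds to is_solved holding for about half of the model's attempted reorderings.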
This experience report presents a participatory process that involved primary school teachers and computer science education researchers. The objective of the process was to co-design a learning module to teach iteration to second graders using a visual programming environment and based on the Use-Modify-Create methodology. The co-designed learning module was piloted with three second-grade classes. We experienced that sharing and reconciling the different perspectives of researchers and teachers was doubly effective. On the one hand, it improved the quality of the resulting learning module; on the other hand, it constituted a significant professional development opportunity for both teachers and researchers. We describe the co-designed learning module, discuss the key turning points in the process that led to it, and reflect on the lessons learned.
Title: Castle and Stairs to Learn Iteration: Co-designing a UMC Learning Module with Teachers
Authors: Sara Capecchi, Michael Lodi, Violetta Lonati, M. Sbaraglia
DOI: https://doi.org/10.1145/3587102.3588793
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
In this paper, we study three focus areas: investigating the identity-related sense of belonging in a gateway computer science course, examining the dynamics of the sense of belonging between the beginning and the end of the course, and offering actions to improve the sense of belonging that address the needs of students from intersecting identity groups. We use multivariate logistic regression models to identify how students' identity, prior mathematics and programming knowledge, and social expectations of success shape their sense of belonging entering the course and after completing it. Our multi-dimensional approach allows for consideration of the intersectionality of students' identities as well as multiple other factors at the same time. Our analyses suggest that social perceptions persistently affect students' sense of belonging. Therefore, we argue that more direct interventions targeting social perception are needed to achieve equity.
Title: An Equity-minded Multi-dimensional Framework for Exploring the Dynamics of Sense of Belonging in an Introductory CS Course
Authors: Narges Norouzi, H. Habibi, Carmen Robinson, A. Sher
DOI: https://doi.org/10.1145/3587102.3588780
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
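As a rough illustration of the modeling approach described above (not the authors' code or data), a multivariate logistic regression can be fit with plain gradient descent. The three predictor names are hypothetical stand-ins for the paper's prior-knowledge and social-expectation variables, and the outcome here is entirely synthetic.

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit logistic regression by stochastic gradient descent (illustrative sketch)."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # predicted probability
            err = p - yi                          # gradient of the log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical predictors: [prior_math, prior_programming, social_expectation]
random.seed(1)
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
# Synthetic "sense of belonging" outcome, driven mainly by social expectation
y = [1 if 0.9 * x[2] + 0.3 * x[1] + random.gauss(0, 0.5) > 0 else 0 for x in X]

w, b = fit_logistic(X, y)
# w[2], the coefficient on the social-expectation predictor, dominates
```

The fitted per-feature coefficients are log-odds effects, which is what makes such a model convenient for asking which identity and perception variables most shape the predicted sense of belonging.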
Coding tasks combined with other activities, such as Explain in Plain English or Parsons puzzles, help CS1 students develop core programming skills. Students usually receive feedback on code correctness but limited or no feedback on their code quality. Teaching students to evaluate and improve the quality of their code once it is functionally correct should be included in the curriculum towards the end of CS1 or during CS2. However, little is known about students' perceptions of code quality at the end of a CS1 course. This study aims to capture their developing notions of code quality, in order to tailor class activities to support code quality improvements. We directed students to think about the overall quality of small programs by asking them to rank a small set of solutions to a simple problem-solving task. Their rankings and explanations were analysed to identify the criteria underlying their quality assessments. The top quality criteria were Performance (64%), Structure (51%), Conciseness (42%) and Comprehensibility (42%). Although fast execution is a key criterion for ranking, students' explanations of why a given option was fast were often flawed, indicating students need more support both to evaluate performance and to include readability or comprehensibility criteria in their assessment.
Title: Exploring CS1 Student's Notions of Code Quality
Authors: C. Izu, C. Mirolo
DOI: https://doi.org/10.1145/3587102.3588808
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
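To give a flavor of such a ranking task (the paper's actual solution sets are not reproduced here), consider two functionally equivalent solutions to a hypothetical CS1 problem that differ on criteria like conciseness and comprehensibility:

```python
def count_vowels_v1(text):
    # Concise and comprehensible: a generator expression over a fixed vowel string
    return sum(1 for ch in text.lower() if ch in "aeiou")

def count_vowels_v2(text):
    # Same behavior, more verbose: explicit loop and manual comparisons
    count = 0
    for ch in text:
        c = ch.lower()
        if c == "a" or c == "e" or c == "i" or c == "o" or c == "u":
            count = count + 1
    return count

# Both solutions are functionally correct...
assert count_vowels_v1("Programming") == count_vowels_v2("Programming") == 3
# ...so ranking them forces students to reason about quality, not correctness.
```

Students ranking such pairs must articulate why one is "better", which surfaces exactly the performance, structure, conciseness, and comprehensibility criteria the study coded for.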
Live coding---a pedagogical technique in which an instructor plans, writes, and executes code in front of a class---is generally considered a best practice when teaching programming. However, only a few studies have evaluated the effect of live coding on student learning in a controlled experiment, and most of the literature on live coding identifies students' perceived benefits of live-coding examples. To empirically evaluate the impact of live coding, we designed a controlled experiment in a CS1 course taught in Python at a large public university. Of the two remote lecture sections for the course, one was taught using live-coding examples and the other using static-code examples. Throughout the term, we collected code snapshots from students' programming assignments, students' grades, and the questions they asked during the remote lectures. We then applied a set of process-oriented programming metrics to students' programming data to compare students' adherence to effective programming processes in the two learning groups, and categorized each question asked in lectures following an open-coding approach. Our results revealed a general lack of difference between the two groups across programming processes, grades, and lecture questions asked. However, our experiment uncovered minimal effects in favor of the live-coding group, indicating improved programming processes but lower performance on assignments and grades. Our results suggest an overall insignificant impact of the style of presenting code examples, though we reflect on the threats to validity in our study that should be addressed in future work.
Title: The Impact of a Remote Live-Coding Pedagogy on Student Programming Processes, Grades, and Lecture Questions Asked
Authors: Anshul Shah, Vardhan Agarwal, Michael Granado, J. Driscoll, Emma Hogan, Leo Porter, W. Griswold, Adalbert Gerald Soosai Raj
DOI: https://doi.org/10.1145/3587102.3588846
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
We report our experience with a novel course on binary reverse engineering: a university computer science course offered at the second-year level to both computer science majors and non-majors, with minimal prerequisites. While reverse engineering has known, important uses in computer security, this was pointedly not framed as a security course, because reverse engineering is a skill that has uses outside computer science and can be taught to a more diverse audience. The original course design intended students to perform hands-on exercises during an in-person class; we describe the systems we developed to support that, along with other online systems we used, which allowed a relatively easy pivot to online learning and back as necessitated by the pandemic. Importantly, we detail our application of "ungrading" within the course, an assessment philosophy that has gained some traction primarily in non-STEM disciplines but has seen little to no discussion in the context of computer science education. The combination of pedagogical methods we present has potential uses in other courses beyond reverse engineering.
Title: Binary Reverse Engineering for All
Authors: John Aycock
DOI: https://doi.org/10.1145/3587102.3588790
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
Students often struggle with time management. They delay work on assignments for too long and/or allocate too little time for the tasks given to them. This negatively impacts their performance, increases stress, and even leads some to switch majors. As such, there is a wealth of previous research on improving student time management through direct intervention. In particular, there is a heavy focus on having students start assignments earlier and spend more time-on-task -- as these metrics have been shown to positively correlate with student performance. In this paper, however, we theorize that poor student time management (at least in CS) is often due to confounding factors -- such as academic stress -- and not a missing skill set. We demonstrate that changes in assignment design and style can cause students to organically manage their time better. Specifically, we compare two alternative designs -- a low-risk preparatory assignment and a highly engaging gamified assignment -- against a conventional programming assignment. While the conventional assignment follows common trends, students do better on the alternative designs and exhibit novel behavior on the usual metrics of earliness of work and time-on-task. Of note, on the preparatory assignment, time-on-task is negatively (albeit weakly) correlated with performance -- the opposite of what is standard in the literature. Finally, we provide takeaways and recommendations for other instructors to use in their own approaches and research.
Title: More Carrot or Less Stick: Organically Improving Student Time Management With Practice Tasks and Gamified Assignments
Authors: Mac Malone, F. Monrose
DOI: https://doi.org/10.1145/3587102.3588825
Venue: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
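The time-on-task correlations discussed above are the familiar Pearson kind. A minimal sketch with synthetic numbers (not the paper's data) shows how a negative time-on-task/performance correlation is computed:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic example: on a preparatory-style assignment, more time-on-task
# accompanies lower scores, yielding a negative coefficient.
hours  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [95, 96, 90, 88, 91, 80, 82, 78]
r = pearson(hours, scores)  # r < 0
```

A coefficient near zero but below it would match the paper's "negatively (albeit weakly) correlated" finding; the synthetic data here is only meant to show the direction of the relationship.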