Title: Mind the Error Message: An Inverted Quiz Format to Direct Learner's Attention to Error Messages
Authors: Kazuhiro Tsunoda, H. Masuhara, Youyou Cong
DOI: https://doi.org/10.1145/3587102.3588823

Novice learners of programming tend to neglect error messages, even though the messages contain a great deal of useful information for solving problems. While there is research that aims to make error messages more user-friendly by changing their wording and adding visual assistance, most of it does not focus on drawing learners' attention to the messages. We propose the enbugging quiz, a novel quiz format that asks the learner to craft a program that produces a specified error. This paper describes our design of enbugging quizzes and reports the results of our initial experiment, in which we observed positive effects on learners' attitudes towards error messages.
Title: The Effects of Spanish-English Bilingual Instruction in a CS0 Course for High School Students
Authors: Ismael Villegas Molina, Adrian Salguero, Shera Zhong, Adalbert Gerald Soosai Raj
DOI: https://doi.org/10.1145/3587102.3588845

Prior studies in multilingual computing education have shown that many non-native English speakers (NNES) in India struggle with introductory programming courses as they learn both a programming language (e.g., Java) and a natural language (e.g., English) concurrently. Although multiple studies have been conducted with NNES in India whose first language is Hindi or Tamil, we do not yet know the influence a student's native language may have among Spanish-speaking students in the United States. This replication study investigates the effects of an instructional design that integrates students' native language alongside English on high school students' learning and engagement in a two-week CS0 course using the block-based programming language Scratch. We designed an experiment to teach introductory computing topics (e.g., algorithms, variables, loops, conditionals) to two groups of students from a rural area spanning multiple institutions in the US. The experimental group was taught using English and Spanish (the students' native language), and the control group was taught using only English. A pre-test and a post-test were conducted to measure students' programming knowledge before and after the course. We also recorded all the questions students asked during the course to measure student engagement. We found that teaching Scratch programming using Spanish and English is no different from teaching it using only English to high school students whose native language is Spanish. We also found that students in the experimental group asked more questions than those in the control group.
Title: Understanding Students' Experiences in an Online Programming Course from a Transactional Distance Perspective
Authors: Prajish Prasad, Rishabh Balse, J. Warriem
DOI: https://doi.org/10.1145/3587102.3588850

In this paper, we investigate the relationship between student experiences and transactional distance in a 12-week online undergraduate Python programming course. Transactional distance is defined as the psychological and communication space between students and instructors due to geographical separation. Although several studies have examined learning from a transactional distance perspective, there has been a lack of research describing computing education courses from this perspective. In our online programming course, students were introduced to Python programming concepts through pre-recorded lectures released every week. Towards the end of the course, we administered a survey. We analysed the responses to examine the relationship between students' perceived transactional distance, sense of belonging, and satisfaction in the course. We also analysed students' open-ended survey responses containing feedback about the course, using a thematic analysis approach to categorise them. We found moderate correlations between transactional distance, sense of belonging, and student satisfaction. This implies that reducing transactional distance can improve students' sense of belonging in an online programming course. The themes emerging from the analysis of the open-ended responses show that feedback in an online course can be considered from a transactional distance perspective. These findings encourage instructors and researchers to explore the transactional distance framework as a means to improve instruction, sense of belonging, and satisfaction in online programming courses.
Title: Does Peer Code Review Change My Mind on My Submission?
Authors: Sven Strickroth
DOI: https://doi.org/10.1145/3587102.3588802

Peer review can be used as a collaborative learning activity in which people with similar competencies evaluate other students' submissions and/or provide feedback. It offers many potential benefits, such as timely feedback, high motivation, reduced workload for teachers, collaboration among students, improved code, and exposure to other solution strategies. However, there are also challenges and contradictory results, such as low motivation, participation, and quality, and no improvement in the reviews. This article attempts to shed more light on these issues through an empirical investigation in a university-based introductory programming course with approximately 900 students. Specifically, it investigates how reviewing other solutions affects students' views of their own solutions and how students can be motivated to work regularly on voluntary homework assignments. Furthermore, the peer reviews are analysed with respect to their quality (length and correctness), and students' participation and perceptions are examined. The results indicate that giving feedback can change one's view of the complete correctness of one's own submission, that most feedback is rather short, that peer review assignments are a major driver for working on the assignments, and that most students like seeing other solutions. The majority of students seem able to identify correct submissions as correct; however, (partly) incorrect submissions are also often classified as completely correct. Possible measures to address these weaknesses are discussed.
Title: ScratchLog: Live Learning Analytics for Scratch
Authors: Laura Caspari, Luisa Greifenstein, Ute Heuer, G. Fraser
DOI: https://doi.org/10.1145/3587102.3588836

Scratch is a hugely popular block-based programming environment that is often used in educational settings and has therefore recently become a focus of research on programming education. Scratch provides dedicated teacher accounts that make it easy and convenient to run lessons with school classes. However, once learners join a Scratch classroom, it is challenging to keep track of what they are doing: both teachers and researchers may be interested in learning analytics that help them monitor students or evaluate teaching material. Researchers may also be interested in understanding how programs are created and how learners use Scratch. Currently, neither use case is supported by Scratch itself. In this paper, we introduce ScratchLog, a tool that collects data from learners using Scratch. ScratchLog provides custom user management and makes it easy to set up courses and assignments. Starting from a task description and a starter project, learners use Scratch transparently while ScratchLog collects usage data, such as the history of code edits or statistics about how the Scratch user interface was used. This data can be viewed on the ScratchLog web interface or exported for further analysis, for example to inspect the functionality of programs using automated tests.
Title: Improving Perceptions of Underrepresented Students towards Computing Majors through Mentoring
Authors: S. Mithun, Xiao Luo
DOI: https://doi.org/10.1145/3587102.3588817

A low sense of belonging and low self-efficacy have been identified as key factors in underrepresented students not choosing computing careers and not persisting in computing disciplines. Researchers have advised that without adequate mentorship and role models, many of these students do not view computing as a viable option. Even though several works have utilized mentoring, they do not provide detailed guidelines on how this was accomplished or how it could be applied to other settings. In this study, we explore the efficacy of incorporating mentoring into the curriculum of an introductory undergraduate computing course. We seek answers to the following two research questions: (1) How do culturally diverse mentor-mentee relationships impact the sense of belonging, computing identity, and self-efficacy of underrepresented students in computing programs? (2) How does the integration of mentoring initiatives influence the perceptions of underrepresented students toward computing majors? We implemented mentoring practices in the Fall of 2022, and the results show that our mentoring interventions improved participants' sense of belonging and computing identity. Mentors and mentees also shared positive opinions of our initiatives.
Title: Computing Students' Understanding of Dispositions: A Qualitative Study
Authors: Natalie Kiesler, Bonnie K. MacKellar, Amruth N. Kumar, R. McCauley, R. Raj, Mihaela Sabin, J. Impagliazzo
DOI: https://doi.org/10.1145/3587102.3588797

Dispositions, along with skills and knowledge, form the three components of competency-based education. Moreover, studies have shown dispositions to be necessary for a successful career. However, unlike evidence-based teaching and learning approaches for knowledge acquisition and skill development, few studies focus on translating dispositions into observable behavioral patterns. An operationalization of dispositions is, however, crucial for students to understand and achieve the respective learning outcomes in computing courses. This paper describes a multi-institutional study investigating students' understanding of dispositions in terms of their behaviors while completing coursework. Students in six computing courses at four institutions filled out a survey describing an instance of applying each of the five surveyed dispositions (adaptable, collaborative, persistent, responsible, and self-directed) in the courses' assignments. The authors evaluated the data using Mayring's qualitative content analysis. The result is a coding scheme with categories summarizing students' concepts of dispositions and how they see themselves applying dispositions in the context of computing. These results are a first step towards understanding dispositions in computing education and how they manifest in student behavior. This research has implications for educators developing new pedagogical approaches to promote and facilitate dispositions. Moreover, the operationalized behaviors constitute a starting point for new strategies for assessing dispositions.
Title: Using Domain-Specific, Immediate Feedback to Support Students Learning Computer Programming to Make Music
Authors: Douglas Lusa Krug, Yifan Zhang, C. Mouza, Taylor Barnett, Lori Pollock, David C. Shepherd
DOI: https://doi.org/10.1145/3587102.3588851

Broadening participation in computer science has been widely studied, producing many different techniques to attract, motivate, and engage students. A common meta-strategy is to use an outside domain as a hook, using concepts from that domain to teach computer science. These domains are selected to interest students, but students often lack a strong background in them. A strategy designed to increase students' interest, motivation, and engagement could therefore create additional barriers for students, who are now faced with learning two new topics. To reduce this potential barrier in the domain of music, this paper presents the use of automated, immediate feedback during programming activities at a summer camp that uses music to teach foundational programming concepts. The feedback guides students musically, correcting notes that are out of key and rhythmic phrases that are too long or too short, allowing students to focus their learning on the computer science concepts. The paper compares the correctness of work by students who received automated feedback with that of students who did not, demonstrating the effectiveness of the feedback. Follow-up focus groups confirmed the quantitative data, with students stating that the feedback was not only useful but that the activities would have been much more challenging without it.
Title: The Value of Time Extensions in Identifying Students Abilities
Authors: Huanyi Chen, Paul A. S. Ward
DOI: https://doi.org/10.1145/3587102.3588847

Instructors often grant students extensions or grace days to relax deadline constraints. However, researchers have yet to investigate the value of time extensions in identifying students' abilities, or why students use them, in computing education. Our study shows that scheduling conflicts and underestimation of the coursework were the top two reasons why students were late, providing the first qualitative analysis results on this question. By categorizing students into those who used and those who did not use cost-free time extensions, we found that students who used time extensions had significantly lower assignment and exam grades than those who chose not to use them. We first observed this phenomenon when looking at grace-day usage in a final-year programming course and validated the result by examining the usage of extended lab time in a first-year programming course. These results suggest that offering a cost-free mechanism such as grace days or a time extension can provide a very early indicator of student abilities and of those likely to need assistance.
Title: Pseudocode vs. Compile-and-Run Prompts: Comparing Measures of Student Programming Ability in CS1 and CS2
Authors: Benjamin Rheault, Alexis Dougherty, Jeremiah J. Blanchard
DOI: https://doi.org/10.1145/3587102.3588834

In college-level introductory computer science courses, students' programming ability is often evaluated using pseudocode responses to prompts. However, this does not necessarily reflect modern programming practice in industry and academia, where developers have access to compilers to test snippets of code on the fly. As a result, pseudocode prompts may not capture the full gamut of student capabilities, owing to the lack of the support tools usually available when writing programs. An assessment environment in which students can write, compile, and run code could provide a more comfortable and familiar experience that more accurately captures their abilities. Prior work has found improvements in student performance when digital assessments are used instead of paper-based assessments for pseudocode prompts, but there is limited work focusing on the difference between digital pseudocode and compile-and-run assessment prompts. To investigate the impact of the assessment approach on student experience and performance, we conducted a study at a public university across two introductory programming classes (N=226). We found that students both preferred and performed better on typical programming assessment questions when they used a compile-and-run environment rather than a pseudocode environment. Our work suggests that compile-and-run assessments capture a more nuanced evaluation of student ability by more closely reflecting the environments of programming practice, and it supports further work exploring the administration of programming assessments.