Pseudocode vs. Compile-and-Run Prompts: Comparing Measures of Student Programming Ability in CS1 and CS2
Benjamin Rheault, Alexis Dougherty, Jeremiah J. Blanchard
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
Published: 2023-06-29
DOI: 10.1145/3587102.3588834 (https://doi.org/10.1145/3587102.3588834)
Abstract
In college-level introductory computer science courses, the programming ability of students is often evaluated using pseudocode responses to prompts. However, this does not necessarily reflect modern programming practice in industry and academia, where developers have access to compilers to test snippets of code on the fly. As a result, pseudocode prompts may not capture the full range of student capabilities, because students lack the support tools usually available when writing programs. An assessment environment in which students can write, compile, and run code could provide a more comfortable and familiar experience that more accurately captures their abilities. Prior work has found improved student performance when digital assessments are used instead of paper-based assessments for pseudocode prompts, but there is limited work focusing on the difference between digital pseudocode and compile-and-run assessment prompts. To investigate the impact of the assessment approach on student experience and performance, we conducted a study at a public university across two introductory programming classes (N=226). We found that students both preferred a compile-and-run environment over a pseudocode environment and performed better on typical programming assessment questions when using it. Our work suggests that compile-and-run assessments provide a more nuanced evaluation of student ability by more closely reflecting the environments of programming practice, and it supports further work exploring how programming assessments are administered.