Pseudocode vs. Compile-and-Run Prompts: Comparing Measures of Student Programming Ability in CS1 and CS2

Benjamin Rheault, Alexis Dougherty, Jeremiah J. Blanchard
{"title":"Pseudocode vs. Compile-and-Run Prompts: Comparing Measures of Student Programming Ability in CS1 and CS2","authors":"Benjamin Rheault, Alexis Dougherty, Jeremiah J. Blanchard","doi":"10.1145/3587102.3588834","DOIUrl":null,"url":null,"abstract":"In college-level introductory computer science courses, the programming ability of students is often evaluated using pseudocode responses to prompts. However, this does not necessarily reflect modern programming practice in industry and academia, where developers have access to compilers to test snippets of code on-the-fly. As a result, use of pseudocode prompts may not capture the full gamut of student capabilities due to lack of support tools usually available when writing programs. An assessment environment where students could write, compile, and run code could provide a more comfortable and familiar experience for students that more accurately captures their abilities. Prior work has found improvement in student performance when digital assessments are used instead of paper-based assessments for pseudocode prompts, but there is limited work focusing on the difference between digital pseudocode and compile-and-run assessment prompts. To investigate the impact of the assessment approach on student experience and performance, we conducted a study at a public university across two introductory programming classes (N=226). We found that students both preferred and performed better on typical programming assessment questions when they utilized a compile-and-run environment compared to a pseudocode environment. Our work suggests that compile-and-run assessments capture more nuanced evaluation of student ability by more closely reflecting the environments of programming practice and supports further work to explore administration of programming assessments.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"75 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3587102.3588834","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In college-level introductory computer science courses, students' programming ability is often evaluated using pseudocode responses to prompts. However, this does not necessarily reflect modern programming practice in industry and academia, where developers have access to compilers to test snippets of code on the fly. As a result, pseudocode prompts may not capture the full gamut of student capabilities, because they lack the support tools usually available when writing programs. An assessment environment where students can write, compile, and run code could provide a more comfortable and familiar experience that more accurately captures their abilities. Prior work has found improvement in student performance when digital assessments are used instead of paper-based assessments for pseudocode prompts, but there is limited work focusing on the difference between digital pseudocode and compile-and-run assessment prompts. To investigate the impact of the assessment approach on student experience and performance, we conducted a study at a public university across two introductory programming classes (N=226). We found that students both preferred and performed better on typical programming assessment questions when they used a compile-and-run environment compared to a pseudocode environment. Our work suggests that compile-and-run assessments capture a more nuanced evaluation of student ability by more closely reflecting the environments of programming practice, and it supports further work exploring the administration of programming assessments.