Exploring CS1 Student's Notions of Code Quality

C. Izu, C. Mirolo
{"title":"Exploring CS1 Student's Notions of Code Quality","authors":"C. Izu, C. Mirolo","doi":"10.1145/3587102.3588808","DOIUrl":null,"url":null,"abstract":"Coding tasks combined with other activities such as Explain in Plain English or Parson Puzzles help CS1 students to develop core programming skills. Students usually receive feedback of code correctness but limited or no feedback on their code quality. Teaching students to evaluate and improve the quality of their code once it is functionally correct should be included in the curricula towards the end of CS1 or during CS2. However, little is known about the student's perceptions of code quality at the end of a CS1 course. This study aims to capture their developing notions of code quality, in order to tailor class activities to support code quality improvements. We directed students to think about the overall quality of small programs by asking them to rank a small set of solutions for a simple problem solving task. Their rankings and explanations have been analysed to identify the criteria underlying their quality assessments. The top quality criteria were Performance (64%), Structure (51%), Conciseness (42%) and Comprehensibility (42%). Although fast execution is a key criteria for ranking, their explanations on why a given option was fast were often flawed, indicating students need more support both to evaluate performance and to include readability or comprehensibility criteria in their assessment.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3587102.3588808","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Coding tasks combined with other activities such as Explain in Plain English or Parsons Puzzles help CS1 students develop core programming skills. Students usually receive feedback on code correctness but limited or no feedback on their code quality. Teaching students to evaluate and improve the quality of their code once it is functionally correct should be included in the curricula towards the end of CS1 or during CS2. However, little is known about students' perceptions of code quality at the end of a CS1 course. This study aims to capture their developing notions of code quality, in order to tailor class activities to support code quality improvements. We directed students to think about the overall quality of small programs by asking them to rank a small set of solutions for a simple problem-solving task. Their rankings and explanations have been analysed to identify the criteria underlying their quality assessments. The top quality criteria were Performance (64%), Structure (51%), Conciseness (42%) and Comprehensibility (42%). Although fast execution is a key criterion for ranking, students' explanations of why a given option was fast were often flawed, indicating they need more support both to evaluate performance and to include readability or comprehensibility criteria in their assessment.
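
To make the ranking exercise concrete, the sketch below shows the kind of comparison such a task might involve. It is an illustrative example only, not the actual task or solutions used in the study: two functionally equivalent Java methods that sum the even values in an array. Both run in linear time, so a ranking between them should rest on structure, conciseness and comprehensibility rather than on speed; a student who judges the shorter version to be "faster" would exhibit exactly the kind of flawed performance reasoning reported above.

// Illustrative sketch only (hypothetical CS1-style task, not taken from the study).
// Both methods return the sum of the even values in the array and both are O(n);
// they differ in structure and conciseness, not in asymptotic performance.
public class EvenSum {

    // Version A: explicit loop with a guard condition.
    static int sumEvensA(int[] values) {
        int sum = 0;
        for (int v : values) {
            if (v % 2 == 0) {
                sum += v;
            }
        }
        return sum;
    }

    // Version B: stream pipeline; more concise, but no faster.
    static int sumEvensB(int[] values) {
        return java.util.Arrays.stream(values)
                               .filter(v -> v % 2 == 0)
                               .sum();
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5, 6};
        System.out.println(sumEvensA(data)); // prints 12
        System.out.println(sumEvensB(data)); // prints 12
    }
}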