On Novices' Interaction with Compiler Error Messages: A Human Factors Approach

J. Prather, Raymond Pettit, Kayla Holcomb McMurry, Alani Peters, J. Homer, Nevan Simone, Maxine S. Cohen
DOI: 10.1145/3105726.3106169
Published in: Proceedings of the 2017 ACM Conference on International Computing Education Research, August 14, 2017
Citations: 55

Abstract

The difficulty in understanding compiler error messages can be a major impediment to novice student learning. To alleviate this issue, multiple researchers have run experiments enhancing compiler error messages in automated assessment tools for programming assignments. The conclusions reached by these published experiments appear to be conflicting. We examine these experiments and propose five potential reasons for the inconsistent conclusions concerning enhanced compiler error messages: (1) students do not read them, (2) researchers are measuring the wrong thing, (3) the effects are hard to measure, (4) the messages are not properly designed, (5) the messages are properly designed, but students do not understand them in context due to increased cognitive load. We constructed mixed-methods experiments designed to address reasons 1 and 5 with a specific automated assessment tool, Athene, which previously reported inconclusive results. Testing student comprehension of the enhanced compiler error messages outside the context of an automated assessment tool demonstrated their effectiveness over standard compiler error messages. Quantitative results from a 60-minute one-on-one think-aloud study with 31 students did not show a substantial increase in student learning outcomes over the control. However, qualitative results from the one-on-one think-aloud study indicated that most students are reading the enhanced compiler error messages and generally make effective changes after encountering them.