
Proceedings of the 23rd Australasian Computing Education Conference: Latest Publications

Automated Classification of Computing Education Questions using Bloom’s Taxonomy
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442305
James Zhang, C. Wong, Nasser Giacaman, Andrew Luxton-Reilly
Bloom’s taxonomy is a well-known and widely used method of classifying assessment tasks. However, the application of Bloom’s taxonomy in computing education is often difficult, and the classification often suffers from poor inter-rater reliability. Automated approaches using machine learning techniques show potential, but their performance is limited by the quality and quantity of the training set. We implement a machine learning model to classify programming questions according to Bloom’s taxonomy, using Google’s BERT as the base model and the Canterbury QuestionBank as a source of questions categorised by computing education experts. Our results demonstrate that the model was able to predict the categories with moderate success overall, performing better on questions at the lower levels of Bloom’s taxonomy. This work demonstrates the potential for machine learning to assist teachers in the analysis of assessment items.
Citations: 12
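The paper fine-tunes BERT on expert-labelled questions; purely as an illustration of the classification task itself, here is a toy keyword-verb baseline in Python. The verb-to-level mapping is a hypothetical sample, not the Canterbury QuestionBank’s labels, and verb matching is far weaker than a fine-tuned language model:

```python
# Toy keyword-verb baseline for Bloom classification (illustrative only; the
# paper fine-tunes BERT, which learns far richer features than verb matching).
# The verb-to-level mapping below is a hypothetical sample.
BLOOM_VERBS = {
    "define": "Remember", "list": "Remember",
    "explain": "Understand", "summarise": "Understand",
    "implement": "Apply", "use": "Apply",
    "trace": "Analyse", "compare": "Analyse",
    "justify": "Evaluate", "critique": "Evaluate",
    "design": "Create", "write": "Create",
}

def classify_question(question: str) -> str:
    """Return the Bloom level of the first recognised verb, else 'Unknown'."""
    for word in question.lower().split():
        level = BLOOM_VERBS.get(word.strip("?,.;:"))
        if level:
            return level
    return "Unknown"

print(classify_question("Explain what this loop prints."))          # Understand
print(classify_question("Write a function that reverses a list."))  # Create
```

A baseline like this also hints at why inter-rater reliability is poor: the same verb (e.g. “write”) can appear in questions at very different cognitive levels.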
Reflective Debugging in Spinoza V3.0
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442313
Fatima Abu Deeb, T. Hickey
In this paper we present an online IDE (Spinoza 3.0) for teaching Python programming in which the students are (sometimes) required to verbally reflect on their error messages and unit test failures before being allowed to modify their code. This system was designed to be used in large synchronous in-person, remote, or hybrid classes for either in-class problem solving or out-of-class homework problems. For each student and problem, the system makes a random choice about whether to require reflection on all debugging steps. If the student/problem pair required reflection, then after each time the student ran the program and received feedback as an error message or a set of unit test results, they were required to type in a description of the bug and a plan for how to modify the program to eliminate the bug. The main result is that the number of debugging steps to reach a correct solution was statistically significantly less for problems where the students were required to reflect on each debugging step. We suggest that future developers of pedagogical IDEs consider adding features which require students to reflect frequently during the debugging process.
Citations: 4
Exploring the Effects of Contextualized Problem Descriptions on Problem Solving
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442302
Juho Leinonen, Paul Denny, Jacqueline L. Whalley
Prior research has reported conflicting results on whether the presence of a contextualized narrative in a problem statement is a help or a hindrance to students when solving problems. On the one hand, results from psychology and mathematics seem to show that contextualized problems can be easier for students. On the other, a recent ITiCSE working group exploring the “problem description effect” found no such benefits for novice programmers. In this work, we study the effects of contextualized problems on problem-solving in an introductory programming course. Students were divided into three groups. Each group was given two different programming problems, involving linear equations, to solve. In the first group both problem statements used the same context, while in the second group the context was switched. The third group was given problems that were mathematically similar to the other two groups, but which lacked any contextualized narrative. Contrary to earlier findings in introductory programming, our results show that context does have an effect on student performance. Interestingly, depending on the problem, context was either helpful or unhelpful to students. We hypothesize that these results are explained by a lack of familiarity with the context when the context was unhelpful, and by poor mathematical skills when the context was helpful. These findings contribute to our understanding of how contextualized problem statements affect novice programmers and their problem solving.
Citations: 10
Examining the Exams: Bloom and Database Modelling and Design
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442301
A. Imbulpitiya, Jacqueline L. Whalley, Mali Senapathi
This paper presents the development of an initial framework for the classification and analysis of questions in database modelling and design examinations. Guidelines are provided for the classification of these questions using the revised Bloom’s taxonomy of educational objectives. We report the results of applying the classification scheme to 122 questions from 19 introductory database examinations. We found that there was little variation in the topics and question styles employed and that the degree to which design and modelling is assessed in a typical introductory undergraduate database course’s examination varies widely. We also found gaps in the intellectual complexity of the questions with the examinations failing to provide questions at the analyse and evaluate levels of the revised Bloom’s taxonomy.
Citations: 3
Assessing Understanding of Maintainability using Code Review
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442303
E. Tempero, Yu-Cheng Tu
Maintainability is an important quality attribute of code, and so should be a key learning outcome for software engineering programmes. This raises the question of how to assess this learning outcome. In this practical report we describe how we exploited the code review mechanism provided by GitHub, the “pull request”, to assess students’ understanding of maintainability. The approach requires a slightly non-standard workflow from the students, and a reporting tool that assembles the code review comments in a form suitable for assessment. We detail what we learned in making it work, which should allow others to conduct similar kinds of assessment.
Citations: 2
Novice Difficulties with Analyzing the Running Time of Short Pieces of Code
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3441855
Ibrahim Albluwi, Haley Zeng
This work attempts to understand how novices approach runtime analysis tasks, where the number of operations performed by a given program needs to be analyzed by looking at the code. Such tasks are commonly used by instructors in courses on data structures and algorithms. The focus of this work is on the difficulties faced by novices when approaching such tasks and how novices differ from experts in how they approach these tasks. The study involved one-on-one think-aloud interviews with five instructors of an introductory data structures and algorithms course and 14 students enrolled in that course. The interviews were analyzed using a framework introduced in this study for describing the steps needed to perform runtime analysis of simple pieces of code. The study found the interviewed experts to clearly differentiate between the task of formulating a summation describing the number of operations performed by the code and the task of solving that summation to find how many operations are done. Experts in the study also showed fluency in looking at the code from different perspectives and sometimes re-wrote the code to simplify the analysis task. On the other hand, the study found several novices to make mistakes because of not explicitly tracing the code and because of not explicitly describing the number of performed operations using a summation. Many of the novices also seemed inclined to approach the analysis of nested loops by multiplying two running times, even when that is incorrect. The study concluded with a discussion of the implications of these results on the teaching and assessment of runtime analysis.
Citations: 3
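The two expert steps the study identifies, first formulating a summation describing the number of operations and then solving it, can be made concrete with a small sketch. The triangular nested loop below is our own illustration, not one of the study’s interview tasks:

```python
def count_operations(n: int) -> int:
    """Empirically count how many times the inner statement executes."""
    count = 0
    for i in range(n):
        for j in range(i):  # inner loop body runs i times on iteration i
            count += 1
    return count

# Step 1 (formulate): the inner statement runs sum_{i=0}^{n-1} i times.
# Step 2 (solve): that summation has the closed form n(n-1)/2.
def closed_form(n: int) -> int:
    return n * (n - 1) // 2

for n in (1, 5, 100):
    assert count_operations(n) == closed_form(n)
print(closed_form(100))  # prints 4950
```

Note that naively multiplying the two loop bounds would give n², the kind of “multiply the running times” error the study observed in novices; the summation step is what captures the dependence of the inner loop on i.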
Visual Analogy for Understanding Polymorphism Types
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442304
Nathan Mills, Allen Wang, Nasser Giacaman
Many visualisation tools have been designed to help students with learning programming concepts, often showing positive impact on student performance. Analogies have also often been used to assist in teaching students various programming concepts, and have similarly been shown to boost student confidence in those concepts. Less work has been done to specifically target polymorphism and the misconceptions students face with it. This study presents the design of a new visualisation tool along with its supporting analogy. It aims to assist in teaching the concept of polymorphism to students and to correct the misconceptions students hold when dealing with it. Experiences using the tool for a CS2 OOP course are presented, including engagement logs and student feedback. The paper concludes with findings of this experience, discussing directions for future work in this area.
Citations: 2
The Impact of Multiple Choice Question Design on Predictions of Performance
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442306
C. Wong, Paul Denny, Andrew Luxton-Reilly, Jacqueline L. Whalley
Multiple choice questions (MCQs) are a popular question format in introductory programming courses, as they are a convenient means to provide scalable assessments. However, with typically only four or five answer options and a single correct answer, MCQs are prone to guessing and may lead students into a false sense of confidence. One approach to mitigate this problem is the use of Multiple-Answer MCQs (MAMCQs), where more than one answer option may be correct. This provides a larger solution space and may help students form more accurate assessments of their knowledge. We explore the use of this question format on an exam in a very large introductory programming course. The exam consisted of both MCQ and MAMCQ sections, and students were invited to predict their scores for each section. In addition, students were asked to report their preference for question format. We found that students over-predicted their scores on the MCQ section to a greater extent, and that these prediction errors were more pronounced amongst less capable students. Interestingly, we found that students did not have a strong preference for MCQs over MAMCQs, and we recommend broader adoption of the latter format.
Citations: 2
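Why MAMCQs reduce guessing can be quantified with a back-of-envelope model, assuming a uniform random guesser (an idealised illustration, not a result from the paper):

```python
# With k options, a single-answer MCQ offers k equally likely guesses, while a
# multiple-answer MAMCQ (with at least one correct option) offers 2**k - 1
# possible answer subsets. Uniform random guessing is an idealised assumption.
def mcq_guess_rate(k: int) -> float:
    return 1 / k

def mamcq_guess_rate(k: int) -> float:
    return 1 / (2 ** k - 1)

k = 4
print(f"MCQ guess rate:   {mcq_guess_rate(k):.3f}")    # 0.250
print(f"MAMCQ guess rate: {mamcq_guess_rate(k):.3f}")  # 0.067
```

Under this model, a lucky guess on a four-option MAMCQ is nearly four times less likely than on a conventional MCQ, which is consistent with the abstract’s claim that the larger solution space deflates false confidence.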
A Simple, Language-Independent Approach to Identifying Potentially At-Risk Introductory Programming Students
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442318
Brett A. Becker, Catherine Mooney, Amruth N. Kumar, Seán Russell
For decades computing educators have been trying to identify and predict at-risk students, particularly early in the first programming course. These efforts range from analyzing demographic data that exists before undergraduate entrance, to using instruments such as concept inventories, to analyzing data arising during education. Such efforts have had varying degrees of success, have not seen widespread adoption, and have left room for improvement. We analyse results from a two-year study with several hundred students in the first year of programming, comprising majors and non-majors. We find evidence supporting the hypothesis that engagement with extra credit assessment provides an effective method of differentiating students who are not at risk from those who may be. Further, this method can be used to predict risk early in the semester, as any engagement – not necessarily completion – is enough to make this differentiation. Additionally, we show that this approach is not dependent on any one programming language. In fact, the extra credit opportunities need not even involve programming. Our results may be of interest to educators, as well as researchers who may want to replicate these results in other settings.
Citations: 5
Analysis of a Process for Introductory Debugging 介绍性调试过程分析
Pub Date : 2021-02-02 DOI: 10.1145/3441636.3442300
Jacqueline L. Whalley, Amber Settle, Andrew Luxton-Reilly
Debugging code is a complex task that requires knowledge about the mechanics of a programming language, the purpose of a given program, and an understanding of how the program achieves the purpose intended. It is generally accepted that prior experience with similar bugs improves the debugging process and that a systematic process is needed to be able to successfully move from the symptoms of a bug to the cause. Students who are learning to program may struggle with one or more aspects of debugging and, anecdotally, spend a lot of their time debugging faulty code. In this paper we analyse student answers to questions designed to focus student attention on the symptoms of a bug and to use those symptoms to generate a hypothesis about the cause of a bug. To ensure students focus on the symptoms rather than the code, we use paper-based exercises that ask students to reflect on various bugs and to hypothesize about the cause. We analyse the students' responses to the questions and find that using our structured process most students are able to generalize from a single failing test case to the likely problem in the code, but they are much less able to identify the appropriate location or an actual fix.
{"title":"Analysis of a Process for Introductory Debugging","authors":"Jacqueline L. Whalley, Amber Settle, Andrew Luxton-Reilly","doi":"10.1145/3441636.3442300","DOIUrl":"https://doi.org/10.1145/3441636.3442300","url":null,"abstract":"Debugging code is a complex task that requires knowledge about the mechanics of a programming language, the purpose of a given program, and an understanding of how the program achieves the purpose intended. It is generally accepted that prior experience with similar bugs improves the debugging process and that a systematic process is needed to be able to successfully move from the symptoms of a bug to the cause. Students who are learning to program may struggle with one or more aspects of debugging and, anecdotally, spend a lot of their time debugging faulty code. In this paper we analyse student answers to questions designed to focus student attention on the symptoms of a bug and to use those symptoms to generate a hypothesis about the cause of a bug. To ensure students focus on the symptoms rather than the code, we use paper-based exercises that ask students to reflect on various bugs and to hypothesize about the cause. We analyse the students’ responses to the questions and find that using our structured process most students are able to generalize from a single failing test case to the likely problem in the code, but they are much less able to identify the appropriate location or an actual fix.","PeriodicalId":334899,"journal":{"name":"Proceedings of the 23rd Australasian Computing Education Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126168962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
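The symptom-to-hypothesis process the abstract describes can be illustrated with a small hypothetical exercise (invented here, not taken from the paper): students first see a failing test case as the observed symptom, then form a hypothesis about the cause before locating and fixing the bug.

```python
# Hypothetical exercise in the style the abstract describes (not from
# the paper): symptom first, then hypothesis, then location and fix.

def average(values):
    """Intended: return the arithmetic mean of a non-empty list."""
    total = 0
    for i in range(1, len(values)):  # bug: skips the first element
        total += values[i]
    return total / len(values)

# Symptom (single failing test case): average([4, 4, 4]) returns
# roughly 2.67 instead of the expected 4.0.
symptom = average([4, 4, 4])

# Hypothesis a student might form from the symptom alone: the sum is
# too small by about one element's worth, so the loop probably skips
# a value. Location and fix: start the range at 0, or sum() directly.

def average_fixed(values):
    """Corrected version of the exercise's buggy function."""
    return sum(values) / len(values)
```

The exercise mirrors the paper's finding: reasoning from the symptom ("the result is too small") to the likely problem ("a value is being skipped") is the easier step; pinpointing `range(1, ...)` as the location is where students struggle.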
Citations: 4