"Computer Science in K-12 with Bebras Challenge: 20 Years of Lithuanian Experience"
V. Dagienė, G. Stupurienė, Lina Vinikiene. DOI: https://doi.org/10.1145/3587103.3594233

The first Bebras Challenge was held in Lithuania in 2004. The Challenge quickly became international, with a dozen countries joining in the years that followed. A global community of computer science (CS) educators, known as the Bebras Community, began to emerge; currently, 78 countries participate. The Bebras Challenge aims to stimulate students' interest in CS from the beginning of their schooling. Creating attractive, motivating tasks that require deep thinking in CS is challenging. The number of participants in Lithuania keeps growing: from 24,709 participants in 2015 to 53,975 in 2022.

"Retention in First Stage Undergraduate Computing: Lessons Learned from a Collaborative Learning Intervention"
M. Scott, A. Mitchell, Douglas Brown. DOI: https://doi.org/10.1145/3587103.3594185

It is challenging to retain computing students through the first stage of their undergraduate education. Attrition is high, with many students transferring courses or dropping out. This poster explores preliminary findings from an action research project on improving continuation in first-stage undergraduate computing. Five years of data from Falmouth University's Games Academy in the UK suggest an improvement in first-stage retention from 66.6% in 2017-18 to 91.2% in 2021-22. The findings support prior work on pair programming, media computation, and peer instruction. However, they also highlight the benefits of collaborative learning facilitated by faculty and informed by learning analytics. Peer reviews and pre-submission clinics, student advisor follow-ups, and retrieval practice via synoptic assessment also contributed to the improvement.

"TA Role Change towards Guiding Students' Self-directed Learning through Automation of Instruction for Programming Novices"
Lukas Faessler, M. Dahinden. DOI: https://doi.org/10.1145/3587103.3594192

Introductory programming courses at university level often face the challenge of teaching large student groups with a wide range of prior knowledge. We present a teaching concept that supports individual students' self-directed learning through online tutorials and programming project work with automated tests on the one hand, and regular face-to-face project discussion meetings with teaching assistants (TAs) on the other. In our teaching approach, TAs take on a new central role with more responsibility, coaching students individually and providing personal feedback. Individualizing and decentralizing the courses through one-to-one tutoring by TAs appears to adapt them better to the different needs of individual learners.

"Edit, Run, Error, Repeat: Learning Analytics to Find Struggling Students in Upper Secondary Programming Classes"
Johan Snider. DOI: https://doi.org/10.1145/3587103.3594144

This dissertation research explores the potential of using learning analytics to improve programming education. The research goals include replicating previous research by studying heterogeneous groups of students at upper secondary schools over several months. The expected contribution of this dissertation is to provide insights into how learning analytics can identify struggling students.

"Impact of Student Time Spent on Performance in a CS1 Class, Including Prior Experience Effect"
Frank Vahid, Ashley Pang, Kelly Downey. DOI: https://doi.org/10.1145/3587103.3594172

Computer science instructors have long advised students that success in CS1 requires many hours, such as 8-10 hours/week outside class time, but students often don't believe it. Recently, the most widely used CS1 learning system (zyBooks), which is web-native and records student activity data, began providing instructors with data on the time students spend reading and answering reading questions, solving small homework problems, and coding the programming assignments, all online and auto-graded, representing nearly all of a student's time outside class. In our 300+ student CS1 course at a large state university in Spring 2022, we required all work to be done in the zyBook and analyzed student time, including analysis relative to self-reported prior programming experience. Students who completed the class averaged 6.1 hours/week, with a large standard deviation of 2.3, and earned an average grade of B+. Students averaged 6.9 hours/week in weeks 1-5 leading up to the midterm, peaking at 9 hours in week 5. We found that over 90% of students who averaged 9-12 hours/week earned As or Bs, even those reporting no prior programming experience. Spending under 4 hours/week nearly guaranteed failing the midterm, and almost no students who spent fewer than 6 hours/week got an A on the midterm (unless they had prior experience). We also found that measuring actual time is important, because students overreport time in surveys. With this concrete time data available to share with CS1 students, the hope is that future students may be more likely to allocate the time needed for success in CS1.

"Blink: An Educational Software Debugger for Scratch"
Niko Strijbol, Christophe Scholliers, P. Dawyndt. DOI: https://doi.org/10.1145/3587103.3594189

Debugging is an important aspect of programming. Most programming languages have features and tools that facilitate debugging. As the debugging process is also frustrating, it requires good scaffolding, in which a debugger can be a useful tool [3]. Scratch is a visual block-based programming language that is commonly used to teach programming to children aged 10-14 [4]. It comes with its own integrated development environment (IDE), where children can edit and run their code. This IDE lacks some of the tools that are available in traditional IDEs, such as a debugger. In response to this challenge, we developed Blink, a debugger for Scratch that aims to be usable by the young audience that typically uses Scratch. We present the currently implemented features of the debugger and the challenges we faced while implementing them, both from a user-experience standpoint and a technical standpoint.

"Inclusive Group Work Assessment for Cybersecurity"
Hannan Xiao, Joseph Spring, I. Kuzminykh, Jacopo Cortellazzi. DOI: https://doi.org/10.1145/3587103.3594173

This poster presents an ongoing study that takes a diverse and inclusive approach to designing a practical group work assessment for an undergraduate cybersecurity course delivered to third-year university cohorts. Students were given the choice to either work individually or as part of a group to complete the assignment in virtual laboratories. The study evaluates how student grouping preferences change when the teaching structure adapts in response to the Covid-19 pandemic, and how grouping preference impacts academic performance. Students reflected positively on the assignment and demonstrated a preference for treating group information as private. No disputes regarding group marks or contributions from different group members arose, despite the number of groups involved. Students have taken responsibility for their choices and have accepted the outcomes of their teamwork.

"Considering Computing Education in Undergraduate Computer Science Programmes"
Q. Cutts, Maria Kallia, Ruth E. Anderson, T. Crick, M. Devlin, Mohammed F. Farghally, C. Mirolo, Ragnhild Kobro Runde, O. Seppälä, J. Urquiza-Fuentes, J. Vahrenhold. DOI: https://doi.org/10.1145/3587103.3594210

This working group concerns the adoption of computing education (CE) in undergraduate computer science (CS) programmes. Such adoption requires both arguments sufficient to persuade our departmental colleagues and our education committees, and also curricular outlines to assist our colleagues in delivery. The goal of the group is to develop examples of both arguments and curricular outlines, drawing on any prior experience available.

"Checking Conformance to a Subset of the Python Language"
M. Wermelinger. DOI: https://doi.org/10.1145/3587103.3594155

Introductory courses usually teach only a small subset of a programming language and its library, in order to focus on the general concepts rather than overwhelm students with the syntactic, semantic, and API minutiae of a particular language. This paper presents courseware that checks whether a program only uses the subset of the Python language and library defined by the instructor. This makes it possible to automatically check that programming examples, exercises, and assessments use only the taught constructs. It also helps detect student code with advanced constructs, possibly copied from Q&A sites or generated by large language models. The tool is easy to install, configure, and use. It also checks Python code in Jupyter notebooks, a popular format for interactive textbooks and assessment handouts.

"Automatic Feedback During Coding Exams: Curse or Blessing?"
M. Dahinden, Lukas Faessler. DOI: https://doi.org/10.1145/3587103.3594184

Automatic feedback during a coding exam can be very useful for both the students and the examiners. It can provide a quick and objective evaluation of the code written by the students, which can save the examiners a lot of time and ensure consistency in grading. However, an exam is also a stressful, emotional situation for students. The question therefore arises as to what this immediate feedback triggers in weaker students. Does it help them recognise their mistakes better, or does it tend to prevent them from doing so? How should feedback be designed so that it has the most positive effect possible? What are the key mistakes? These and other points will be discussed in our poster session with interested participants.
