Automated Questions About Learners' Own Code Help to Detect Fragile Prerequisite Knowledge
T. Lehtinen, O. Seppälä, A. Korhonen. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588787 (published 2023-06-28)

Students are able to produce correctly functioning program code even when they have only a fragile understanding of how it actually works. Questions derived automatically from a student's individual exercise submission (QLCs) can probe whether, and how well, the student understands the structure and logic of the code they just created. Prior research studied this approach in the context of a first programming course. We replicate that study in a follow-up programming course for engineering students, which includes a recap of general CS1 concepts. The task was the classic rainfall problem, which was solved by 90% of the students. The QLCs generated from each passing submission were kept intentionally simple, yet 27% of the students failed at least one of them. Students who struggled with questions about their own program logic had a lower median of overall course points than students who answered correctly.
{"title":"Automated Questions About Learners' Own Code Help to Detect Fragile Prerequisite Knowledge","authors":"T. Lehtinen, O. Seppälä, A. Korhonen","doi":"10.1145/3587102.3588787","DOIUrl":"https://doi.org/10.1145/3587102.3588787","url":null,"abstract":"Students are able to produce correctly functioning program code even though they have a fragile understanding of how it actually works. Questions derived automatically from individual exercise submissions (QLC) can probe if and how well the students understand the structure and logic of the code they just created. Prior research studied this approach in the context of the first programming course. We replicate the study on a follow-up programming course for engineering students which contains a recap of general concepts in CS1. The task was the classic rainfall problem which was solved by 90% of the students. The QLCs generated from each passing submission were kept intentionally simple, yet 27% of the students failed in at least one of them. Students who struggled with questions about their own program logic had a lower median for overall course points than students who answered correctly.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"54 61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121261170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Exploring Programming Task Creation of Primary School Teachers in Training
Luisa Greifenstein, Ute Heuer, G. Fraser. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588809 (published 2023-06-24)

Introducing computational thinking into primary school curricula implies that teachers have to prepare appropriate lesson material. Typically this includes creating programming tasks, which may overwhelm primary school teachers who lack programming subject knowledge. Inadequate example code in the resulting tasks may negatively affect learning, and students might adopt bad programming habits or misconceptions. To avoid this problem, automated program analysis tools have the potential to help scaffold the task creation process. For example, static program analysis tools can automatically detect both good and bad code patterns and provide hints on improving the code. To explore how teachers generally proceed when creating programming tasks, whether tool support can help, and how it is perceived by teachers, we performed a pre-study with 26 and a main study with 59 teachers in training, using the LitterBox static analysis tool for Scratch. We find that teachers in training (1) often start by brainstorming thematic ideas rather than setting learning objectives, (2) write code before the task text, (3) give more hints in their task texts and create fewer bugs when supported by LitterBox, and (4) mention both positive aspects of the tool and suggestions for improvement. These findings improve our understanding of how to design teacher training around the support teachers need when creating programming tasks.
{"title":"Exploring Programming Task Creation of Primary School Teachers in Training","authors":"Luisa Greifenstein, Ute Heuer, G. Fraser","doi":"10.1145/3587102.3588809","DOIUrl":"https://doi.org/10.1145/3587102.3588809","url":null,"abstract":"Introducing computational thinking in primary school curricula implies that teachers have to prepare appropriate lesson material. Typically this includes creating programming tasks, which may overwhelm primary school teachers with lacking programming subject knowledge. Inadequate resulting example code may negatively affect learning, and students might adopt bad programming habits or misconceptions. To avoid this problem, automated program analysis tools have the potential to help scaffolding task creation processes. For example, static program analysis tools can automatically detect both good and bad code patterns, and provide hints on improving the code. To explore how teachers generally proceed when creating programming tasks, whether tool support can help, and how it is perceived by teachers, we performed a pre-study with 26 and a main study with 59 teachers in training and the LitterBox static analysis tool for Scratch. We find that teachers in training (1) often start with brainstorming thematic ideas rather than setting learning objectives, (2) write code before the task text, (3) give more hints in their task texts and create fewer bugs when supported by LitterBox, and (4) mention both positive aspects of the tool and suggestions for improvement. These findings provide an improved understanding of how to inform teacher training with respect to support needed by teachers when creating programming tasks.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"172 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116309400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Towards a Success Model for Automated Programming Assessment Systems Used as a Formative Assessment Tool
Clemens Sauerwein, Tobias Antensteiner, Stefan Oppl, Iris Groher, Alexander Meschtscherjakov, Philipp Zech, R. Breu. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588848 (published 2023-06-08)
The assessment of source code is a central and important task for lecturers of university programming courses. In this task, educators are confronted with growing numbers of students with increasingly diverse prerequisites, a shortage of tutors, and highly dynamic learning objectives. To support lecturers in meeting these challenges, automated programming assessment systems (APASs), which facilitate formative assessment by providing timely, objective feedback, are a promising solution. Measuring the effectiveness and success of these platforms is crucial to understanding how they should be designed, implemented, and used. However, research and practice lack a common understanding of the aspects influencing the success of APASs. To address this, we devised a success model for APASs based on established models from information systems and blended learning research, and conducted an online survey with 414 students using the same APAS. In addition, we examined the role of mediators intervening between technology-, system-, or self-related factors and users' satisfaction with APASs. Ultimately, our research yielded a success model comprising seven constructs that influence user satisfaction with an APAS.
{"title":"Towards a Success Model for Automated Programming Assessment Systems Used as a Formative Assessment Tool","authors":"Clemens Sauerwein, Tobias Antensteiner, Stefan Oppl, Iris Groher, Alexander Meschtscherjakov, Philipp Zech, R. Breu","doi":"10.1145/3587102.3588848","DOIUrl":"https://doi.org/10.1145/3587102.3588848","url":null,"abstract":"The assessment of source code in university education is a central and important task for lecturers of programming courses. In doing so, educators are confronted with growing numbers of students having increasingly diverse prerequisites, a shortage of tutors, and highly dynamic learning objectives. To support lecturers in meeting these challenges, the use of automated programming assessment systems (APASs), facilitating formative assessments by providing timely, objective feedback, is a promising solution. Measuring the effectiveness and success of these platforms is crucial to understanding how such platforms should be designed, implemented, and used. However, research and practice lack a common understanding of aspects influencing the success of APASs. To address these issues, we have devised a success model for APASs based on established models from information systems as well as blended learning research and conducted an online survey with 414 students using the same APAS. In addition, we examined the role of mediators intervening between technology-, system- or self-related factors, respectively, and the users' satisfaction with APASs. Ultimately, our research has yielded a model of success comprising seven constructs influencing user satisfaction with an APAS.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114954801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Student Usage of Q&A Forums: Signs of Discomfort?
Naaz Sibia, Angela Zavaleta Bernuy, J. J. Williams, Michael Liut, Andrew Petersen. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588842 (published 2023-05-30)
Q&A forums are widely used in large classes to provide scalable support. In addition to offering students a space to ask questions, these forums aim to create a community and promote engagement. Prior literature suggests that the way students participate in Q&A forums varies and that most students do not actively post questions or engage in discussions. Students may display different participation behaviours depending on their comfort levels in the class. This paper investigates students' use of a Q&A forum in a CS1 course. We also analyze student opinions about the forum to explain the observed behaviour, focusing on students' lack of visible participation (lurking, anonymity, private posting). We analyzed forum data collected in a CS1 course across two consecutive years and invited students to complete a survey about perspectives on their forum usage. Despite a small cohort of highly engaged students, we confirmed that most students do not actively read or post on the forum. We discuss students' reasons for the low level of engagement and barriers to participating visibly. Common reasons include fearing a lack of knowledge and repercussions from being visible to the student community.
{"title":"Student Usage of Q&A Forums: Signs of Discomfort?","authors":"Naaz Sibia, Angela Zavaleta Bernuy, J. J. Williams, Michael Liut, Andrew Petersen","doi":"10.1145/3587102.3588842","DOIUrl":"https://doi.org/10.1145/3587102.3588842","url":null,"abstract":"Q&A forums are widely used in large classes to provide scalable support. In addition to offering students a space to ask questions, these forums aim to create a community and promote engagement. Prior literature suggests that the way students participate in Q&A forums varies and that most students do not actively post questions or engage in discussions. Students may display different participation behaviours depending on their comfort levels in the class. This paper investigates students' use of a Q&A forum in a CS1 course. We also analyze student opinions about the forum to explain the observed behaviour, focusing on students' lack of visible participation (lurking, anonymity, private posting). We analyzed forum data collected in a CS1 course across two consecutive years and invited students to complete a survey about perspectives on their forum usage. Despite a small cohort of highly engaged students, we confirmed that most students do not actively read or post on the forum. We discuss students' reasons for the low level of engagement and barriers to participating visibly. Common reasons include fearing a lack of knowledge and repercussions from being visible to the student community.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129359011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

ChatGPT, Can You Generate Solutions for my Coding Exercises? An Evaluation on its Effectiveness in an undergraduate Java Programming Course
Eng Lieh Ouh, B. Gan, Kyong Jin Shim, Swavek Wlodkowski. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588794 (published 2023-05-23)
In this study, we assess the efficacy of employing the ChatGPT language model to generate solutions for coding exercises within an undergraduate Java programming course. ChatGPT, a large-scale, deep learning-driven natural language processing model, is capable of producing programming code based on textual input. Our evaluation involves analyzing ChatGPT-generated solutions for 80 diverse programming exercises and comparing them to the correct solutions. Our findings indicate that ChatGPT accurately generates Java programming solutions, which are characterized by high readability and well-structured organization. Additionally, the model can produce alternative, memory-efficient solutions. However, as a natural language processing model, ChatGPT struggles with coding exercises containing non-textual descriptions or class files, leading to invalid solutions. In conclusion, ChatGPT holds potential as a valuable tool for students seeking to overcome programming challenges and explore alternative approaches to solving coding problems. By understanding its limitations, educators can design coding exercises that minimize the potential for misuse as a cheating aid while maintaining their validity as assessment tools.
{"title":"ChatGPT, Can You Generate Solutions for my Coding Exercises? An Evaluation on its Effectiveness in an undergraduate Java Programming Course.","authors":"Eng Lieh Ouh, B. Gan, Kyong Jin Shim, Swavek Wlodkowski","doi":"10.1145/3587102.3588794","DOIUrl":"https://doi.org/10.1145/3587102.3588794","url":null,"abstract":"In this study, we assess the efficacy of employing the ChatGPT language model to generate solutions for coding exercises within an undergraduate Java programming course. ChatGPT, a large-scale, deep learning-driven natural language processing model, is capable of producing programming code based on textual input. Our evaluation involves analyzing ChatGPT-generated solutions for 80 diverse programming exercises and comparing them to the correct solutions. Our findings indicate that ChatGPT accurately generates Java programming solutions, which are characterized by high readability and well-structured organization. Additionally, the model can produce alternative, memory-efficient solutions. However, as a natural language processing model, ChatGPT struggles with coding exercises containing non-textual descriptions or class files, leading to invalid solutions. In conclusion, ChatGPT holds potential as a valuable tool for students seeking to overcome programming challenges and explore alternative approaches to solving coding problems. By understanding its limitations, educators can design coding exercises that minimize the potential for misuse as a cheating aid while maintaining their validity as assessment tools.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130064568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Barriers and Self-Efficacy: A Large-Scale Study on the Impact of OSS Courses on Student Perceptions
Larissa Salerno, S. Tonhão, Igor Steinmacher, Christoph Treude. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588789 (published 2023-04-28)
Open source software (OSS) development offers a unique opportunity for students in Software Engineering to experience and participate in large-scale software development; however, the impact of such courses on students' self-efficacy and the challenges faced by students are not well understood. This paper addresses this gap by analyzing data from multiple instances of OSS development courses at universities in different countries, reporting on how students' self-efficacy changed as a result of taking the course as well as on the barriers and challenges students faced.
{"title":"Barriers and Self-Efficacy: A Large-Scale Study on the Impact of OSS Courses on Student Perceptions","authors":"Larissa Salerno, S. Tonhão, Igor Steinmacher, Christoph Treude","doi":"10.1145/3587102.3588789","DOIUrl":"https://doi.org/10.1145/3587102.3588789","url":null,"abstract":"Open source software (OSS) development offers a unique opportunity for students in Software Engineering to experience and participate in large-scale software development, however, the impact of such courses on students' self-efficacy and the challenges faced by students are not well understood. This paper aims to address this gap by analyzing data from multiple instances of OSS development courses at universities in different countries and reporting on how students' self-efficacy changed as a result of taking the course, as well as the barriers and challenges faced by students.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130428203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Detecting Code Quality Issues in Pre-written Templates of Programming Tasks in Online Courses
Anastasiia Birillo, Elizaveta Artser, Yaroslav Golubev, Maria Tigina, H. Keuning, Nikolay Vyahhi, T. Bryksin. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588800 (published 2023-04-24)
In this work, we developed an algorithm for detecting code quality issues in the templates of online programming tasks, validated it, and conducted an empirical study on the dataset of student solutions. The algorithm consists of analyzing recurring unfixed issues in solutions of different students, matching them with the code of the template, and then filtering the results. Our manual validation on a subset of tasks demonstrated a precision of 80.8% and a recall of 73.3%. We used the algorithm on 415 Java tasks from the JetBrains Academy platform and discovered that as much as 14.7% of tasks have at least one issue in their template, thus making it harder for students to learn good code quality practices. We describe our results in detail, provide several motivating examples and specific cases, and share the feedback of the developers of the platform, who fixed 51 issues based on the output of our approach.
{"title":"Detecting Code Quality Issues in Pre-written Templates of Programming Tasks in Online Courses","authors":"Anastasiia Birillo, Elizaveta Artser, Yaroslav Golubev, Maria Tigina, H. Keuning, Nikolay Vyahhi, T. Bryksin","doi":"10.1145/3587102.3588800","DOIUrl":"https://doi.org/10.1145/3587102.3588800","url":null,"abstract":"In this work, we developed an algorithm for detecting code quality issues in the templates of online programming tasks, validated it, and conducted an empirical study on the dataset of student solutions. The algorithm consists of analyzing recurring unfixed issues in solutions of different students, matching them with the code of the template, and then filtering the results. Our manual validation on a subset of tasks demonstrated a precision of 80.8% and a recall of 73.3%. We used the algorithm on 415 Java tasks from the JetBrains Academy platform and discovered that as much as 14.7% of tasks have at least one issue in their template, thus making it harder for students to learn good code quality practices. We describe our results in detail, provide several motivating examples and specific cases, and share the feedback of the developers of the platform, who fixed 51 issues based on the output of our approach.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"&NA; 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130381686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Using Sensor-Based Programming to Improve Self-Efficacy and Outcome Expectancy for Students from Underrepresented Groups
Hussel Suriyaarachchi, Alaeddin Nassani, Paul Denny, Suranga Nanayakkara. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588854 (published 2023-04-13)
Knowledge of programming and computing is becoming increasingly valuable in today's world, and thus it is crucial that students from all backgrounds have the opportunity to learn. As the teaching of computing at high school becomes more common, there is a growing need for approaches and tools that are effective and engaging for all students. Especially for students from groups that are traditionally underrepresented at the university level, positive experiences at high school can be an important factor in their future academic choices. In this paper we report on a hands-on programming workshop that we ran over multiple sessions for Māori and Pasifika high-school students, who are underrepresented in computer science at the tertiary level in New Zealand. In the workshop, participants developed Scratch programs starting from a simple template we provided. To control the action in their programs, half of the participants used standard mouse and keyboard inputs, while the other half had access to plug-and-play sensors providing real-time environmental data. We explore how students' perceptions of self-efficacy and outcome expectancy, both key constructs driving academic career choices, changed during the workshop and how they were affected by the availability of the sensor toolkit. We found that participants enjoyed the workshop and reported improved self-efficacy with or without the toolkit, but outcome expectancy improved only for students who used the sensor toolkit.
{"title":"Using Sensor-Based Programming to Improve Self-Efficacy and Outcome Expectancy for Students from Underrepresented Groups","authors":"Hussel Suriyaarachchi, Alaeddin Nassani, Paul Denny, Suranga Nanayakkara","doi":"10.1145/3587102.3588854","DOIUrl":"https://doi.org/10.1145/3587102.3588854","url":null,"abstract":"Knowledge of programming and computing is becoming increasingly valuable in today's world, and thus it is crucial that students from all backgrounds have the opportunity to learn. As the teaching of computing at high-school becomes more common, there is a growing need for approaches and tools that are effective and engaging for all students. Especially for students from groups that are traditionally underrepresented at university level, positive experiences at high-school can be an important factor for their future academic choices. In this paper we report on a hands-on programming workshop that we ran over multiple sessions for Maori and Pasifika high-school students who are underrepresented in computer science at the tertiary level in New Zealand. In the workshop, participants developed Scratch programs starting from a simple template we provided. In order to control the action in their programs, half of the participants used standard mouse and keyboard inputs, and the other half had access to plug-and-play sensors that provided real-time environmental data. We explore how students' perceptions of self-efficacy and outcome expectancy -- both key constructs driving academic career choices -- changed during the workshop and how these were impacted by the availability of the sensor toolkit. We found that participants enjoyed the workshop and reported improved self-efficacy with or without use of the toolkit, but outcome expectancy improved only for students who used the sensor toolkit.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121017708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Comparing Code Explanations Created by Students and Large Language Models
Juho Leinonen, Paul Denny, S. Macneil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, Arto Hellas. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588785 (published 2023-04-08)
Reasoning about code and explaining its purpose are fundamental skills for computer scientists. There has been extensive research in the field of computing education on the relationship between a student's ability to explain code and other skills such as writing and tracing code. In particular, the ability to describe at a high level of abstraction how code will behave over all possible inputs correlates strongly with code-writing skills. However, developing the expertise to comprehend and explain code accurately and succinctly is a challenge for many students. Existing pedagogical approaches that scaffold the ability to explain code, such as producing exemplar code explanations on demand, do not currently scale well to large classrooms. The recent emergence of powerful large language models (LLMs) may offer a solution. In this paper, we explore the potential of LLMs to generate explanations that can serve as examples to scaffold students' ability to understand and explain code. To evaluate LLM-created explanations, we compare them with explanations created by students in a large course (n ≈ 1000) with respect to accuracy, understandability, and length. We find that LLM-created explanations, which can be produced automatically on demand, are rated as significantly easier to understand and more accurate summaries of code than student-created explanations. We discuss the significance of this finding and suggest how such models can be incorporated into introductory programming education.
{"title":"Comparing Code Explanations Created by Students and Large Language Models","authors":"Juho Leinonen, Paul Denny, S. Macneil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, Arto Hellas","doi":"10.1145/3587102.3588785","DOIUrl":"https://doi.org/10.1145/3587102.3588785","url":null,"abstract":"Reasoning about code and explaining its purpose are fundamental skills for computer scientists. There has been extensive research in the field of computing education on the relationship between a student's ability to explain code and other skills such as writing and tracing code. In particular, the ability to describe at a high-level of abstraction how code will behave over all possible inputs correlates strongly with code writing skills. However, developing the expertise to comprehend and explain code accurately and succinctly is a challenge for many students. Existing pedagogical approaches that scaffold the ability to explain code, such as producing exemplar code explanations on demand, do not currently scale well to large classrooms. The recent emergence of powerful large language models (LLMs) may offer a solution. In this paper, we explore the potential of LLMs in generating explanations that can serve as examples to scaffold students' ability to understand and explain code. To evaluate LLM-created explanations, we compare them with explanations created by students in a large course (n ≈ 1000) with respect to accuracy, understandability and length. We find that LLM-created explanations, which can be produced automatically on demand, are rated as being significantly easier to understand and more accurate summaries of code than student-created explanations. We discuss the significance of this finding, and suggest how such models can be incorporated into introductory programming education.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131222889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree?
K. Malinka, Martin Peresíni, Anton Firc, Ondřej Hujňák, Filip Janus. ITiCSE 2023. DOI: https://doi.org/10.1145/3587102.3588827 (published 2023-03-20)
In late 2022, OpenAI released a new version of ChatGPT, a sophisticated natural language processing system capable of holding natural conversations while preserving and responding to the context of the discussion. ChatGPT has exceeded expectations in its abilities, leading to extensive considerations of its potential applications and misuse. In this work, we evaluate the influence of ChatGPT on university education, with a primary focus on computer security-oriented specialization. We gather data regarding the effectiveness and usability of this tool for completing exams, programming assignments, and term papers. We evaluate multiple levels of tool misuse, ranging from utilizing it as a consultant to simply copying its outputs. While we demonstrate how easily ChatGPT can be used to cheat, we also discuss the potentially significant benefits to the educational system. For instance, it might be used as an aid (assistant) to discuss problems encountered while solving an assignment or to speed up the learning process. Ultimately, we discuss how computer science higher education should adapt to tools like ChatGPT.
{"title":"On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree?","authors":"K. Malinka, Martin Peresíni, Anton Firc, Ondřej Hujňák, Filip Janus","doi":"10.1145/3587102.3588827","DOIUrl":"https://doi.org/10.1145/3587102.3588827","url":null,"abstract":"In late 2022, OpenAI released a new version of ChatGPT, a sophisticated natural language processing system capable of holding natural conversations while preserving and responding to the context of the discussion. ChatGPT has exceeded expectations in its abilities, leading to extensive considerations of its potential applications and misuse. In this work, we evaluate the influence of ChatGPT on university education, with a primary focus on computer security-oriented specialization. We gather data regarding the effectiveness and usability of this tool for completing exams, programming assignments, and term papers. We evaluate multiple levels of tool misuse, ranging from utilizing it as a consultant to simply copying its outputs. While we demonstrate how easily ChatGPT can be used to cheat, we also discuss the potentially significant benefits to the educational system. For instance, it might be used as an aid (assistant) to discuss problems encountered while solving an assignment or to speed up the learning process. Ultimately, we discuss how computer science higher education should adapt to tools like ChatGPT.","PeriodicalId":410890,"journal":{"name":"Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116861554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}