Engaging Databases for Data Systems Education
Toni Taipalus, Daphne Miedema, Efthimia Aivaloglou
DOI: 10.1145/3587102.3588804
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

Querying a relational database is typically taught in practice by using an exercise database. Such databases may be simple toy examples or elaborate, complex schemas that mimic the real world. It is not yet known which of these is preferable for students. Research has shown that while more complex exercise databases may hinder learning, they also benefit student engagement, as more complex databases are seen as more realistic. In our mixed-methods study, we explore which aspects of an exercise database contribute to student engagement in database education. To gain insight into what students deem engaging, we asked 56 students to design, implement, and reflect on engaging databases for database education. The results imply that students are engaged by highly diverse yet easily understood business domains, relatively simple database structures, and conceivable yet seemingly realistic amounts of data. The results challenge some previous findings while supporting approaches found in some textbooks, and they provide guidelines and inspiration for educators designing exercise databases for querying and for introducing relational database concepts.
Saving Bees with Computer Science: A Way to Spark Enthusiasm and Interest through Interdisciplinary Online Courses
Kairos M. Marquardt, Lucia Happe
DOI: 10.1145/3587102.3588835
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

In computer science education (CSEd), it is a well-known challenge to create learning environments in which everyone has equal opportunities to identify with the subject, get involved, and feel engaged. Especially for underrepresented groups, such as girls or students who are not computer enthusiasts, CSEd in its current state seems to lack sufficient opportunities. In this paper, we present a novel approach that uses interdisciplinary online courses in the context of bee mortality, and we discuss the potential of such courses to support diverse learning in CSEd. We report summarized findings from a one-year period comprising 16 workshops in which over 160 secondary school students (aged 10-16) participated in our online courses. Pre-test/post-test surveys were conducted to gain insight into students' perceptions and attitude changes. The results show the potential of such interdisciplinary approaches to spark interest in computer science (CS) and to foster positive feelings toward programming. Particularly striking are the results from differentiated analyses of students grouped by characteristics such as low initial self-efficacy, coding aversion, or low computer affinity: we found multiple significant positive effects of our courses on students in these groups. Our results clearly indicate the potential of interdisciplinary CSEd to address a more diverse audience, especially traditionally underrepresented groups.
Project-Based and Assignment-Based Courses: A Study of Piazza Engagement and Gender in Online Courses
Ryan Lenfant, Alice Wanner, J. R. Hott, Raymond Pettit
DOI: 10.1145/3587102.3588833
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

Project-based (PB) learning has become increasingly popular in computer science education, particularly as studies have found that this teaching style better prepares students for future careers and improves learning outcomes through increased student engagement. Online forum usage is one measurable component of engagement. To study the impact of PB learning on online forum engagement, we collected and examined Piazza usage data from seven online computer science courses at a higher education institution. We analyzed differences in online forum usage between PB and assignment-based (AB) learning, as well as differences between men and women in each course type. Specifically, this study builds upon and replicates a previous study on Piazza that measured student engagement, anonymity usage, and peer parity. We found that students in PB courses were less actively engaged in online forums than students in AB courses; they were less likely to ask and answer questions on Piazza but were more likely to view posts and to be logged on more days. Across both course types, students posted anonymously at a similar rate as a proportion of the total number of questions and answers and experienced a proportionally similar amount of peer parity. Our findings mirror prior results on gender engagement on Piazza. Across both PB and AB courses, women were more engaged, asked and viewed more questions, posted anonymously more frequently, and were less likely to experience peer parity than men.
GPT-3 vs Object Oriented Programming Assignments: An Experience Report
Bruno Pereira Cipriano, P. Alves
DOI: 10.1145/3587102.3588814
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

Recent studies show that AI-driven code generation tools, such as Large Language Models, are able to solve most of the problems usually presented in introductory programming classes. However, it is still unknown how they cope with Object Oriented Programming assignments, where students are asked to design and implement several interrelated classes (related by composition or inheritance) that follow a set of best practices. Since the majority of the exercises in these tools' training data are written in English, it is also unclear how well they handle exercises published in other languages. In this paper, we report our experience using GPT-3 to solve six real-world tasks used in an Object Oriented Programming course at a Portuguese university and written in Portuguese. Our observations, based on an objective evaluation of the code performed by an open-source automatic assessment tool, show that GPT-3 is able to interpret and handle direct functional requirements; however, it tends not to produce the best solution in terms of object-oriented design. We perform a qualitative analysis of GPT-3's output and gather a set of recommendations for computer science educators, since we expect students to use and abuse this tool in their academic work.
Variables in Practice. An Observation of Teaching Variables in Introductory Programming MOOCs
Vivian Van Der Werf, M. Zhang, Efthimia Aivaloglou, F. Hermans, M. Specht
DOI: 10.1145/3587102.3588857
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

Motivation. Many people interested in learning a programming language choose online courses to develop their skills. The concept of variables is one of the most foundational to learn, but it can be hard for novices to grasp. Variables are well researched, but to our knowledge, few empirical observations exist on how the concept is taught in practice. Objective. We investigate how the concept of variables, and the respective naming practices, are taught in introductory Massive Open Online Courses (MOOCs) that teach programming languages. Methods. We gathered qualitative data related to variables and their naming from 17 MOOCs. The collected data include connections to other programming concepts, formal definitions, analogies used, and names presented. Results. We found that variables are often taught in close connection to data types, expressions, and program execution, and are often explained using the "variable as a box" analogy. The latter finding reflects a stronger focus on storing values than on naming, memory, and flexibility. Furthermore, MOOCs are inconsistent in teaching naming practices. Conclusions. We recommend that teachers and researchers pay deliberate attention to the definitions and analogies used to explain the concept of variables, as well as to naming practices, and in particular to the meaning of variable names.
Seeing Program Output Improves Novice Learning Gains
Juho Leinonen, Arto Hellas, John Edwards
DOI: 10.1145/3587102.3588796
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

In this article, we report results from a randomized controlled trial in which novice programmers completed code mimicking exercises -- writing and modifying code shown to them -- designed to help them learn the basics of how variables work. Using a tailored code writing system with feedback on program correctness, we conducted a two-group design study in which only one of the groups could see the output of the program they wrote in addition to feedback on its correctness, while the other group saw only the correctness feedback. Learning gain was measured using a code-reading multiple-choice questionnaire administered as both a pretest and a posttest. Our data suggest that being able to see program output leads to higher learning gains for novices compared to seeing only feedback on the correctness of the code. For more experienced students, we observed benefits from code mimicking in both groups, without a strong distinction between seeing and not seeing the output. Based on our experiment, we recommend that environments used by novices for learning programming encourage -- or even require -- running the code before allowing the program to be submitted for assessment.
Analysis of Student Grades Before and After Adopting POGIL
Chris Mayfield, Sean Raleigh, Helen H. Hu, Clifton Kussmaul
DOI: 10.1145/3587102.3588782
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

From 2017 to 2022, our research project supported faculty at higher-education institutions in the United States in adopting POGIL in CS1 courses. The faculty participated in summer workshops and in mentoring groups during the academic year. At the end of each term, the faculty submitted a summary of their students' grades to the research team. This paper presents a Bayesian analysis of the student grades using a hierarchical ordinal logistic regression model. The data included the number of A, B, C, D, F, and W grades, disaggregated by gender and race, for all students enrolled in the course. In addition to each POGIL term, faculty submitted grades for one or two previous terms in which they taught the same course without POGIL. Most faculty observed an improvement in student pass rates in the second and third terms after they began teaching with POGIL. We present detailed visualizations of grade distributions from 25 faculty, along with the results of the statistical analysis. Our model suggests that CS1 faculty adopting POGIL can expect a modest increase in A grades and a modest decrease in DFW grades. However, the grades of Black, Hispanic, and Indigenous students decreased slightly, especially in the first term that faculty taught with POGIL. The results of this study demonstrate the importance of gender and racial analysis in evaluating pedagogical approaches.
Chat Overflow: Artificially Intelligent Models for Computing Education - renAIssance or apocAIypse?
Paul Denny, Brett A. Becker, Juho Leinonen, J. Prather
DOI: 10.1145/3587102.3588773
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

Recent breakthroughs in deep learning have led to the emergence of generative AI models that exhibit extraordinary performance at producing human-like outputs. Using only simple input prompts, it is possible to generate novel text, images, video, music, and source code, as well as tackle tasks such as answering questions and translating and summarising text. However, the potential for these models to impact computing education practice is only just beginning to be explored. For example, novices learning to code can now use free tools that automatically suggest solutions to programming exercises and assignments; yet these tools were not designed with novices in mind, and little to nothing is known about how they will impact learning. Furthermore, much attention has focused on the immediate challenges these models present, such as academic integrity concerns. It seems that even in the AI era, a pending apocalypse sells better than a promising renaissance. Generative AI will likely play an increasing role in people's lives in the reasonably foreseeable future. Model performance seems set to continue accelerating while novel uses and new possibilities multiply. Given this, we should devote just as much effort to identifying and exploiting new opportunities as we do to identifying and mitigating challenges. In this talk, we begin by discussing several concrete, research-backed opportunities for computing educators, many of which have already shown great promise in positively impacting current practice. We then discuss short- to medium-term possibilities in areas such as student recruitment and curricular change. Finally -- against our better judgement -- we speculate over the longer term, including rethinking the very fundamentals of how introductory and advanced computing courses are taught. In these discussions we suggest potential research questions and directions. Although making remotely accurate predictions in such a fast-changing landscape is foolhardy, we believe that now is the time to explore and embrace opportunities to make positive change in as many computing classrooms as possible.
"AI Teaches Itself": Exploring Young Learners' Perspectives on Artificial Intelligence for Instrument Development
Jessica Vandenberg, Bradford W. Mott
DOI: 10.1145/3587102.3588778
Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (2023-06-29)

Children encounter and use artificial intelligence (AI) regularly, but the depth of their understanding of AI is often limited. In service of growing an AI- and technology-literate K-12 population, it is important for young learners to engage in AI learning activities early and often. To foster the design of AI curricula, it is essential to understand what young children already know and how they feel about AI. The nascent field of AI-related self-report instrument development focuses largely on adult populations or on AI's use in specific contexts, such as medicine. There remains a critical need for an AI attitudinal survey for young learners (ages 9 to 11). Building upon extant survey development work in education and AI, we have designed a brief survey on students' self-efficacy for AI, interest in and motivation toward AI, and attitudes toward AI. We used cognitive interviewing processes to ensure the items in the survey were readable and understandable by young students. Preliminary findings indicate that young students have a mixed understanding of what AI is, what it can do, and how they feel about it. We discuss implications for researchers and practitioners and provide an overview of our continuing efforts to validate this instrument.
Extensive prior work has identified and described misconceptions held by novice programmers. Much of this prior work has involved at least some automatic detection of potential misconceptions using a variety of methods such as intercepting compiler error messages, pattern matching, and black-box testing. To the best of our knowledge, no independent and flexible tool for automatic detection of misconceptions is currently available to the research community, meaning that detection must be reimplemented from scratch for each new project that aims to understand or support novice programmers using automatic analysis. This is time-consuming work, particularly for misconceptions that require understanding of the context of a program beyond localised syntax patterns. In this paper, we introduce SIDE-lib, a standalone library for detecting symptoms of Python misconceptions. This library is made available with the goal of simplifying and speeding up research on Python misconceptions and the development of tools to support learning. We also describe example use cases for the library, including how we are using it in our ongoing research.
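The paper does not show SIDE-lib's API here, but the syntax-pattern approach it describes can be illustrated with a minimal, hypothetical sketch using Python's standard `ast` module: walking a program's syntax tree to flag a symptom of the classic novice misconception `x == 1 or 2`, where a bare constant operand of `or` is always truthy. The function name and the reported symptom label are illustrative assumptions, not part of SIDE-lib.

```python
import ast

# Hypothetical sketch (NOT SIDE-lib's actual API): flag occurrences of
# the novice misconception pattern "x == 1 or 2", where an operand of
# `or` is a bare constant and the condition is therefore always truthy.
def find_or_constant_symptoms(source):
    symptoms = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp) and isinstance(node.op, ast.Or):
            for operand in node.values:
                # A bare constant like `2` suggests the student
                # meant `x == 2` rather than a truthiness test.
                if isinstance(operand, ast.Constant):
                    symptoms.append((node.lineno, "constant operand in `or`"))
    return symptoms

print(find_or_constant_symptoms("if x == 1 or 2:\n    pass\n"))
```

A real detector would of course need contextual analysis beyond such localised patterns, which is precisely the time-consuming work the library aims to spare researchers from reimplementing.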
Evans, A., Wang, Z., Liu, J., & Zheng, M. (2023, June 29). SIDE-lib: A Library for Detecting Symptoms of Python Programming Misconceptions. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1. https://doi.org/10.1145/3587102.3588838