"PeerWise: exploring conflicting efficacy studies"
Paul Denny, Brian F. Hanks, B. Simon, S. Bagley. Proceedings of the seventh international workshop on Computing education research. Published 2011-08-08. DOI: https://doi.org/10.1145/2016911.2016924

PeerWise (PW) is an online tool that allows students in a course to collaborate and learn by creating, sharing, answering and discussing multiple-choice questions (MCQs). Previous studies of PW at the introductory level have shown that students in computing courses like it, and report statistically significant learning gains in courses taught by the investigators at different institutions. However, we recently conducted three quasi-experimental studies of PW use in upper-division computing courses in the U.S. and failed to replicate these positive results. In this paper we consider various factors that may impact the effectiveness of PW, including instructor engagement, usage requirements and subject-matter issues. We also report several positive results from other STEM courses at the same institution, discuss methodological issues pertaining to our recent studies and propose approaches for further investigation.
"What students (should) know about object oriented programming"
Peter Hubwieser, A. Mühling. Proceedings of the seventh international workshop on Computing education research. Published 2011-08-08. DOI: https://doi.org/10.1145/2016911.2016929

In order to explore and validate suitable methods for investigating learning processes, we are currently conducting a case study exploring the mental models of novice students in the field of object-oriented modeling and programming. After abstracting and systematizing the information that was presented to the students of our introductory CS1 course for non-majors, we asked them to draw concept maps at four points in time. Additionally, we conducted a short midterm exam, in which the students had to implement some of the most important concepts, as well as a regular final exam. We found that learning progress can be observed in detail by evaluating the concept maps.
"Predicting at-risk novice Java programmers through the analysis of online protocols"
Emily S. Tabanao, M. Rodrigo, Matthew C. Jadud. Proceedings of the seventh international workshop on Computing education research. Published 2011-08-08. DOI: https://doi.org/10.1145/2016911.2016930

In this study, we attempted to quantify indicators of novice programmer progress in the task of writing programs, and we evaluated the use of these indicators for identifying academically at-risk students. Over the course of nine weeks, students completed five different graded programming exercises in a computer lab. Using an instrumented version of BlueJ, an integrated development environment for Java, we collected novice compilations and explored the errors novices encountered, the locations of these errors, and the frequency with which novices compiled their programs. We identified which frequently encountered errors and which compilation behaviors were characteristic of at-risk students. Based on these findings, we developed linear regression models that allowed prediction of students' scores on a midterm exam. However, the derived models could not accurately predict the at-risk students. Although our goal of identifying at-risk students was not attained, we have gained insights regarding the compilation behavior of our students, which may help us identify students who are in need of intervention.
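The regression approach this study describes (predicting a midterm score from compilation-behavior indicators) can be sketched in miniature as follows. This is an illustrative reconstruction, not the authors' model: the single predictor (the fraction of a student's compilations that ended in an error) and all data values are invented.

```python
# Hypothetical sketch of the study's approach: fit a linear regression that
# predicts midterm scores from one compilation-behavior indicator.
# The predictor choice and the data below are invented for illustration.

def fit_simple_ols(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx              # slope
    a = mean_y - b * mean_x    # intercept
    return a, b

# Invented per-student data: fraction of error-producing compilations
# versus midterm score (0-100).
error_fraction = [0.10, 0.25, 0.40, 0.55, 0.70]
midterm_score  = [92,   81,   70,   58,   45]

a, b = fit_simple_ols(error_fraction, midterm_score)
predicted = [a + b * x for x in error_fraction]
```

On this toy data the slope is negative, i.e. students whose compilations fail more often are predicted to score lower; the study's finding was that such models, while fittable, did not reliably single out the at-risk students.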
"CS majors' self-efficacy perceptions in CS1: results in light of social cognitive theory"
P. Kinnunen, B. Simon. Proceedings of the seventh international workshop on Computing education research. Published 2011-08-08. DOI: https://doi.org/10.1145/2016911.2016917

This paper discusses the results of a Grounded Theory study of students' experiences with introductory programming assignments in the light of social cognitive theory. In previous studies we found that CS majors experienced the process of doing CS1 programming assignments in different ways, but they universally made programming-related self-efficacy assessments along the way. Notably, students may reflect negatively on their self-efficacy after successfully completing an assignment, or positively after struggling with an assignment. CS majors tended to use comparisons with themselves and with classmates as a basis for their self-efficacy perceptions. This paper takes a deeper look at these results through the lens of Bandura's self-efficacy theory, with the goal of detailing viable pedagogical interventions to support students' introductory programming course experiences.
"Proceedings of the seventh international workshop on Computing education research"
M. Caspersen, M. Clancy, Kathryn E. Sanders (eds.). Published 2010-08-09. DOI: https://doi.org/10.1145/2016911

We welcome you to Aarhus and to the Sixth International Computing Education Research Workshop, ICER 2010, sponsored by the ACM Special Interest Group in Computer Science Education (SIGCSE). This year's workshop continues its tradition of being the premier forum for presentation of contributions to the computing education research discipline.

The call for papers attracted 38 submissions. All papers were double-blind peer-reviewed by members of the international program committee. After the reviewing, 12 papers (32%) were accepted for inclusion in the conference, written by authors across six countries: Australia, Finland, Germany, Israel, the United Kingdom, and the United States of America. The papers span a wide variety of topics, including tools and tool use; conceptions, preconceptions, and misconceptions; attitudes; collaborative learning; research categorization; teacher adaptation to new paradigms; and broad-scale adoption of computing innovations.

The program also includes a keynote address by Mordechai (Moti) Ben-Ari from the Weizmann Institute of Science, Israel, outlining the non-myths about programming and what this knowledge might offer to computing education researchers and course designers.