Exploring programming assessment instruments: a classification scheme for examination questions
Judithe Sheard, Simon, A. Carbone, Donald D. Chinn, M. Laakso, T. Clear, Michael de Raadt, Daryl J. D'Souza, James Harland, R. Lister, A. Philpott, G. Warburton
DOI: 10.1145/2016911.2016920

This paper describes the development of a classification scheme that can be used to investigate the characteristics of introductory programming examinations. We describe the process of developing the scheme, explain its categories, and present a taste of the results of a pilot analysis of a set of CS1 exam papers. This study is part of a project that aims to investigate the nature and composition of formal examination instruments used in the summative assessment of introductory programming students, and the pedagogical intentions of the educators who construct these instruments.

ScriptABLE: supporting informal learning with cases
Brian Dorn
DOI: 10.1145/2016911.2016927

Informal learning resources have the potential to reach millions of currently underserved learners teaching themselves about the basics of computing using the Web, example code, peer networks, books, and other materials. In this paper, we investigate the effectiveness of case-based learning aids (CBLAs) as a resource to scaffold informal education in scripting for web and graphic design. We present the design of a new CBLA called ScriptABLE and outline initial evaluation results with respect to its ability to foster both programming ability and more expert understanding of computing concepts.

How CS majors select a specialization
Michael Hewner, M. Guzdial
DOI: 10.1145/2016911.2016916

As CS becomes a larger field, many undergraduate programs are giving students greater freedom in the classes that make up their degree. This study looks at the process by which students within the CS major choose to specialize in some area. We interviewed student advisors, graduated CS students, and students currently enrolled in the undergraduate program about their view of CS and how they make decisions. The interviews were analyzed using a grounded theory approach. The analysis identifies four forces that affect student decision making. First, students often use how much they enjoy individual classes as a sign of how well they fit a particular specialization. Second, students often do not research their options, so they select specializations based on misconceptions. Third, students often rely on the curriculum to protect them against poor educational choices. Fourth, students usually do not have a personal vision of what they hope to do with a Computer Science degree.

The scientific approach to teaching: research as a basis for course design
E. Mazur
DOI: 10.1145/2016911.2016913

Discussions of teaching - even some publications - abound with anecdotal evidence. Our intuition often supplants a systematic, scientific approach to finding out what works and what doesn't work. Yet, research is increasingly demonstrating that our gut feelings about teaching are often wrong. In this talk I will discuss some research my group has done on gender issues in science courses and on the effectiveness of classroom demonstrations.

Computing as the 4th "R": a general education approach to computing education
Q. Cutts, Sarah Esper, B. Simon
DOI: 10.1145/2016911.2016938

Computing and computation are increasingly pervading our lives, careers, and societies - a change driving interest in computing education at the secondary level. But what should define a "general education" computing course at this level? That is, what would you want every person to know, assuming they never take another computing course? We identify possible outcomes for such a course through the experience of designing and implementing a general education university course utilizing best-practice pedagogies. Though we nominally taught programming, the design of the course led students to report gaining core, transferable skills and the confidence to employ them in their future. We discuss how various aspects of the course likely contributed to these gains. Finally, we encourage the community to embrace the challenge of teaching general education computing in contrast to and in conjunction with existing curricula designed primarily to interest students in the field.

Students' perceptions of the differences between formal and informal learning
Jonas Boustedt, Anna Eckerdal, R. McCartney, Kate Sanders, L. Thomas, Carol Zander
DOI: 10.1145/2016911.2016926

Research has shown that most learning in the workplace takes place outside of formal training and, given the swiftly changing nature of the field, computer science graduates, more than most workers, need to be able to learn computing topics outside of organized classes. In this paper we discuss students' perceptions of the differences between formal and informal learning of computing topics, based on three datasets: essays collected from a technical writing course at a single university; the results of a brainstorming exercise conducted in the same course; and semi-structured interviews conducted at six institutions in three countries. The students report strengths and weaknesses in informal learning. On the one hand, they are motivated, can choose their level of learning, can be more flexible about how they learn, and often retain the material better. On the other hand, they perceive that they may miss important aspects of a topic, learn in an ad hoc way, and have difficulty assessing their learning.

Explaining program code: giving students the answer helps - but only just
Simon, S. Snowdon
DOI: 10.1145/2016911.2016931

Of the students who pass introductory programming courses, many appear unable to explain the purpose of simple code fragments such as a loop to find the greatest element in an array. It has never been established whether this is because the students are unable to determine the purpose of the code or because they can determine the purpose but lack the ability to express that purpose. This study explores that question by comparing the answers of students in several offerings of an introductory programming course. In the earlier offerings students were asked to express the purpose in their own words; in the later offerings they were asked to choose the purpose from several options in a multiple-choice question. At an overseas campus, students performed significantly better on the multiple-choice version of the question; at a domestic campus, performance was better, but not significantly so. Many students were unable to identify the correct purpose of small fragments of code when given that purpose and some alternatives. The conclusion is that students' failure to perform well in code-explaining questions is not because they cannot express the purpose of the code, but because they are truly unable to determine the purpose of the code - or even to recognize it from a short list.

Personifying programming tool feedback improves novice programmers' learning
M. Lee, Amy J. Ko
DOI: 10.1145/2016911.2016934

Many novice programmers view programming tools as all-knowing, infallible authorities about what is right and wrong about code. This misconception is particularly detrimental to beginners, who may view the cold, terse, and often judgmental errors from compilers as a sign of personal failure. It is possible, however, that attributing this failure to the computer, rather than the learner, may improve learners' motivation to program. To test this hypothesis, we present Gidget, a game whose eponymous robot protagonist is cast as a fallible character that blames itself for not being able to correctly write code to complete its missions. Players learn programming by working with Gidget to debug its problematic code. In a two-condition controlled experiment, we manipulated Gidget's level of personification in its communication style, sound effects, and image. We tested our game with 116 self-described novice programmers recruited on Amazon's Mechanical Turk and found that, when given the option to quit at any time, those in the experimental condition (with a personable Gidget) completed significantly more levels in a similar amount of time. Participants in the control and experimental groups played the game for an average of 39.4 minutes (SD=34.3) and 50.1 minutes (SD=42.6) respectively. These findings suggest that how programming tool feedback is portrayed to learners can have a significant impact on motivation to program and learning success.

Student views on learning concurrency
J. Moström
DOI: 10.1145/2016911.2016941

We interviewed eight students to better understand what kinds of difficulties students have when learning concurrent programming. According to these interviews, students do not consider concurrency to be radically more difficult than other Computer Science subjects - a finding that contrasts with many research papers. Instead, the students found concurrency to be an interesting and fun subject that they considered approximately equal in difficulty to other subjects. For some, the added complexity acted only as an inspiring challenge.

Encouraging students to think of code as an algorithmic symphony: the effect of feedback regarding algorithmic abstraction during code production
Leigh Ann Sudol-DeLyser
DOI: 10.1145/2016911.2016944

Students' ability to reason and abstract about code is an important factor in the development of their expertise in producing code. The literature has primarily focused on the correlation between measures of students' ability to abstract about code and other skills. The studies and proposed work in my thesis take a mixed-methods approach to understanding the impact on the learner of feedback regarding algorithmic abstraction and of applying contextual scaffolding to problems.