{"title":"TetrisOS and BreakoutOS: Assembly Language Projects for Computer Organization","authors":"M. Black","doi":"10.1145/3059009.3072976","DOIUrl":"https://doi.org/10.1145/3059009.3072976","url":null,"abstract":"TetrisOS and BreakoutOS are projects developed for a sophomore-level computer organization course. Each project teaches a wide range of x86 assembly language topics, including iteration, function calls, data storage, segmentation, communication with devices, and polling-based and interrupt-based I/O. They run \"bare-metal\" and avoid system calls. Each game can run natively on any PC and boot from a USB stick. The projects were tested on six classes of students over three semesters at two universities, and though rigorous, had a high completion rate.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"411 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122863247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining a Student-Generated Question Activity Using Random Topic Assignment","authors":"Paul Denny, E. Tempero, D. Garbett, Andrew Petersen","doi":"10.1145/3059009.3059033","DOIUrl":"https://doi.org/10.1145/3059009.3059033","url":null,"abstract":"Students and instructors expend significant effort, respectively, preparing to be examined and preparing students for exams. This paper investigates question authoring, where students create practice questions as a preparation activity prior to an exam, in an introductory programming context. The key contribution of this study as compared to previous work is an improvement to the design of the experiment. Students were randomly assigned the topics that their questions should target, removing a selection bias that has been a limitation of earlier work. We conduct a large-scale between-subjects experiment (n = 700) and find that students exhibit superior performance on exam questions that relate to the topics they were assigned when compared to those students preparing questions on other assigned topics.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126389153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved Mobile Robot Programming Performance through Real-time Program Assessment","authors":"Rémy Siegfried, Severin Klingler, M. Gross, R. Sumner, F. Mondada, Stéphane Magnenat","doi":"10.1145/3059009.3059044","DOIUrl":"https://doi.org/10.1145/3059009.3059044","url":null,"abstract":"The strong interest children show for mobile robots makes these devices potentially powerful to teach programming. Moreover, the tangibility of physical objects and the sociability of interacting with them are added benefits. A key skill that novices in programming have to acquire is the ability to mentally trace program execution. However, because of their embodied and real-time nature, robots make the mental tracing of program execution difficult. To address this difficulty, in this paper we propose an automatic program evaluation framework based on a robot simulator. We describe a real-time implementation providing feedback and gamified hints to students. In a user study, we demonstrate that our hint system increases the percentage of students writing correct programs from 50% to 96%, and decreases the average time to write a correct program by 30%. However, we could not show any correlation between the use of the system and the performance of students on a questionnaire testing concept acquisition. This suggests that programming skills and concept understanding are different abilities. Overall, the clear performance gain shows the value of our approach for programming education using robots.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124131671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hybrid Open/Closed Lab for CS 1","authors":"T. Urness","doi":"10.1145/3059009.3059014","DOIUrl":"https://doi.org/10.1145/3059009.3059014","url":null,"abstract":"In this paper we introduce hybrid labs, an alternative to open or closed labs for CS 1, in which a set of written instructions, demonstration of techniques, and code examples are provided to students in lieu of a lecture. The hybrid lab also consists of several challenges which require students to write code or answer questions based off the concepts introduced in the document. Students are presented with the lab two days prior to a class period and are given an option of submitting solutions to the challenges on their own time (similar to an open lab) or attending the class in which an instructor is available to provide additional help as needed (similar to a closed lab). We compare a section of CS 1 that utilized a combination of hybrid labs and lectures against a section that utilized only lectures. We found no statistical significance between the abilities of the students of the two sections, but surveys show that students found the hybrid labs to be more engaging and preferred the hybrid labs over lectures as means of instruction. Furthermore, instructors found that the hybrid labs allowed for more tailored, individualized instruction for a variety of student abilities.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128634377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nailing the TA Interview: Using a Rubric to Hire Teaching Assistants","authors":"Dan Leyzberg, Jérémie O. Lumbroso, Christopher Moretti","doi":"10.1145/3059009.3059057","DOIUrl":"https://doi.org/10.1145/3059009.3059057","url":null,"abstract":"Where would we be without them? Teaching assistants (TAs) make it possible for us to deliver high-quality large-scale computer science courses with relatively few faculty. Though their responsibilities vary by institution, TAs often play a crucial role in student learning. The use of teaching assistants in computer science courses is a common and longstanding practice and, yet, little has been published about how to choose the best TAs among those interested in the job. This paper describes the development of an interview rubric in use by faculty teaching a large introductory computer science course to score applicant responses in a formal in-person 30-minute interview. We describe the motivation behind developing such a rubric, the initial development process, its refinement based on feedback provided by students about their TAs, and the preliminary results of implementing this hiring system.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125607447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Grading and Feedback using Program Repair for Introductory Programming Courses","authors":"Sagar Parihar, Ziyaan Dadachanji, P. Singh, Rajdeep Das, Amey Karkare, Arnab Bhattacharya","doi":"10.1145/3059009.3059026","DOIUrl":"https://doi.org/10.1145/3059009.3059026","url":null,"abstract":"We present GradeIT, a system that combines the dual objectives of automated grading and program repairing for introductory programming courses (CS1). Syntax errors pose a significant challenge for testcase-based grading as it is difficult to differentiate between a submission that is almost correct and has some minor syntax errors and another submission that is completely off-the-mark. GradeIT also uses program repair to help in grading submissions that do not compile. This enables running testcases on submissions containing minor syntax errors, thereby awarding partial marks for these submissions (which, without repair, do not compile successfully and, hence, do not pass any testcase). Our experiments on 15613 submissions show that GradeIT results are comparable to manual grading by teaching assistants (TAs), and do not suffer from unintentional variability that happens when multiple TAs grade the same assignment. The repairs performed by GradeIT enabled successful compilation of 56% of the submissions having compilation errors, and resulted in an improvement in marks for 11% of these submissions.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132568357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparison of Online and Hybrid Professional Development for CS Principles Teachers","authors":"J. Rosato, Chery Lucarelli, C. Beckworth, R. Morelli","doi":"10.1145/3059009.3059060","DOIUrl":"https://doi.org/10.1145/3059009.3059060","url":null,"abstract":"The College of St. Scholastica, in partnership with Trinity College, adapted the Mobile Computer Science Principles (CSP) curriculum and professional development (PD) for delivery online to reach high school teachers unable to attend traditional face-to-face PD. The Mobile CSP curriculum and PD were designed to increase the number of schools offering computer science (CS) courses and to broaden the participation of traditionally underrepresented students such as females and minorities. A deliberate and intentional process was used that incorporates evidence-based practices for the online environment and professional development. A comparison of student and teacher results suggests that online PD can be a successful strategy for scaling computer science professional development. This paper will discuss not only these results but also challenges from the first year of the project and how they are being addressed in subsequent years. This report focuses primarily on the activities and accomplishments of the online PD, although data and accomplishments are provided for the Mobile CSP project as a whole where appropriate.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"10 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131575313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teaching Computational Thinking to 8-Year-Olds through ScratchJr","authors":"Hylke H. Faber, J. V. D. Ven, M. Wierdsma","doi":"10.1145/3059009.3072986","DOIUrl":"https://doi.org/10.1145/3059009.3072986","url":null,"abstract":"This synopsis presents the preliminary results of a larger study that aims to uncover design principles for teaching computational thinking to primary school children. This research focuses on teaching computational thinking to 8-year-olds through ScratchJr. By engaging in a cyclic process in which we create lesson materials and use evaluation data to improve them, we formulate design principles and provide teachers with sample course materials.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132778054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Build Your Future: Guiding Student Employability","authors":"B. Scharlau","doi":"10.1145/3059009.3072993","DOIUrl":"https://doi.org/10.1145/3059009.3072993","url":null,"abstract":"Students need to be told how to make the most of their time at university to best aid their future career. We need to remind them that the \"degree\" is more than the sum of the diverse classes they take as part of their degree curriculum. We developed a guide for students to contextualize their degree, and \"build their future\" to develop their university time to suit their career aspirations.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133464959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing the Paradox of Fun and Rigor in Learning Programming","authors":"Mohsen Dorodchi, Nasrin Dehbozorgi","doi":"10.1145/3059009.3073004","DOIUrl":"https://doi.org/10.1145/3059009.3073004","url":null,"abstract":"Course withdrawal and failure rates are known problems in introductory computer science programming courses (CS1). In turn, these problematic performance rates contribute to declines in retention rates between introductory programming courses and subsequent CS courses. In a bit of a twist, however, retention rates are also influenced by successful student performance. Some students frequently leave the computer science major due to unpleasant experiences and lack of satisfaction, despite earning good grades [1]. These competing retention factors create a paradox to provide a fun learning experience that makes students want to stay in the CS major while simultaneously emphasizing the rigor and discipline needed to advance in the CS major.","PeriodicalId":174429,"journal":{"name":"Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131146559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}