Making Hay from Wheats: A Classsourcing Method to Identify Misconceptions

Siddharth Prasad, B. Greenman, Tim Nelson, J. Wrenn, S. Krishnamurthi

Proceedings of the 22nd Koli Calling International Conference on Computing Education Research
Published: 2022-11-17 · DOI: 10.1145/3564721.3564726 · Citations: 3
Novice programmers often begin coding with a poor understanding of the task at hand and end up solving the wrong problem. A promising way to put novices on the right track is to have them write examples first, before coding, and provide them with feedback by evaluating the examples on a suite of chaff implementations that are flawed in subtle ways. This feedback, however, is only as good as the chaffs themselves. Instructors must anticipate misconceptions and avoid expert blind spots to make a useful suite of chaffs. This paper conjectures that novices’ incorrect examples are a rich source of insight and presents a classsourcing method for identifying misconceptions. First off, we identify incorrect examples using known, correct wheat implementations. The method is to manually cluster incorrect examples by semantic similarity, summarize each cluster with a potential misconception, and use the analysis to generate chaffs—thereby deriving a useful by-product (hay) from examples that fail the wheats. Classsourced misconceptions have revealed expert blind spots and drawn attention to chaffs that seldom arose in practice, one of which had an undiscovered bug.
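The workflow the abstract describes can be sketched with a toy assignment. This is a minimal illustration under assumed names (`wheat`, `chaff`, the median task, the example suite), not the authors' actual artifact:

```python
# Illustrative sketch of the wheat/chaff workflow on a toy task:
# "return the median of three numbers". All names here are invented
# for illustration; the paper's tooling and tasks differ.

def wheat(a, b, c):
    """A known-correct reference implementation (a "wheat")."""
    return sorted([a, b, c])[1]

def chaff(a, b, c):
    """A subtly flawed implementation (a "chaff"):
    computes the mean instead of the median."""
    return (a + b + c) / 3

# Student-written examples: (inputs, expected output) pairs.
examples = [
    ((1, 2, 4), 2),  # reflects a correct understanding of median
    ((1, 2, 6), 3),  # reflects a misconception: median confused with mean
]

def passes(impl, example):
    args, expected = example
    return impl(*args) == expected

# Step 1: examples that fail the wheat are incorrect -- the "hay".
# These are the candidates for manual clustering by misconception.
hay = [ex for ex in examples if not passes(wheat, ex)]

# Step 2: a useful chaff is one that some correct example catches,
# i.e., the chaff disagrees with at least one wheat-passing example.
correct = [ex for ex in examples if passes(wheat, ex)]
chaff_caught = any(not passes(chaff, ex) for ex in correct)
```

Here the second example fails the wheat (it expects the mean, 3, rather than the median, 2) and lands in `hay`; notably, it *passes* the mean-computing chaff, which is exactly the signal the method exploits — a cluster of such examples suggests a median-vs-mean misconception worth encoding as a chaff.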