{"title":"学生编写的单元测试结构的自动评估","authors":"L. Baumstark","doi":"10.1145/3564746.3587002","DOIUrl":null,"url":null,"abstract":"As instructors in courses that introduce unit testing, we want to provide autograded practice problems that guide students in how they write their unit test methods, similar to how existing autograder tools can provide students with traditional coding problems. For courses taught in Java, these existing autograder systems often use instructor-supplied JUnit tests to evaluate student submissions of non-unit-test code; our approach is designed to integrate into these JUnit-based systems and expand their capabilities to include practice coding unit tests. We do this by writing special instructor-provided unit tests that evaluate students' submitted unit tests; we call these \"meta-tests\" to distinguish them from the students' work. This paper describes the use of meta-tests, the technology that facilitates them, and strategies for writing them. Previous work in this space focused on using coverage metrics (e.g., lines-of-code covered or numbers of bugs caught) that evaluate the aggregate performance of suites of unit tests. Our approach instead examines the internal structure of individual unit test methods and provides feedback on whether, for example, the test method creates the correct object(s), calls the correct method-under-test, and/or calls appropriate assertions.","PeriodicalId":322431,"journal":{"name":"Proceedings of the 2023 ACM Southeast Conference","volume":"100 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated Evaluation of the Structure of Student-Written Unit Tests\",\"authors\":\"L. Baumstark\",\"doi\":\"10.1145/3564746.3587002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As instructors in courses that introduce unit testing, we want to provide autograded practice problems that guide students in how they write their unit test methods, similar to how existing autograder tools can provide students with traditional coding problems. For courses taught in Java, these existing autograder systems often use instructor-supplied JUnit tests to evaluate student submissions of non-unit-test code; our approach is designed to integrate into these JUnit-based systems and expand their capabilities to include practice coding unit tests. We do this by writing special instructor-provided unit tests that evaluate students' submitted unit tests; we call these \\\"meta-tests\\\" to distinguish them from the students' work. This paper describes the use of meta-tests, the technology that facilitates them, and strategies for writing them. Previous work in this space focused on using coverage metrics (e.g., lines-of-code covered or numbers of bugs caught) that evaluate the aggregate performance of suites of unit tests. 
Our approach instead examines the internal structure of individual unit test methods and provides feedback on whether, for example, the test method creates the correct object(s), calls the correct method-under-test, and/or calls appropriate assertions.\",\"PeriodicalId\":322431,\"journal\":{\"name\":\"Proceedings of the 2023 ACM Southeast Conference\",\"volume\":\"100 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 ACM Southeast Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3564746.3587002\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 ACM Southeast Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3564746.3587002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Automated Evaluation of the Structure of Student-Written Unit Tests
As instructors in courses that introduce unit testing, we want to provide autograded practice problems that guide students in how to write their unit test methods, much as existing autograder tools already do for traditional coding problems. For courses taught in Java, these existing autograder systems often use instructor-supplied JUnit tests to evaluate student submissions of non-unit-test code; our approach integrates into these JUnit-based systems and extends their capabilities to include practice problems in writing unit tests. We do this with instructor-provided unit tests that evaluate the unit tests students submit; we call these "meta-tests" to distinguish them from the students' work. This paper describes the use of meta-tests, the technology that facilitates them, and strategies for writing them. Previous work in this space focused on coverage metrics (e.g., lines of code covered or number of bugs caught), which evaluate the aggregate performance of whole suites of unit tests. Our approach instead examines the internal structure of individual unit test methods and provides feedback on whether, for example, the test method creates the correct object(s), calls the correct method-under-test, and/or calls appropriate assertions.
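To make the idea concrete, the following is a minimal, hypothetical sketch of what a meta-test could look like in JUnit 5. It is not the paper's actual tooling: the Counter practice problem, the instrumentation via static counters, the injectBug flag, and all class and method names are illustrative assumptions. The sketch checks the three kinds of structural feedback named above in a simple way: that the student's test constructs the object, calls the method-under-test, and asserts on its result (detected here by injecting a bug and requiring the student's test to fail).

// Hypothetical sketch only; names and mechanism are assumptions, not the paper's implementation.
import org.junit.jupiter.api.Test;
import java.lang.reflect.Method;
import static org.junit.jupiter.api.Assertions.*;

// Instructor-provided class-under-test, instrumented to record how the
// student's test exercises it.
class Counter {
    static int constructed = 0;        // how many Counter objects the student's test created
    static int incrementCalls = 0;     // how many times increment() was called
    static boolean injectBug = false;  // enable to check that the student's assertions catch a bug

    private int value = 0;

    Counter() { constructed++; }

    int increment() {
        incrementCalls++;
        value += injectBug ? 2 : 1;    // deliberately wrong when injectBug is true
        return value;
    }
}

// Stand-in for the student's submission (normally compiled from their upload).
class StudentTests {
    @Test
    void incrementReturnsOne() {
        Counter c = new Counter();
        assertEquals(1, c.increment());
    }
}

// The instructor's meta-tests: they run the student's test method reflectively
// and give feedback on its internal structure.
class MetaTests {
    private void runStudentTest() throws Exception {
        Method m = StudentTests.class.getDeclaredMethod("incrementReturnsOne");
        m.setAccessible(true);
        m.invoke(new StudentTests());
    }

    @Test
    void studentTestCreatesObjectAndCallsMethodUnderTest() throws Exception {
        Counter.constructed = 0;
        Counter.incrementCalls = 0;
        Counter.injectBug = false;
        runStudentTest();   // should pass against the correct Counter
        assertTrue(Counter.constructed >= 1, "Your test never constructed a Counter object.");
        assertTrue(Counter.incrementCalls >= 1, "Your test never called increment().");
    }

    @Test
    void studentTestAssertsOnTheResult() {
        Counter.injectBug = true;   // break the class-under-test
        Throwable failure = assertThrows(Throwable.class, this::runStudentTest);
        Counter.injectBug = false;
        // The reflective call wraps the student's AssertionError in an InvocationTargetException.
        assertTrue(failure.getCause() instanceof AssertionError,
                "Your test did not assert on the value returned by increment().");
    }
}

In a real autograder, the meta-tests would run inside the existing JUnit-based grading pipeline against the student's compiled submission rather than a hard-coded StudentTests class; the instrumentation strategy shown here is just one possible way to observe a test method's internal behavior.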