Automated Evaluation of the Structure of Student-Written Unit Tests

L. Baumstark
{"title":"学生编写的单元测试结构的自动评估","authors":"L. Baumstark","doi":"10.1145/3564746.3587002","DOIUrl":null,"url":null,"abstract":"As instructors in courses that introduce unit testing, we want to provide autograded practice problems that guide students in how they write their unit test methods, similar to how existing autograder tools can provide students with traditional coding problems. For courses taught in Java, these existing autograder systems often use instructor-supplied JUnit tests to evaluate student submissions of non-unit-test code; our approach is designed to integrate into these JUnit-based systems and expand their capabilities to include practice coding unit tests. We do this by writing special instructor-provided unit tests that evaluate students' submitted unit tests; we call these \"meta-tests\" to distinguish them from the students' work. This paper describes the use of meta-tests, the technology that facilitates them, and strategies for writing them. Previous work in this space focused on using coverage metrics (e.g., lines-of-code covered or numbers of bugs caught) that evaluate the aggregate performance of suites of unit tests. Our approach instead examines the internal structure of individual unit test methods and provides feedback on whether, for example, the test method creates the correct object(s), calls the correct method-under-test, and/or calls appropriate assertions.","PeriodicalId":322431,"journal":{"name":"Proceedings of the 2023 ACM Southeast Conference","volume":"100 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated Evaluation of the Structure of Student-Written Unit Tests\",\"authors\":\"L. Baumstark\",\"doi\":\"10.1145/3564746.3587002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As instructors in courses that introduce unit testing, we want to provide autograded practice problems that guide students in how they write their unit test methods, similar to how existing autograder tools can provide students with traditional coding problems. For courses taught in Java, these existing autograder systems often use instructor-supplied JUnit tests to evaluate student submissions of non-unit-test code; our approach is designed to integrate into these JUnit-based systems and expand their capabilities to include practice coding unit tests. We do this by writing special instructor-provided unit tests that evaluate students' submitted unit tests; we call these \\\"meta-tests\\\" to distinguish them from the students' work. This paper describes the use of meta-tests, the technology that facilitates them, and strategies for writing them. Previous work in this space focused on using coverage metrics (e.g., lines-of-code covered or numbers of bugs caught) that evaluate the aggregate performance of suites of unit tests. 
Our approach instead examines the internal structure of individual unit test methods and provides feedback on whether, for example, the test method creates the correct object(s), calls the correct method-under-test, and/or calls appropriate assertions.\",\"PeriodicalId\":322431,\"journal\":{\"name\":\"Proceedings of the 2023 ACM Southeast Conference\",\"volume\":\"100 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 ACM Southeast Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3564746.3587002\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 ACM Southeast Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3564746.3587002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

As instructors in courses that introduce unit testing, we want to provide autograded practice problems that guide students in how they write their unit test methods, similar to how existing autograder tools can provide students with traditional coding problems. For courses taught in Java, these existing autograder systems often use instructor-supplied JUnit tests to evaluate student submissions of non-unit-test code; our approach is designed to integrate into these JUnit-based systems and expand their capabilities to include practice coding unit tests. We do this by writing special instructor-provided unit tests that evaluate students' submitted unit tests; we call these "meta-tests" to distinguish them from the students' work. This paper describes the use of meta-tests, the technology that facilitates them, and strategies for writing them. Previous work in this space focused on using coverage metrics (e.g., lines-of-code covered or numbers of bugs caught) that evaluate the aggregate performance of suites of unit tests. Our approach instead examines the internal structure of individual unit test methods and provides feedback on whether, for example, the test method creates the correct object(s), calls the correct method-under-test, and/or calls appropriate assertions.
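The abstract itself contains no code, so the following is a minimal illustrative sketch of the meta-test idea, not the paper's actual implementation. It assumes the autograder can substitute an instrumented "spy" version of the class under test on the student's classpath; all names here (CounterSpy, StudentCounterTest, MetaTest) are hypothetical. The spy records how the student's test used it, and the meta-test runs the student's test and then asserts on those records, which matches the abstract's examples of checking object creation and method-under-test calls.

```java
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Hypothetical instrumented stand-in for the class under test.
// It behaves like the real class but records how it was used.
class CounterSpy {
    static int constructed = 0;     // instances the student's test created
    static int incrementCalls = 0;  // calls to the method under test
    private int value = 0;

    CounterSpy() { constructed++; }
    void increment() { incrementCalls++; value++; }
    int getValue() { return value; }

    static void reset() { constructed = 0; incrementCalls = 0; }
}

// Stand-in for a student's submitted unit test. It is written against the
// spy only so this sketch is self-contained; in a real autograder the
// student writes against the ordinary class and the spy is swapped in.
class StudentCounterTest {
    @Test
    void testIncrement() {
        CounterSpy c = new CounterSpy();
        c.increment();
        assertEquals(1, c.getValue());
    }
}

// The meta-test: execute the student's test, then grade its internal
// structure via the usage the spy recorded.
class MetaTest {
    @Test
    void studentTestExercisesIncrement() {
        CounterSpy.reset();
        new StudentCounterTest().testIncrement(); // run the student's test

        assertTrue(CounterSpy.constructed >= 1,
                "Your test should create a Counter object");
        assertTrue(CounterSpy.incrementCalls >= 1,
                "Your test should call increment() on that object");
    }
}
```

The abstract's third criterion, whether the student's test "calls appropriate assertions", is harder to observe from a spy alone; one plausible strategy (again, not necessarily the paper's) is for the meta-test to rerun the student's test against a deliberately broken implementation and assert that it fails.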