An Evaluation of General-Purpose Static Analysis Tools on C/C++ Test Code

Jean Malm, Eduard Paul Enoiu, Abu Naser Masud, Björn Lisper, Zoltán Porkoláb, Sigrid Eldh
{"title":"An Evaluation of General-Purpose Static Analysis Tools on C/C++ Test Code","authors":"Jean Malm, Eduard Paul Enoiu, Masud Abu Naser, B. Lisper, Z. Porkoláb, Sigrid Eldh","doi":"10.1109/SEAA56994.2022.00029","DOIUrl":null,"url":null,"abstract":"In recent years, maintaining test code quality has gained more attention due to increased automation and the growing focus on issues caused during this process.Test code may become long and complex, but maintaining its quality is mostly a manual process, that may not scale in big software projects. Moreover, bugs in test code may give a false impression about the correctness or performance of the production code. Static program analysis (SPA) tools are being used to maintain the quality of software projects nowadays. However, these tools are either not used to analyse test code, or any analysis results on the test code are suppressed.This is especially true since SPA tools are not tailored to generate precise warnings on test code. This paper investigates the use of SPA on test code by employing three state-of-the-art general-purpose static analysers on a curated set of projects used in the industry and a random sample of relatively popular and large open-source C/C++ projects. We have found a number of built-in code checking modules that can detect quality issues in the test code. However, these checkers need some tailoring to obtain relevant results. We observed design choices in test frameworks that raise noisy warnings in analysers and propose a set of augmentations to the checkers or the analysis framework to obtain precise warnings from static analysers.","PeriodicalId":269970,"journal":{"name":"2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SEAA56994.2022.00029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In recent years, maintaining test code quality has gained more attention due to increased automation and the growing focus on issues caused during this process. Test code may become long and complex, but maintaining its quality is mostly a manual process that may not scale in big software projects. Moreover, bugs in test code may give a false impression about the correctness or performance of the production code. Static program analysis (SPA) tools are nowadays used to maintain the quality of software projects. However, these tools are either not used to analyse test code, or any analysis results on the test code are suppressed. This is especially true since SPA tools are not tailored to generate precise warnings on test code. This paper investigates the use of SPA on test code by employing three state-of-the-art general-purpose static analysers on a curated set of projects used in industry and a random sample of relatively popular and large open-source C/C++ projects. We have found a number of built-in code checking modules that can detect quality issues in test code. However, these checkers need some tailoring to obtain relevant results. We observed design choices in test frameworks that raise noisy warnings in analysers and propose a set of augmentations to the checkers or the analysis framework to obtain precise warnings from static analysers.
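As an illustration of the kind of test-framework design choice that can raise noisy warnings, consider the following sketch (a hedged example, not taken from the paper: GoogleTest is assumed as a representative framework, and Widget/MakeWidget are hypothetical names). A non-fatal assertion records a failure but does not stop the test, so a general-purpose analyser may flag the subsequent dereference as a possible null-pointer use even though the test author considers that path handled.

// Illustrative sketch only: GoogleTest is assumed as the test framework,
// and Widget/MakeWidget are hypothetical names, not taken from the paper.
#include <gtest/gtest.h>

struct Widget {
    int value() const { return 42; }
};

// Hypothetical factory under test; imagine it may return nullptr on failure.
Widget* MakeWidget() { return new Widget(); }

TEST(WidgetTest, ValueIsInitialised) {
    Widget* obj = MakeWidget();
    // Non-fatal assertion: on failure it records an error, but the test body continues.
    EXPECT_NE(obj, nullptr);
    // A general-purpose analyser may warn about a possible null dereference here,
    // because it does not model the semantics of the expectation macro above.
    EXPECT_EQ(obj->value(), 42);
    delete obj;
}

Tailoring of the kind the abstract mentions could, for instance, consist of teaching the checker that a failed fatal assertion (ASSERT_NE) terminates the test body; the concrete augmentations proposed are described in the paper itself.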