Empirically Detecting False Test Alarms Using Association Rules

Kim Herzig, Nachiappan Nagappan
{"title":"Empirically Detecting False Test Alarms Using Association Rules","authors":"Kim Herzig, Nachiappan Nagappan","doi":"10.1109/ICSE.2015.133","DOIUrl":null,"url":null,"abstract":"Applying code changes to software systems and testing these code changes can be a complex task that involves many different types of software testing strategies, e.g. system and integration tests. However, not all test failures reported during code integration are hinting towards code defects. Testing large systems such as the Microsoft Windows operating system requires complex test infrastructures, which may lead to test failures caused by faulty tests and test infrastructure issues. Such false test alarms are particular annoying as they raise engineer attention and require manual inspection without providing any benefit. The goal of this work is to use empirical data to minimize the number of false test alarms reported during system and integration testing. To achieve this goal, we use association rule learning to identify patterns among failing test steps that are typically for false test alarms and can be used to automatically classify them. A successful classification of false test alarms is particularly valuable for product teams as manual test failure inspection is an expensive and time-consuming process that not only costs engineering time and money but also slows down product development. We evaluating our approach on system and integration tests executed during Windows 8.1 and Microsoft Dynamics AX development. Performing more than 10,000 classifications for each product, our model shows a mean precision between 0.85 and 0.90 predicting between 34% and 48% of all false test alarms.","PeriodicalId":330487,"journal":{"name":"2015 IEEE/ACM 37th IEEE International Conference on Software Engineering","volume":"333 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"81","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE/ACM 37th IEEE International Conference on Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSE.2015.133","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 81

Abstract

Applying code changes to software systems and testing these changes can be a complex task that involves many different types of software testing strategies, e.g., system and integration tests. However, not all test failures reported during code integration hint at code defects. Testing large systems such as the Microsoft Windows operating system requires complex test infrastructures, which may lead to test failures caused by faulty tests and test infrastructure issues. Such false test alarms are particularly annoying as they demand engineers' attention and require manual inspection without providing any benefit. The goal of this work is to use empirical data to minimize the number of false test alarms reported during system and integration testing. To achieve this goal, we use association rule learning to identify patterns among failing test steps that are typical for false test alarms and can be used to classify them automatically. A successful classification of false test alarms is particularly valuable for product teams, as manual test failure inspection is an expensive and time-consuming process that not only costs engineering time and money but also slows down product development. We evaluated our approach on system and integration tests executed during Windows 8.1 and Microsoft Dynamics AX development. Performing more than 10,000 classifications for each product, our model achieves a mean precision between 0.85 and 0.90 while predicting between 34% and 48% of all false test alarms.
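To make the approach concrete, below is a minimal sketch of association rule learning applied to failing test steps. It illustrates the general technique named in the abstract, not the authors' implementation: the test step names, the toy history, and the support/confidence thresholds are all hypothetical.

```python
# Sketch: mine association rules of the form
#   {failing test steps} -> false alarm
# from historical, manually triaged test runs, then use the mined
# rules to classify new failures. Illustrative only; all names,
# data, and thresholds are hypothetical.
from itertools import combinations

# Each historical run: the set of test steps that failed, plus a label
# saying whether engineers triaged the run as a false alarm.
history = [
    ({"net_setup", "vm_boot"}, True),   # infrastructure hiccup -> false alarm
    ({"net_setup", "vm_boot"}, True),
    ({"net_setup"}, True),
    ({"login_test"}, False),            # genuine code defect
    ({"login_test", "vm_boot"}, False),
]

MIN_SUPPORT = 2       # rule must cover at least this many false alarms
MIN_CONFIDENCE = 0.9  # fraction of matching runs that were false alarms

def mine_rules(history, max_size=2):
    """Return step itemsets that reliably indicate false alarms."""
    rules = {}
    steps = set().union(*(failed for failed, _ in history))
    for size in range(1, max_size + 1):
        for itemset in combinations(sorted(steps), size):
            pattern = set(itemset)
            # Labels of all runs in which every step of the pattern failed.
            matches = [label for failed, label in history if pattern <= failed]
            false_alarms = sum(matches)
            if matches and false_alarms >= MIN_SUPPORT:
                confidence = false_alarms / len(matches)
                if confidence >= MIN_CONFIDENCE:
                    rules[itemset] = confidence
    return rules

def classify(failed_steps, rules):
    """Predict 'false alarm' if any mined rule matches the failing steps."""
    return any(set(itemset) <= failed_steps for itemset in rules)

rules = mine_rules(history)
print(rules)                                      # e.g. {('net_setup',): 1.0, ...}
print(classify({"net_setup", "vm_boot"}, rules))  # True -> likely false alarm
```

Tightening MIN_CONFIDENCE trades coverage for precision, which mirrors the operating point reported in the paper: a mean precision of 0.85 to 0.90 while catching 34% to 48% of all false test alarms.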
Latest articles from this venue:
- Contributor's Performance, Participation Intentions, Its Influencers and Project Performance
- ZoomIn: Discovering Failures by Detecting Wrong Assertions
- Agile Project Management: From Self-Managing Teams to Large-Scale Development
- How Much Up-Front? A Grounded Theory of Agile Architecture
- Avoiding Security Pitfalls with Functional Programming: A Report on the Development of a Secure XML Validator