DLFinder: Characterizing and Detecting Duplicate Logging Code Smells

Zhenhao Li, T. Chen, Jinqiu Yang, Weiyi Shang
{"title":"DLFinder: Characterizing and Detecting Duplicate Logging Code Smells","authors":"Zhenhao Li, T. Chen, Jinqiu Yang, Weiyi Shang","doi":"10.1109/ICSE.2019.00032","DOIUrl":null,"url":null,"abstract":"Developers rely on software logs for a wide variety of tasks, such as debugging, testing, program comprehension, verification, and performance analysis. Despite the importance of logs, prior studies show that there is no industrial standard on how to write logging statements. Recent research on logs often only considers the appropriateness of a log as an individual item (e.g., one single logging statement); while logs are typically analyzed in tandem. In this paper, we focus on studying duplicate logging statements, which are logging statements that have the same static text message. Such duplications in the text message are potential indications of logging code smells, which may affect developers' understanding of the dynamic view of the system. We manually studied over 3K duplicate logging statements and their surrounding code in four large-scale open source systems: Hadoop, CloudStack, ElasticSearch, and Cassandra. We uncovered five patterns of duplicate logging code smells. For each instance of the code smell, we further manually identify the problematic (i.e., require fixes) and justifiable (i.e., do not require fixes) cases. Then, we contact developers in order to verify our manual study result. We integrated our manual study result and developers' feedback into our automated static analysis tool, DLFinder, which automatically detects problematic duplicate logging code smells. We evaluated DLFinder on the four manually studied systems and two additional systems: Camel and Wicket. In total, combining the results of DLFinder and our manual analysis, we reported 82 problematic code smell instances to developers and all of them have been fixed.","PeriodicalId":6736,"journal":{"name":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"49","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSE.2019.00032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 49

Abstract

Developers rely on software logs for a wide variety of tasks, such as debugging, testing, program comprehension, verification, and performance analysis. Despite the importance of logs, prior studies show that there is no industrial standard on how to write logging statements. Recent research on logs often only considers the appropriateness of a log as an individual item (e.g., a single logging statement), even though logs are typically analyzed in tandem. In this paper, we focus on studying duplicate logging statements, i.e., logging statements that have the same static text message. Such duplication in the text messages is a potential indication of logging code smells, which may affect developers' understanding of the dynamic view of the system. We manually studied over 3K duplicate logging statements and their surrounding code in four large-scale open source systems: Hadoop, CloudStack, ElasticSearch, and Cassandra. We uncovered five patterns of duplicate logging code smells. For each instance of a code smell, we further manually identified the problematic (i.e., requiring fixes) and justifiable (i.e., not requiring fixes) cases. We then contacted developers to verify our manual study results. We integrated our manual study results and the developers' feedback into our automated static analysis tool, DLFinder, which automatically detects problematic duplicate logging code smells. We evaluated DLFinder on the four manually studied systems and two additional systems: Camel and Wicket. In total, combining the results of DLFinder and our manual analysis, we reported 82 problematic code smell instances to developers, and all of them have been fixed.
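To make the notion of "duplicate logging statements" concrete, the sketch below shows one naive way to group logging statements by their static text message and flag messages that appear more than once. It is a minimal illustration only: the regular expression, the line-by-line scan, and the `find_duplicate_log_messages` helper are assumptions made for this example and do not reflect how DLFinder itself is implemented (the paper's tool performs static analysis of the surrounding code to distinguish problematic from justifiable duplicates, which this sketch does not attempt).

```python
import re
from collections import defaultdict
from pathlib import Path

# Simplified pattern for Java logging calls such as: LOG.warn("Failed to close stream");
# Only the static string literal is captured; dynamic arguments are ignored.
LOG_CALL = re.compile(
    r'\b(?:LOG|LOGGER|log|logger)\.(?:trace|debug|info|warn|error|fatal)\s*\(\s*"([^"]*)"'
)

def find_duplicate_log_messages(root_dir):
    """Group Java logging statements under root_dir by their static text message.

    Returns a dict mapping each static message that occurs in two or more
    logging statements to the list of (file, line number) locations.
    """
    occurrences = defaultdict(list)
    for java_file in Path(root_dir).rglob("*.java"):
        try:
            lines = java_file.read_text(encoding="utf-8", errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            match = LOG_CALL.search(line)
            if match:
                static_text = match.group(1).strip()
                if static_text:  # skip logging statements with no static text
                    occurrences[static_text].append((str(java_file), lineno))
    return {msg: locs for msg, locs in occurrences.items() if len(locs) >= 2}

if __name__ == "__main__":
    # "path/to/java/project" is a placeholder for the system under analysis.
    duplicates = find_duplicate_log_messages("path/to/java/project")
    for msg, locs in duplicates.items():
        print(f'"{msg}" appears {len(locs)} times:')
        for path, lineno in locs:
            print(f"  {path}:{lineno}")
```

In this simplified view, every message reported is merely a candidate: as the paper emphasizes, many duplicate logging statements are justifiable, and deciding which instances are problematic requires examining the surrounding code, as DLFinder does.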