{"title":"On the Value of Bug Reports for Retrieval-Based Bug Localization","authors":"Dawn J Lawrie, D. Binkley","doi":"10.1109/ICSME.2018.00048","DOIUrl":null,"url":null,"abstract":"Software engineering researchers have been applying tools and techniques from information retrieval (IR) to problems such as bug localization to lower the manual effort required to perform maintenance tasks. The central challenge when using an IR-based tool is the formation of a high-quality query. When performing bug localization, one easily accessible source of query words is the bug report. A recent paper investigated the sufficiency of this source by using a genetic algorithm (GA) to build high quality queries. Unfortunately, the GA in essence \"cheats\" as it makes use of query performance when evolving a good query. This raises the question, is it feasible to attain similar results without \"cheating?\" One approach to providing cheat-free queries is to employ automatic summarization. The performance of the resulting summaries calls into question the sufficiency of the bug reports as a source of query words. To better understand the situation, Information Need Analysis (INA) is applied to quantify both how well the GA is performing and, perhaps more importantly, how well a bug report captures the vocabulary needed to perform IR-based bug localization. The results find that summarization shows potential to produce high-quality queries, but it requires more training data. Furthermore, while bug reports provide a useful source of query words, they are rather limited and thus query expansion techniques, perhaps in combination with summarization, will likely produce higher-quality queries.","PeriodicalId":6572,"journal":{"name":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","volume":"9 1","pages":"524-528"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Software Maintenance and Evolution (ICSME)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSME.2018.00048","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Software engineering researchers have been applying tools and techniques from information retrieval (IR) to problems such as bug localization to lower the manual effort required to perform maintenance tasks. The central challenge when using an IR-based tool is the formation of a high-quality query. When performing bug localization, one easily accessible source of query words is the bug report. A recent paper investigated the sufficiency of this source by using a genetic algorithm (GA) to build high-quality queries. Unfortunately, the GA in essence "cheats": it makes use of query performance when evolving a good query. This raises the question: is it feasible to attain similar results without "cheating"? One approach to providing cheat-free queries is to employ automatic summarization. The performance of the resulting summaries calls into question the sufficiency of bug reports as a source of query words. To better understand the situation, Information Need Analysis (INA) is applied to quantify both how well the GA is performing and, perhaps more importantly, how well a bug report captures the vocabulary needed to perform IR-based bug localization. The results show that summarization has the potential to produce high-quality queries, but it requires more training data. Furthermore, while bug reports provide a useful source of query words, they are rather limited, and thus query expansion techniques, perhaps in combination with summarization, will likely produce higher-quality queries.
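The "cheating" the abstract describes — a GA that evolves a query by scoring candidates against actual retrieval performance — can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's actual setup: the bug-report vocabulary, the `RELEVANT` oracle standing in for real retrieval feedback (e.g. the rank of the buggy file), and all GA parameters are hypothetical.

```python
import random

# Hypothetical bug-report vocabulary (assumed, not from the paper).
BUG_REPORT_WORDS = ["null", "pointer", "crash", "parser", "token",
                    "menu", "click", "render", "cache", "thread"]

# Toy oracle: in the paper's setting, fitness comes from real retrieval
# performance -- which is exactly why the GA is said to "cheat".
RELEVANT = {"null", "pointer", "parser", "token"}

def fitness(query):
    """Precision-like score: fraction of query words that are 'relevant'."""
    if not query:
        return 0.0
    return len(set(query) & RELEVANT) / len(query)

def mutate(query):
    """Toggle one bug-report word in or out of the query."""
    q = set(query)
    q.symmetric_difference_update({random.choice(BUG_REPORT_WORDS)})
    return sorted(q)

def evolve(generations=200, pop_size=20, seed=0):
    """Evolve a word-subset query by selection on retrieval fitness."""
    random.seed(seed)
    population = [mutate([]) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]        # keep the best half
        offspring = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

best_query = evolve()
```

Because `fitness` peeks at the relevance oracle, the evolved query tells us what the *best possible* query drawn from the report looks like — a cheat-free approach such as summarization must select words without that feedback.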