Toward Optimal Selection of Information Retrieval Models for Software Engineering Tasks

Md Masudur Rahman, Saikat Chakraborty, G. Kaiser, Baishakhi Ray
{"title":"Toward Optimal Selection of Information Retrieval Models for Software Engineering Tasks","authors":"Md Masudur Rahman, Saikat Chakraborty, G. Kaiser, Baishakhi Ray","doi":"10.1109/SCAM.2019.00022","DOIUrl":null,"url":null,"abstract":"Information Retrieval (IR) plays a pivotal role in diverse Software Engineering (SE) tasks, e.g., bug localization and triaging, bug report routing, code retrieval, requirements analysis, etc. SE tasks operate on diverse types of documents including code, text, stack-traces, and structured, semi-structured and unstructured meta-data that often contain specialized vocabularies. As the performance of any IR-based tool critically depends on the underlying document types, and given the diversity of SE corpora, it is essential to understand which models work best for which types of SE documents and tasks. We empirically investigate the interaction between IR models and document types for two representative SE tasks (bug localization and relevant project search), carefully chosen as they require a diverse set of SE artifacts (mixtures of code and text), and confirm that the models' performance varies significantly with mix of document types. Leveraging this insight, we propose a generalized framework, SRCH, to automatically select the most favorable IR model(s) for a given SE task. We evaluate SRCH w.r.t. these two tasks and confirm its effectiveness. 
Our preliminary user study shows that SRCH's intelligent adaption of the IR model(s) to the task at hand not only improves precision and recall for SE tasks but may also improve users' satisfaction.","PeriodicalId":431316,"journal":{"name":"2019 19th International Working Conference on Source Code Analysis and Manipulation (SCAM)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 19th International Working Conference on Source Code Analysis and Manipulation (SCAM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SCAM.2019.00022","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Information Retrieval (IR) plays a pivotal role in diverse Software Engineering (SE) tasks, e.g., bug localization and triaging, bug report routing, code retrieval, and requirements analysis. SE tasks operate on diverse types of documents including code, text, stack traces, and structured, semi-structured, and unstructured metadata that often contain specialized vocabularies. As the performance of any IR-based tool critically depends on the underlying document types, and given the diversity of SE corpora, it is essential to understand which models work best for which types of SE documents and tasks. We empirically investigate the interaction between IR models and document types for two representative SE tasks (bug localization and relevant project search), carefully chosen as they require a diverse set of SE artifacts (mixtures of code and text), and confirm that the models' performance varies significantly with the mix of document types. Leveraging this insight, we propose a generalized framework, SRCH, to automatically select the most favorable IR model(s) for a given SE task. We evaluate SRCH w.r.t. these two tasks and confirm its effectiveness. Our preliminary user study shows that SRCH's intelligent adaptation of the IR model(s) to the task at hand not only improves precision and recall for SE tasks but may also improve users' satisfaction.
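The core idea of the abstract — evaluating candidate IR models on a task's documents and selecting the best performer — can be sketched as follows. This is an illustrative toy, not the actual SRCH implementation (which the abstract does not detail): the corpus, the queries, and the two candidate scorers (a plain TF-IDF dot product and Okapi BM25) are all assumptions chosen to show the selection mechanics.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def idf(term, corpus):
    # Smoothed inverse document frequency over the toy corpus.
    df = sum(1 for d in corpus if term in d)
    return math.log((len(corpus) + 1) / (df + 1)) + 1

def tfidf_score(query, doc, corpus):
    # Plain TF-IDF dot-product score between query and document.
    tf = Counter(doc)
    return sum(tf[t] * idf(t, corpus) ** 2 for t in query)

def bm25_score(query, doc, corpus, k1=1.2, b=0.75):
    # Okapi BM25 with standard parameter defaults.
    tf = Counter(doc)
    avgdl = sum(len(d) for d in corpus) / len(corpus)
    score = 0.0
    for t in query:
        if tf[t] == 0:
            continue
        norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf(t, corpus) * norm
    return score

def precision_at_1(model, queries, corpus):
    # Fraction of queries whose top-ranked document is the relevant one.
    hits = 0
    for query, relevant_idx in queries:
        ranked = max(range(len(corpus)), key=lambda i: model(query, corpus[i], corpus))
        hits += ranked == relevant_idx
    return hits / len(queries)

# Hypothetical SE corpus mixing text-like and code-like documents.
corpus = [tokenize(d) for d in [
    "null pointer exception thrown in parser module",
    "def parse tokens stream return token list",
    "user login fails with session timeout error",
    "class LoginHandler retries session on timeout",
]]

# Validation queries labeled with the index of their relevant document.
queries = [(tokenize(q), idx) for q, idx in [
    ("parser exception", 0),
    ("parse tokens", 1),
    ("login timeout", 2),
]]

models = {"tfidf": tfidf_score, "bm25": bm25_score}
scores = {name: precision_at_1(fn, queries, corpus) for name, fn in models.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

On a corpus this small both scorers rank every query correctly, so the selection is a tie; the paper's point is that on realistic SE corpora, where the mix of code, text, and stack traces varies, the winning model differs per task, which is what an automated selector exploits.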