Improving requirements completeness: automated assistance through large language models

Requirements Engineering · IF 2.1 · CAS Tier 3 (Computer Science) · JCR Q3 (Computer Science, Information Systems) · Pub Date: 2024-03-25 · DOI: 10.1007/s00766-024-00416-3
Dipeeka Luitel, Shabnam Hassani, Mehrdad Sabetzadeh
{"title":"Improving requirements completeness: automated assistance through large language models","authors":"Dipeeka Luitel, Shabnam Hassani, Mehrdad Sabetzadeh","doi":"10.1007/s00766-024-00416-3","DOIUrl":null,"url":null,"abstract":"<p>Natural language (NL) is arguably the most prevalent medium for expressing systems and software requirements. Detecting incompleteness in NL requirements is a major challenge. One approach to identify incompleteness is to compare requirements with external sources. Given the rise of large language models (LLMs), an interesting question arises: Are LLMs useful external sources of knowledge for detecting potential incompleteness in NL requirements? This article explores this question by utilizing BERT. Specifically, we employ BERT’s masked language model to generate contextualized predictions for filling masked slots in requirements. To simulate incompleteness, we withhold content from the requirements and assess BERT’s ability to predict terminology that is present in the withheld content but absent in the disclosed content. BERT can produce multiple predictions per mask. Our first contribution is determining the optimal number of predictions per mask, striking a balance between effectively identifying omissions in requirements and mitigating noise present in the predictions. Our second contribution involves designing a machine learning-based filter to post-process BERT’s predictions and further reduce noise. We conduct an empirical evaluation using 40 requirements specifications from the PURE dataset. Our findings indicate that: (1) BERT’s predictions effectively highlight terminology that is missing from requirements, (2) BERT outperforms simpler baselines in identifying relevant yet missing terminology, and (3) our filter reduces noise in the predictions, enhancing BERT’s effectiveness for completeness checking of requirements.</p>","PeriodicalId":20912,"journal":{"name":"Requirements Engineering","volume":"49 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Requirements Engineering","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00766-024-00416-3","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Natural language (NL) is arguably the most prevalent medium for expressing systems and software requirements. Detecting incompleteness in NL requirements is a major challenge. One approach to identify incompleteness is to compare requirements with external sources. Given the rise of large language models (LLMs), an interesting question arises: Are LLMs useful external sources of knowledge for detecting potential incompleteness in NL requirements? This article explores this question by utilizing BERT. Specifically, we employ BERT’s masked language model to generate contextualized predictions for filling masked slots in requirements. To simulate incompleteness, we withhold content from the requirements and assess BERT’s ability to predict terminology that is present in the withheld content but absent in the disclosed content. BERT can produce multiple predictions per mask. Our first contribution is determining the optimal number of predictions per mask, striking a balance between effectively identifying omissions in requirements and mitigating noise present in the predictions. Our second contribution involves designing a machine learning-based filter to post-process BERT’s predictions and further reduce noise. We conduct an empirical evaluation using 40 requirements specifications from the PURE dataset. Our findings indicate that: (1) BERT’s predictions effectively highlight terminology that is missing from requirements, (2) BERT outperforms simpler baselines in identifying relevant yet missing terminology, and (3) our filter reduces noise in the predictions, enhancing BERT’s effectiveness for completeness checking of requirements.
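To make the masked-prediction setup concrete, the following is a minimal sketch (illustrative, not the authors' implementation) of querying a BERT masked language model for candidate terminology at an artificially withheld slot in a requirement. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; the example requirement sentence and the top_k value are assumptions for illustration only.

```python
# Minimal sketch (illustrative, not the paper's implementation) of querying
# BERT's masked language model for candidate fillers at a withheld slot.
# Assumes the Hugging Face "transformers" package and the bert-base-uncased
# checkpoint; the requirement sentence and top_k value are made up here.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# A requirement with one masked slot, simulating withheld (missing) content.
requirement = "The system shall [MASK] all failed login attempts in the audit log."

# Request several candidate predictions per mask; the article studies how many
# predictions to keep so that omissions are caught without too much noise.
for prediction in fill_mask(requirement, top_k=5):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

In the article's setup, such predictions are compared against terminology appearing in the withheld content, and a machine learning-based filter then post-processes the predictions to discard noisy suggestions.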


Source Journal
Requirements Engineering
Category: Engineering & Technology - Computer Science: Software Engineering
CiteScore: 7.10
Self-citation rate: 10.70%
Annual publications: 27
Review time: >12 weeks
Journal description: The journal provides a focus for the dissemination of new results about the elicitation, representation and validation of requirements of software-intensive information systems or applications. Theoretical and applied submissions are welcome, but all papers must explicitly address:
- the practical consequences of the ideas for the design of complex systems
- how the ideas should be evaluated by the reflective practitioner
The journal is motivated by a multi-disciplinary view that considers requirements not only in terms of software component specification but also in terms of activities for their elicitation, representation and agreement, carried out within an organisational and social context. To this end, contributions are sought from fields such as software engineering, information systems, occupational sociology, cognitive and organisational psychology, human-computer interaction, computer-supported cooperative work, linguistics and philosophy, for work that specifically addresses requirements engineering issues.
Latest articles in this journal
- New product development based on non-functional requirements in renewable energy industries using hesitant fuzzy QFD-DFX approach
- Recommending and release planning of user-driven functionality deletion for mobile apps
- Benchmarking requirement template systems: comparing appropriateness, usability, and expressiveness
- A natural language-based method to specify privacy requirements: an evaluation with practitioners
- Navigating personalized medication: unveiling user needs to forge a cutting-edge platform for proactive prevention and monitoring of adverse drug reactions