Exploring the Integration of Large Language Models in Industrial Test Maintenance Processes

Ludvig Lemner, Linnea Wahlgren, Gregory Gay, Nasser Mohammadiha, Jingxiong Liu, Joakim Wennerberg
arXiv - CS - Software Engineering, arXiv:2409.06416. Published 2024-09-10.
Much of the cost and effort required during the software testing process is
invested in performing test maintenance - the addition, removal, or
modification of test cases to keep the test suite in sync with the
system-under-test or to otherwise improve its quality. Tool support could
reduce the cost - and improve the quality - of test maintenance by automating
aspects of the process or by providing guidance and support to developers. In this study, we explore the capabilities and applications of large language
models (LLMs) - complex machine learning models adapted to textual analysis -
to support test maintenance. We conducted a case study at Ericsson AB where we
explored the triggers that indicate the need for test maintenance, the actions
that LLMs can take, and the considerations that must be weighed when deploying
LLMs in an industrial setting. We also proposed and demonstrated
implementations of two multi-agent architectures that can predict which test
cases require maintenance following a change to the source code. Collectively,
these contributions advance our theoretical and practical understanding of how
LLMs can be deployed to benefit industrial test maintenance processes.
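The abstract does not detail how the proposed multi-agent architectures decide which test cases need maintenance after a source change. As a purely illustrative sketch, the prediction step can be framed as a function from a code diff and a test suite to a set of flagged tests. The symbol-overlap heuristic below is a hypothetical stand-in for the paper's LLM agents, used only to keep the example self-contained and runnable:

```python
# Illustrative sketch only: in the paper this decision is delegated to
# cooperating LLM agents; here a simple symbol-overlap heuristic stands in
# so the pipeline runs without model access. All names are hypothetical.
import re
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    covered_symbols: set[str]  # identifiers the test exercises (assumed metadata)


def extract_changed_symbols(diff: str) -> set[str]:
    """Naively pull identifiers from the changed lines of a unified diff."""
    symbols: set[str] = set()
    for line in diff.splitlines():
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            symbols.update(re.findall(r"[A-Za-z_]\w+", line))
    return symbols


def predict_maintenance(diff: str, tests: list[TestCase]) -> list[str]:
    """Flag tests whose covered symbols overlap the changed symbols."""
    changed = extract_changed_symbols(diff)
    return [t.name for t in tests if t.covered_symbols & changed]


diff = """--- a/billing.py
+++ b/billing.py
-def compute_total(items):
+def compute_total(items, tax_rate):
"""
tests = [
    TestCase("test_compute_total", {"compute_total", "items"}),
    TestCase("test_login", {"login", "session"}),
]
print(predict_maintenance(diff, tests))  # ['test_compute_total']
```

A multi-agent version would replace `predict_maintenance` with a round of agents that, for example, summarize the diff, retrieve relevant test code, and vote on whether each test requires maintenance; the abstract confirms only that two such architectures were proposed, not their internals.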