AutoSafeCoder: A Multi-Agent Framework for Securing LLM Code Generation through Static Analysis and Fuzz Testing

Ana Nunez, Nafis Tanveer Islam, Sumit Kumar Jha, Peyman Najafirad
{"title":"AutoSafeCoder:通过静态分析和模糊测试确保 LLM 代码生成安全的多代理框架","authors":"Ana Nunez, Nafis Tanveer Islam, Sumit Kumar Jha, Peyman Najafirad","doi":"arxiv-2409.10737","DOIUrl":null,"url":null,"abstract":"Recent advancements in automatic code generation using large language models\n(LLMs) have brought us closer to fully automated secure software development.\nHowever, existing approaches often rely on a single agent for code generation,\nwhich struggles to produce secure, vulnerability-free code. Traditional program\nsynthesis with LLMs has primarily focused on functional correctness, often\nneglecting critical dynamic security implications that happen during runtime.\nTo address these challenges, we propose AutoSafeCoder, a multi-agent framework\nthat leverages LLM-driven agents for code generation, vulnerability analysis,\nand security enhancement through continuous collaboration. The framework\nconsists of three agents: a Coding Agent responsible for code generation, a\nStatic Analyzer Agent identifying vulnerabilities, and a Fuzzing Agent\nperforming dynamic testing using a mutation-based fuzzing approach to detect\nruntime errors. Our contribution focuses on ensuring the safety of multi-agent\ncode generation by integrating dynamic and static testing in an iterative\nprocess during code generation by LLM that improves security. Experiments using\nthe SecurityEval dataset demonstrate a 13% reduction in code vulnerabilities\ncompared to baseline LLMs, with no compromise in functionality.","PeriodicalId":501278,"journal":{"name":"arXiv - CS - Software Engineering","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AutoSafeCoder: A Multi-Agent Framework for Securing LLM Code Generation through Static Analysis and Fuzz Testing\",\"authors\":\"Ana Nunez, Nafis Tanveer Islam, Sumit Kumar Jha, Peyman Najafirad\",\"doi\":\"arxiv-2409.10737\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent advancements in automatic code generation using large language models\\n(LLMs) have brought us closer to fully automated secure software development.\\nHowever, existing approaches often rely on a single agent for code generation,\\nwhich struggles to produce secure, vulnerability-free code. Traditional program\\nsynthesis with LLMs has primarily focused on functional correctness, often\\nneglecting critical dynamic security implications that happen during runtime.\\nTo address these challenges, we propose AutoSafeCoder, a multi-agent framework\\nthat leverages LLM-driven agents for code generation, vulnerability analysis,\\nand security enhancement through continuous collaboration. The framework\\nconsists of three agents: a Coding Agent responsible for code generation, a\\nStatic Analyzer Agent identifying vulnerabilities, and a Fuzzing Agent\\nperforming dynamic testing using a mutation-based fuzzing approach to detect\\nruntime errors. Our contribution focuses on ensuring the safety of multi-agent\\ncode generation by integrating dynamic and static testing in an iterative\\nprocess during code generation by LLM that improves security. 
Experiments using\\nthe SecurityEval dataset demonstrate a 13% reduction in code vulnerabilities\\ncompared to baseline LLMs, with no compromise in functionality.\",\"PeriodicalId\":501278,\"journal\":{\"name\":\"arXiv - CS - Software Engineering\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Software Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10737\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10737","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent advancements in automatic code generation using large language models (LLMs) have brought us closer to fully automated secure software development. However, existing approaches often rely on a single agent for code generation, which struggles to produce secure, vulnerability-free code. Traditional program synthesis with LLMs has primarily focused on functional correctness, often neglecting critical dynamic security implications that arise at runtime. To address these challenges, we propose AutoSafeCoder, a multi-agent framework that leverages LLM-driven agents for code generation, vulnerability analysis, and security enhancement through continuous collaboration. The framework consists of three agents: a Coding Agent responsible for code generation, a Static Analyzer Agent that identifies vulnerabilities, and a Fuzzing Agent that performs dynamic testing using a mutation-based fuzzing approach to detect runtime errors. Our contribution centers on ensuring the safety of multi-agent code generation by integrating dynamic and static testing into an iterative LLM code-generation process, thereby improving security. Experiments on the SecurityEval dataset demonstrate a 13% reduction in code vulnerabilities compared to baseline LLMs, with no compromise in functionality.
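The abstract describes an iterative loop in which the Coding Agent's output is vetted by the Static Analyzer Agent and the Fuzzing Agent, with their findings fed back for regeneration. The sketch below illustrates one way such a loop and a mutation-based fuzzing step could be wired together in Python. All function names, the `process` entry point, and the toy analysis rule are hypothetical placeholders chosen for illustration under the assumptions stated in the comments; this is not the authors' implementation.

```python
"""Minimal sketch of an AutoSafeCoder-style generate/analyze/fuzz loop.

Assumptions (not taken from the paper): the Coding Agent returns Python
source text defining a function named `process`, and feedback is passed
back to it as a plain string.
"""
import random

MAX_ITERATIONS = 5  # arbitrary cap on repair rounds, for illustration only


def coding_agent(task: str, feedback: str = "") -> str:
    """Placeholder for the LLM Coding Agent.

    In the real framework this would prompt an LLM with the coding task
    plus any vulnerability feedback from the other agents.
    """
    # Toy candidate program returned as source text.
    return "def process(data: bytes) -> int:\n    return len(data)\n"


def static_analyzer_agent(code: str) -> list[str]:
    """Placeholder Static Analyzer Agent: returns a list of findings.

    A real agent would review the code for CWE-style weaknesses; here a
    single toy rule stands in for that analysis.
    """
    findings = []
    if "eval(" in code or "exec(" in code:
        findings.append("use of eval/exec on untrusted input")
    return findings


def mutate(seed: bytes) -> bytes:
    """One mutation-based fuzzing step: flip a random bit or insert a byte."""
    data = bytearray(seed)
    if data and random.random() < 0.5:
        data[random.randrange(len(data))] ^= 1 << random.randrange(8)
    else:
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    return bytes(data)


def fuzzing_agent(code: str, seeds: list[bytes], trials: int = 200) -> list[str]:
    """Placeholder Fuzzing Agent: run the candidate on mutated inputs and
    record any runtime errors as feedback."""
    namespace: dict = {}
    exec(code, namespace)          # illustration only: never exec untrusted code like this
    target = namespace["process"]  # assumed entry point of the candidate
    crashes = []
    for _ in range(trials):
        fuzz_input = mutate(random.choice(seeds))
        try:
            target(fuzz_input)
        except Exception as exc:   # runtime error -> report back to the coder
            crashes.append(f"{type(exc).__name__} on input {fuzz_input!r}")
    return crashes


def autosafecoder_loop(task: str, seeds: list[bytes]) -> str:
    """Iterative collaboration: generate, statically check, fuzz, regenerate."""
    code = coding_agent(task)
    for _ in range(MAX_ITERATIONS):
        issues = static_analyzer_agent(code) + fuzzing_agent(code, seeds)
        if not issues:
            return code            # no findings and no crashes: accept
        code = coding_agent(task, feedback="; ".join(issues))
    return code                    # stop after a fixed number of rounds


if __name__ == "__main__":
    print(autosafecoder_loop("parse a length-prefixed packet", seeds=[b"hello"]))
```

Capping the loop at a fixed number of rounds and concatenating all findings into one feedback string are choices made here for brevity; the abstract does not specify how the agents exchange feedback or when iteration stops.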