HexaCoder: Secure Code Generation via Oracle-Guided Synthetic Training Data

Hossein Hajipour, Lea Schönherr, Thorsten Holz, Mario Fritz
{"title":"HexaCoder:通过 Oracle 引导的合成训练数据安全生成代码","authors":"Hossein Hajipour, Lea Schönherr, Thorsten Holz, Mario Fritz","doi":"arxiv-2409.06446","DOIUrl":null,"url":null,"abstract":"Large language models (LLMs) have shown great potential for automatic code\ngeneration and form the basis for various tools such as GitHub Copilot.\nHowever, recent studies highlight that many LLM-generated code contains serious\nsecurity vulnerabilities. While previous work tries to address this by training\nmodels that generate secure code, these attempts remain constrained by limited\naccess to training data and labor-intensive data preparation. In this paper, we introduce HexaCoder, a novel approach to enhance the\nability of LLMs to generate secure codes by automatically synthesizing secure\ncodes, which reduces the effort of finding suitable training data. HexaCoder\ncomprises two key components: an oracle-guided data synthesis pipeline and a\ntwo-step process for secure code generation. The data synthesis pipeline\ngenerates pairs of vulnerable and fixed codes for specific Common Weakness\nEnumeration (CWE) types by utilizing a state-of-the-art LLM for repairing\nvulnerable code. A security oracle identifies vulnerabilities, and a\nstate-of-the-art LLM repairs them by extending and/or editing the codes,\ncreating data pairs for fine-tuning using the Low-Rank Adaptation (LoRA)\nmethod. Each example of our fine-tuning dataset includes the necessary\nsecurity-related libraries and code that form the basis of our novel two-step\ngeneration approach. This allows the model to integrate security-relevant\nlibraries before generating the main code, significantly reducing the number of\ngenerated vulnerable codes by up to 85% compared to the baseline methods. 
We\nperform extensive evaluations on three different benchmarks for four LLMs,\ndemonstrating that HexaCoder not only improves the security of the generated\ncode but also maintains a high level of functional correctness.","PeriodicalId":501278,"journal":{"name":"arXiv - CS - Software Engineering","volume":"63 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HexaCoder: Secure Code Generation via Oracle-Guided Synthetic Training Data\",\"authors\":\"Hossein Hajipour, Lea Schönherr, Thorsten Holz, Mario Fritz\",\"doi\":\"arxiv-2409.06446\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large language models (LLMs) have shown great potential for automatic code\\ngeneration and form the basis for various tools such as GitHub Copilot.\\nHowever, recent studies highlight that many LLM-generated code contains serious\\nsecurity vulnerabilities. While previous work tries to address this by training\\nmodels that generate secure code, these attempts remain constrained by limited\\naccess to training data and labor-intensive data preparation. In this paper, we introduce HexaCoder, a novel approach to enhance the\\nability of LLMs to generate secure codes by automatically synthesizing secure\\ncodes, which reduces the effort of finding suitable training data. HexaCoder\\ncomprises two key components: an oracle-guided data synthesis pipeline and a\\ntwo-step process for secure code generation. The data synthesis pipeline\\ngenerates pairs of vulnerable and fixed codes for specific Common Weakness\\nEnumeration (CWE) types by utilizing a state-of-the-art LLM for repairing\\nvulnerable code. A security oracle identifies vulnerabilities, and a\\nstate-of-the-art LLM repairs them by extending and/or editing the codes,\\ncreating data pairs for fine-tuning using the Low-Rank Adaptation (LoRA)\\nmethod. 
Each example of our fine-tuning dataset includes the necessary\\nsecurity-related libraries and code that form the basis of our novel two-step\\ngeneration approach. This allows the model to integrate security-relevant\\nlibraries before generating the main code, significantly reducing the number of\\ngenerated vulnerable codes by up to 85% compared to the baseline methods. We\\nperform extensive evaluations on three different benchmarks for four LLMs,\\ndemonstrating that HexaCoder not only improves the security of the generated\\ncode but also maintains a high level of functional correctness.\",\"PeriodicalId\":501278,\"journal\":{\"name\":\"arXiv - CS - Software Engineering\",\"volume\":\"63 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Software Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06446\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06446","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Large language models (LLMs) have shown great potential for automatic code generation and form the basis for various tools such as GitHub Copilot. However, recent studies highlight that many LLM-generated code contains serious security vulnerabilities. While previous work tries to address this by training models that generate secure code, these attempts remain constrained by limited access to training data and labor-intensive data preparation. In this paper, we introduce HexaCoder, a novel approach to enhance the ability of LLMs to generate secure codes by automatically synthesizing secure codes, which reduces the effort of finding suitable training data. HexaCoder comprises two key components: an oracle-guided data synthesis pipeline and a two-step process for secure code generation. The data synthesis pipeline generates pairs of vulnerable and fixed codes for specific Common Weakness Enumeration (CWE) types by utilizing a state-of-the-art LLM for repairing vulnerable code. A security oracle identifies vulnerabilities, and a state-of-the-art LLM repairs them by extending and/or editing the codes, creating data pairs for fine-tuning using the Low-Rank Adaptation (LoRA) method. Each example of our fine-tuning dataset includes the necessary security-related libraries and code that form the basis of our novel two-step generation approach. This allows the model to integrate security-relevant libraries before generating the main code, significantly reducing the number of generated vulnerable codes by up to 85% compared to the baseline methods. We perform extensive evaluations on three different benchmarks for four LLMs, demonstrating that HexaCoder not only improves the security of the generated code but also maintains a high level of functional correctness.