Automating the correctness assessment of AI-generated code for security contexts

IF 3.7 · Region 2, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Journal of Systems and Software · Pub Date: 2024-05-24 · DOI: 10.1016/j.jss.2024.112113
Domenico Cotroneo, Alessio Foggia, Cristina Improta, Pietro Liguori, Roberto Natella
Citations: 0

Abstract

Evaluating the correctness of code generated by AI is a challenging open problem. In this paper, we propose a fully automated method, named ACCA, to evaluate the correctness of AI-generated code for security purposes. The method uses symbolic execution to assess whether the AI-generated code behaves as a reference implementation. We use ACCA to assess four state-of-the-art models trained to generate security-oriented assembly code and compare the results of the evaluation with different baseline solutions, including output similarity metrics, widely used in the field, and the well-known ChatGPT, the AI-powered language model developed by OpenAI.
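ACCA itself relies on symbolic execution, which is beyond a short sketch; the following toy example (with an invented three-instruction ISA and a hypothetical `run` helper, neither taken from the paper) only illustrates the underlying idea that correctness is judged by behavior against a reference implementation rather than by textual similarity:

```python
# Minimal sketch (not ACCA itself): two snippets count as correct
# when they leave the machine in the same final state, even if the
# text differs. The three-instruction toy ISA is invented here.

def run(snippet, regs):
    """Execute a toy assembly snippet over a register dict."""
    regs = dict(regs)
    for line in snippet.strip().splitlines():
        op, dst, src = line.split()
        val = regs[src] if src in regs else int(src)
        if op == "mov":
            regs[dst] = val
        elif op == "add":
            regs[dst] += val
        elif op == "xor":
            regs[dst] ^= val
    return regs

reference = "mov eax 0\nadd eax 5"
generated = "xor eax eax\nadd eax 5"   # textually different, same behavior

init = {"eax": 7}
equivalent = run(reference, init) == run(generated, init)
print("behaviorally equivalent:", equivalent)
```

The real method checks equivalence symbolically over all inputs, not just one concrete initial state as above; the concrete run here is only meant to show why a token-overlap similarity metric would under-rate the `generated` snippet even though it is correct.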

Our experiments show that our method outperforms the baseline solutions and assesses the correctness of the AI-generated code similar to the human-based evaluation, which is considered the ground truth for the assessment in the field. Moreover, ACCA has a very strong correlation with the human evaluation (Pearson’s correlation coefficient r=0.84 on average). Finally, since it is a fully automated solution that does not require any human intervention, the proposed method performs the assessment of every code snippet in ∼0.17 s on average, which is definitely lower than the average time required by human analysts to manually inspect the code, based on our experience.

Source journal: Journal of Systems and Software (Engineering Technology / Computer Science: Theory & Methods)
CiteScore: 8.60
Self-citation rate: 5.70%
Annual articles: 193
Review time: 16 weeks
Journal introduction: The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
• Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
• Agile, model-driven, service-oriented, open source and global software development
• Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
• Human factors and management concerns of software development
• Data management and big data issues of software systems
• Metrics and evaluation, data mining of software development resources
• Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.
Latest articles in this journal:
• FSECAM: A contextual thematic approach for linking feature to multi-level software architectural components
• Exploring emergent microservice evolution in elastic deployment environments
• An empirical study of AI techniques in mobile applications
• Information needs in bug reports for web applications
• Development and benchmarking of multilingual code clone detector