Generating Adversarial Source Programs Using Important Tokens-based Structural Transformations

Penglong Chen, Zhuguo Li, Yu Wen, Lili Liu
{"title":"Generating Adversarial Source Programs Using Important Tokens-based Structural Transformations","authors":"Penglong Chen, Zhuguo Li, Yu Wen, Lili Liu","doi":"10.1109/ICECCS54210.2022.00029","DOIUrl":null,"url":null,"abstract":"Deep learning models have been widely used in source code processing tasks, such as code captioning, code summarization, code completion, and code classification. Recent studies have shown that deep learning-based source code processing models are vulnerable. Attackers can generate adversarial examples by adding perturbations to source programs. Existing attack methods perturb a source program by renaming one or multiple variables in the program. These attack methods do not take into account the perturbation of the equivalent structural transformations of the source code. We propose a set of program transformations involving identifier renaming and structural transformations, which can ensure that the perturbed program retains the original semantics but can fool the source code processing model to change the original prediction result. We propose a novel method of applying semantics-preserving structural transformations to attack the source program pro-cessing model in the white-box setting. This is the first time that semantics-preserving structural transformations are applied to generate adversarial examples of source code processing models. We first find the important tokens in the program by calculating the contribution values of each part of the program, then select the best transformation for each important token to generate semantic adversarial examples. The experimental results show that the attack success rate of our attack method can improve 8.29 % on average compared with the state-of-the-art attack method; adversarial training using the adversarial examples generated by our attack method can reduce the attack success rates of source code processing models by 21.79% on average.","PeriodicalId":344493,"journal":{"name":"2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 26th International Conference on Engineering of Complex Computer Systems (ICECCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICECCS54210.2022.00029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Deep learning models have been widely used in source code processing tasks such as code captioning, code summarization, code completion, and code classification. Recent studies have shown that deep learning-based source code processing models are vulnerable: attackers can generate adversarial examples by adding perturbations to source programs. Existing attack methods perturb a source program by renaming one or more variables in the program; they do not consider perturbations based on semantically equivalent structural transformations of the source code. We propose a set of program transformations, involving both identifier renaming and structural transformations, that ensures the perturbed program retains its original semantics while fooling the source code processing model into changing its prediction. We propose a novel method that applies semantics-preserving structural transformations to attack source code processing models in the white-box setting. This is the first time that semantics-preserving structural transformations are applied to generate adversarial examples for source code processing models. We first find the important tokens in a program by calculating the contribution value of each part of the program, then select the best transformation for each important token to generate semantic adversarial examples. Experimental results show that our attack method improves the attack success rate by 8.29% on average over the state-of-the-art attack method, and that adversarial training with the adversarial examples generated by our method reduces the attack success rates of source code processing models by 21.79% on average.
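To make the two-step procedure in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a white-box importance-then-transform attack loop: token importance is estimated from the gradient of the loss with respect to each token's embedding (one plausible reading of "contribution value"), and for each of the most important tokens the transformation that raises the model's loss the most is kept. The names `model`, `embed`, `apply_transform`, and the transformation set are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the importance-then-transform attack described in
# the abstract. `model` maps a (seq_len, dim) embedding tensor to class
# logits; `apply_transform` is an assumed semantics-preserving rewrite
# (e.g., identifier renaming or an equivalent structural transformation).
import torch
import torch.nn.functional as F

def token_importance(model, embed, token_ids, label):
    """White-box contribution values: gradient magnitude of the loss
    with respect to each token's embedding."""
    emb = embed(token_ids).detach().requires_grad_(True)
    logits = model(emb)
    loss = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
    loss.backward()
    return emb.grad.norm(dim=-1)  # one importance score per token

def attack(model, embed, token_ids, label,
           transformations, apply_transform, k=5):
    """Greedy attack: for each of the k most important tokens, try every
    semantics-preserving transformation, keep the one that raises the
    loss the most, and stop once the prediction flips."""
    scores = token_importance(model, embed, token_ids, label)
    for pos in scores.argsort(descending=True)[:k]:
        best_ids, best_loss = token_ids, -float("inf")
        for t in transformations:
            cand = apply_transform(token_ids, pos.item(), t)
            with torch.no_grad():
                logits = model(embed(cand))
                loss = F.cross_entropy(logits.unsqueeze(0),
                                       label.unsqueeze(0)).item()
            if loss > best_loss:
                best_ids, best_loss = cand, loss
        token_ids = best_ids
        with torch.no_grad():
            if model(embed(token_ids)).argmax().item() != label.item():
                return token_ids  # adversarial example found
    return token_ids
```

Under these assumptions, the greedy per-token selection is what lets structural transformations compete directly with identifier renaming: whichever perturbation most increases the loss at an important token is applied, so the search naturally mixes both transformation types.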