Towards JavaScript program repair with Generative Pre-trained Transformer (GPT-2)

Márk Lajkó, Viktor Csuvik, László Vidács
{"title":"使用生成式预训练转换器(GPT-2)修复JavaScript程序","authors":"Márk Lajkó, Viktor Csuvik, László Vidács","doi":"10.1145/3524459.3527350","DOIUrl":null,"url":null,"abstract":"The goal of Automated Program Repair (APR) is to find a fix to software bugs, without human intervention. The so-called Gener-ate and Validate (G&V) approach deemed to be the most popular method in the last few years, where the APR tool creates a patch and it is validated against an oracle. Recent years for Natural Language Processing (NLP) were of great interest, with new pre-trained models shattering records on tasks ranging from sentiment analysis to question answering. Usually these deep learning models inspire the APR community as well. These approaches usually require a large dataset on which the model can be trained (or fine-tuned) and evaluated. The criterion to accept a patch depends on the underlying dataset, but usually the generated patch should be exactly the same as the one created by a human developer. As NLP models are more and more capable to form sentences, and the sentences will form coherent paragraphs, the APR tools are also better and better at generating syntactically and semantically correct source code. As the Generative Pre-trained Transformer (GPT) model is now avail-able to everyone thanks to the NLP and AI research community, it can be fine-tuned to specific tasks (not necessarily on natural language). In this work we use the GPT-2 model to generate source code, to the best of our knowledge, the GPT-2 model was not used for Automated Program Repair so far. The model is fine-tuned for a specific task: it has been taught to fix JavaScript bugs automatically. To do so, we trained the model on 16863JS code snippets, where it could learn the nature of the observed programming language. In our experiments we observed that the GPT-2 model was able to learn how to write syntactically correct source code almost on every attempt, although it failed to learn good bug-fixes in some cases. Nonetheless it was able to generate the correct fixes in most of the cases, resulting in an overall accuracy up to 17.25%.","PeriodicalId":131481,"journal":{"name":"2022 IEEE/ACM International Workshop on Automated Program Repair (APR)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Towards JavaScript program repair with Generative Pre-trained Transformer (GPT-2)\",\"authors\":\"Márk Lajkó, Viktor Csuvik, László Vidács\",\"doi\":\"10.1145/3524459.3527350\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The goal of Automated Program Repair (APR) is to find a fix to software bugs, without human intervention. The so-called Gener-ate and Validate (G&V) approach deemed to be the most popular method in the last few years, where the APR tool creates a patch and it is validated against an oracle. Recent years for Natural Language Processing (NLP) were of great interest, with new pre-trained models shattering records on tasks ranging from sentiment analysis to question answering. Usually these deep learning models inspire the APR community as well. These approaches usually require a large dataset on which the model can be trained (or fine-tuned) and evaluated. The criterion to accept a patch depends on the underlying dataset, but usually the generated patch should be exactly the same as the one created by a human developer. 
As NLP models are more and more capable to form sentences, and the sentences will form coherent paragraphs, the APR tools are also better and better at generating syntactically and semantically correct source code. As the Generative Pre-trained Transformer (GPT) model is now avail-able to everyone thanks to the NLP and AI research community, it can be fine-tuned to specific tasks (not necessarily on natural language). In this work we use the GPT-2 model to generate source code, to the best of our knowledge, the GPT-2 model was not used for Automated Program Repair so far. The model is fine-tuned for a specific task: it has been taught to fix JavaScript bugs automatically. To do so, we trained the model on 16863JS code snippets, where it could learn the nature of the observed programming language. In our experiments we observed that the GPT-2 model was able to learn how to write syntactically correct source code almost on every attempt, although it failed to learn good bug-fixes in some cases. Nonetheless it was able to generate the correct fixes in most of the cases, resulting in an overall accuracy up to 17.25%.\",\"PeriodicalId\":131481,\"journal\":{\"name\":\"2022 IEEE/ACM International Workshop on Automated Program Repair (APR)\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE/ACM International Workshop on Automated Program Repair (APR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3524459.3527350\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/ACM International Workshop on Automated Program Repair (APR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3524459.3527350","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

The goal of Automated Program Repair (APR) is to fix software bugs without human intervention. The so-called Generate and Validate (G&V) approach has been the most popular method in recent years: the APR tool creates a patch, which is then validated against an oracle. Recent years have also been exciting for Natural Language Processing (NLP), with new pre-trained models shattering records on tasks ranging from sentiment analysis to question answering. These deep learning models usually inspire the APR community as well. Such approaches typically require a large dataset on which the model can be trained (or fine-tuned) and evaluated. The criterion for accepting a patch depends on the underlying dataset, but usually the generated patch should be exactly the same as the one written by a human developer. As NLP models become ever more capable of forming sentences, and those sentences of forming coherent paragraphs, APR tools are likewise getting better at generating syntactically and semantically correct source code. Since the Generative Pre-trained Transformer (GPT) model is now available to everyone thanks to the NLP and AI research community, it can be fine-tuned for specific tasks (not necessarily on natural language). In this work we use the GPT-2 model to generate source code; to the best of our knowledge, GPT-2 has not previously been used for Automated Program Repair. The model is fine-tuned for a specific task: it has been taught to fix JavaScript bugs automatically. To do so, we trained it on 16,863 JS code snippets, from which it could learn the nature of the observed programming language. In our experiments we observed that the GPT-2 model learned to write syntactically correct source code on almost every attempt, although in some cases it failed to learn good bug fixes. Nonetheless, it generated the correct fix in many cases, resulting in an overall accuracy of up to 17.25%.
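The sketch below illustrates the kind of fine-tuning the abstract describes: GPT-2 is trained on buggy/fixed JavaScript pairs joined by a separator, so that prompting the model with a buggy snippet makes it continue with a candidate fix. This is a minimal, assumed reconstruction using Hugging Face transformers, not the authors' published pipeline; the separator string, hyperparameters, and toy data are all illustrative.

```python
# Minimal sketch (not the authors' exact setup) of fine-tuning GPT-2 on
# buggy -> fixed JavaScript pairs. SEP, the hyperparameters, and the toy
# pair below are assumptions for illustration.
import torch
from torch.utils.data import Dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

SEP = " <|fix|> "  # assumed separator between the buggy and the fixed snippet

class RepairDataset(Dataset):
    """Each example is one string: '<buggy JS> <|fix|> <fixed JS><eos>'."""
    def __init__(self, pairs, tokenizer, max_len=256):
        self.examples = [
            tokenizer(buggy + SEP + fixed + tokenizer.eos_token,
                      truncation=True, max_length=max_len,
                      return_tensors="pt").input_ids.squeeze(0)
            for buggy, fixed in pairs
        ]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# In the paper this would be the 16,863 JS snippets; one toy pair stands in here.
train_pairs = [("if (x = 1) { run(); }", "if (x === 1) { run(); }")]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-js-repair",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=RepairDataset(train_pairs, tokenizer),
    # mlm=False gives plain causal language modeling over the padded batches
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Framing repair as next-token prediction over a `buggy SEP fixed` string is what lets an off-the-shelf language model be reused unchanged: the fix is simply the continuation the model learns to emit after the separator.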
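After fine-tuning, the generate-and-validate loop from the abstract reduces to: prompt the model with the buggy code, decode what it writes after the separator, and accept the patch only if it is textually identical to the developer-written fix. The sketch below, under the same assumptions as above (a model and tokenizer fine-tuned as in the previous block), shows that exact-match oracle; it is this strict criterion against which the reported accuracy of up to 17.25% is measured.

```python
SEP = " <|fix|> "  # same assumed separator as in the fine-tuning sketch

def generate_fix(model, tokenizer, buggy, max_new_tokens=128):
    """Prompt the fine-tuned model with a buggy snippet and decode its patch."""
    ids = tokenizer(buggy + SEP, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens,
                         pad_token_id=tokenizer.eos_token_id)
    text = tokenizer.decode(out[0], skip_special_tokens=True)
    # Everything after the separator is the candidate fix.
    return text.split(SEP.strip(), 1)[-1].strip()

def exact_match_accuracy(model, tokenizer, test_pairs):
    """Acceptance criterion from the abstract: a generated patch counts as
    correct only if it equals the human-written fix character for character."""
    hits = sum(generate_fix(model, tokenizer, buggy) == fixed.strip()
               for buggy, fixed in test_pairs)
    return hits / len(test_pairs)

# e.g. acc = exact_match_accuracy(trainer.model, tokenizer, held_out_pairs)
```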