LongCoder: A Long-Range Pre-trained Language Model for Code Completion

Daya Guo, Canwen Xu, Nan Duan, Jian Yin, Julian McAuley
{"title":"LongCoder: A Long-Range Pre-trained Language Model for Code Completion","authors":"Daya Guo, Canwen Xu, Nan Duan, Jian Yin, Julian McAuley","doi":"10.48550/arXiv.2306.14893","DOIUrl":null,"url":null,"abstract":"In this paper, we introduce a new task for code completion that focuses on handling long code input and propose a sparse Transformer model, called LongCoder, to address this task. LongCoder employs a sliding window mechanism for self-attention and introduces two types of globally accessible tokens - bridge tokens and memory tokens - to improve performance and efficiency. Bridge tokens are inserted throughout the input sequence to aggregate local information and facilitate global interaction, while memory tokens are included to highlight important statements that may be invoked later and need to be memorized, such as package imports and definitions of classes, functions, or structures. We conduct experiments on a newly constructed dataset that contains longer code context and the publicly available CodeXGLUE benchmark. Experimental results demonstrate that LongCoder achieves superior performance on code completion tasks compared to previous models while maintaining comparable efficiency in terms of computational resources during inference. All the codes and data are available at https://github.com/microsoft/CodeBERT.","PeriodicalId":74529,"journal":{"name":"Proceedings of the ... International Conference on Machine Learning. International Conference on Machine Learning","volume":"13 1","pages":"12098-12107"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... International Conference on Machine Learning. International Conference on Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2306.14893","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

In this paper, we introduce a new task for code completion that focuses on handling long code input and propose a sparse Transformer model, called LongCoder, to address this task. LongCoder employs a sliding window mechanism for self-attention and introduces two types of globally accessible tokens - bridge tokens and memory tokens - to improve performance and efficiency. Bridge tokens are inserted throughout the input sequence to aggregate local information and facilitate global interaction, while memory tokens are included to highlight important statements that may be invoked later and need to be memorized, such as package imports and definitions of classes, functions, or structures. We conduct experiments on a newly constructed dataset that contains longer code context and the publicly available CodeXGLUE benchmark. Experimental results demonstrate that LongCoder achieves superior performance on code completion tasks compared to previous models while maintaining comparable efficiency in terms of computational resources during inference. All the code and data are available at https://github.com/microsoft/CodeBERT.
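To make the attention pattern concrete, below is a minimal sketch of how such a sparse mask could be assembled: a causal sliding window plus columns for globally accessible bridge/memory tokens. This is an illustration of the general technique, not the authors' actual implementation; the function name build_sparse_mask, its parameters, and the chosen window size are all hypothetical.

    # Minimal sketch (assumed, not LongCoder's released code) of a sparse
    # causal attention mask: local sliding window + global token columns.
    import torch

    def build_sparse_mask(seq_len: int, window: int,
                          global_idx: torch.Tensor) -> torch.Tensor:
        """Return a boolean [seq_len, seq_len] mask where True = may attend.

        window     -- each token attends to itself and the previous
                      window - 1 tokens (causal local attention).
        global_idx -- positions of bridge/memory tokens; every later token
                      may attend to them regardless of distance.
        """
        pos = torch.arange(seq_len)
        dist = pos[:, None] - pos[None, :]          # dist[i, j] = i - j
        # Causal sliding window: allow 0 <= i - j < window.
        mask = (dist >= 0) & (dist < window)
        # Global columns: any token may attend to an earlier global token.
        is_global = torch.zeros(seq_len, dtype=torch.bool)
        is_global[global_idx] = True
        mask |= is_global[None, :] & (dist >= 0)
        return mask

    # Example: 16 tokens, window of 4, global tokens at positions 0 and 8.
    mask = build_sparse_mask(16, 4, torch.tensor([0, 8]))
    scores = torch.randn(16, 16).masked_fill(~mask, float("-inf"))
    attn = torch.softmax(scores, dim=-1)  # rows normalize over allowed slots

Because each row attends to at most window + |global_idx| positions, the per-token cost grows linearly rather than quadratically with sequence length, which is what lets this family of models handle long code inputs at inference cost comparable to a dense model on short inputs.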