{"title":"An empirical study of best practices for code pre-trained models on software engineering classification tasks","authors":"Yu Zhao, Lina Gong, Yaoshen Yu, Zhiqiu Huang, Mingqiang Wei","doi":"10.1016/j.eswa.2025.126762","DOIUrl":null,"url":null,"abstract":"<div><div>Tackling code-specific classification challenges like detecting code vulnerabilities and identifying code clones is pivotal in software engineering (SE) practice. The utilization of pre-trained models (PTMs) from the natural language processing (NLP) field shows profound benefits in text classification by generating contextual token embeddings. Similarly, for code-specific classification tasks, there is a growing trend among researchers and practitioners to leverage code-oriented PTMs to create embeddings for code snippets or directly apply the code PTMs to the downstream tasks based on the pre-training and fine-tuning paradigm. Nonetheless, we observe that SE researchers and practitioners often treat the code and text in the same way as NLP strategies when employing these code PTMs. However, despite previous studies in the SE field indicating similarities between programming languages and natural languages, it may not be entirely appropriate for current researchers to directly apply NLP knowledge to assume similar behavior in code. Therefore, in order to derive best practices for researchers and practitioners to use code PTMs for SE classification tasks, we first conduct an empirical analysis on six distinct code PTMs, namely CodeBERT, StarEncoder, CodeT5, PLBART, CodeGPT, and CodeGen, across three architectural frameworks (encoder-only, decoder-only, and encoder–decoder) in the context of four SE classification tasks: code vulnerability detection, code clone identification, just-in-time defect prediction, and function docstring mismatch detection under two scenarios of code embedding and task model. Our findings reveal several insights on the use of code PTMs for code-specific classification tasks endeavors: (1) Emphasizing the vector representation of individual code tokens leads to better code embedding quality and task model performance than those generated through specific tokens techniques in both the code embedding scenario and task model scenario. (2) Larger-sized code PTMs do not necessarily lead to superior code embedding quality in the code embedding scenario and better task performance in the task model scenario. (3) Adopting the ways to handle code and text data same as the pre-training phrase cannot guarantee the acquisition of high-quality code embeddings in the code embedding scenario while in the task model scenario, it can most likely acquire better task performance.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"272 ","pages":"Article 126762"},"PeriodicalIF":7.5000,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425003847","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Tackling code-specific classification challenges, such as detecting code vulnerabilities and identifying code clones, is pivotal in software engineering (SE) practice. Pre-trained models (PTMs) from the natural language processing (NLP) field have shown substantial benefits in text classification by generating contextual token embeddings. Similarly, for code-specific classification tasks, researchers and practitioners increasingly leverage code-oriented PTMs either to create embeddings for code snippets or to apply the PTMs directly to downstream tasks under the pre-training and fine-tuning paradigm. We observe, however, that SE researchers and practitioners often handle code and text with the same strategies used in NLP when employing these code PTMs. Although previous studies in the SE field indicate similarities between programming languages and natural languages, it may not be appropriate to directly apply NLP knowledge and assume similar behavior in code. Therefore, to derive best practices for researchers and practitioners using code PTMs for SE classification tasks, we first conduct an empirical analysis of six distinct code PTMs, namely CodeBERT, StarEncoder, CodeT5, PLBART, CodeGPT, and CodeGen, spanning three architectures (encoder-only, decoder-only, and encoder–decoder), on four SE classification tasks: code vulnerability detection, code clone identification, just-in-time defect prediction, and function docstring mismatch detection, under two usage scenarios: code embedding and task model. Our findings reveal several insights into the use of code PTMs for code-specific classification tasks: (1) Emphasizing the vector representations of individual code tokens yields better code embedding quality and task model performance than embeddings generated through special-token techniques, in both the code embedding scenario and the task model scenario. (2) Larger code PTMs do not necessarily produce superior code embedding quality in the code embedding scenario or better task performance in the task model scenario. (3) Handling code and text data in the same way as during the pre-training phase does not guarantee high-quality code embeddings in the code embedding scenario, whereas in the task model scenario it is likely to yield better task performance.
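To make the two usage scenarios concrete, below is a minimal sketch of the code embedding scenario, assuming the public microsoft/codebert-base checkpoint and the Hugging Face transformers library; it is an illustration of the two embedding strategies the abstract contrasts (pooling individual token vectors versus relying on a single special token), not the authors' exact pipeline.

```python
# Sketch: two ways to turn a code snippet into a fixed-size embedding.
# CodeBERT is RoBERTa-based, so its <s> token plays the [CLS] role.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)

# Strategy 1: mean-pool the individual token vectors (finding (1) suggests
# this yields higher-quality snippet embeddings), masking out padding.
mask = inputs["attention_mask"].unsqueeze(-1)          # (1, seq_len, 1)
mean_embedding = (hidden * mask).sum(1) / mask.sum(1)  # (1, 768)

# Strategy 2: use only the special first token as the snippet embedding.
cls_embedding = hidden[:, 0, :]                        # (1, 768)
```

The task model scenario instead fine-tunes the PTM end-to-end with a classification head. A hedged sketch follows, again assuming the same checkpoint; the snippets and labels are illustrative only and stand in for a real vulnerability-detection dataset.

```python
# Sketch: fine-tuning a code PTM for binary classification
# (e.g., vulnerable vs. not vulnerable).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=2  # randomly initialized head
)

batch = tokenizer(
    ["strcpy(dst, src);", "strncpy(dst, src, sizeof(dst) - 1);"],
    return_tensors="pt", padding=True, truncation=True,
)
labels = torch.tensor([1, 0])  # illustrative labels

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # one gradient step of standard fine-tuning
```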
Journal overview:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.