Curriculum Learning for Small Code Language Models

Marwa Naïr, Kamel Yamani, Lynda Said Lhadj, Riyadh Baghdadi

arXiv - CS - Programming Languages · Published 2024-07-14 · DOI: arxiv-2407.10194
Citations: 0
Abstract
Code language models have emerged as useful tools for various programming
tasks, yet they often struggle when it comes to complex ones. In this paper, we
explore the potential of curriculum learning in enhancing the performance of
these models. While prior research has suggested that curriculum learning does
not necessarily help in improving the performance of language models, our
results surprisingly show that this may not be the case for code language
models. We demonstrate that a well-designed curriculum learning approach
significantly improves the accuracy of small decoder-only code language models
on the task of code execution, while its effect on code completion is less
significant. To explore the potential of curriculum learning, we train multiple
GPT models with 1 million parameters each to predict the next token and
evaluate them on code completion and execution tasks. Our contributions include
proposing a novel code difficulty assessment metric by combining software code
measures, investigating the effectiveness of curriculum learning for code
language models, and introducing a novel curriculum learning schedule that
enhances the performance of small decoder-only language models in code
execution tasks. The results of this paper open the door for more research on
the use of curriculum learning for code language models.
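The abstract's core idea, scoring training samples with a difficulty metric built from software code measures and feeding them to the model easy-to-hard, can be sketched as follows. This is an illustrative approximation, not the paper's actual metric: the specific measures (lines of code, nesting depth, branch count) and their weights are assumptions chosen for the example.

```python
def difficulty(code: str) -> float:
    """Combine simple, hypothetical code measures into one difficulty score.

    The real metric in the paper combines software code measures; the
    three proxies and weights below are placeholders for illustration.
    """
    lines = [l for l in code.splitlines() if l.strip()]
    loc = len(lines)  # lines of code
    # Nesting depth approximated from indentation (4 spaces per level).
    max_depth = max((len(l) - len(l.lstrip())) // 4 for l in lines) if lines else 0
    # Rough count of branching constructs.
    branches = sum(l.strip().startswith(("if", "for", "while")) for l in lines)
    # Arbitrary weights for the sketch.
    return 1.0 * loc + 2.0 * max_depth + 3.0 * branches


def curriculum_order(samples):
    """Sort training samples easy-to-hard by the difficulty score."""
    return sorted(samples, key=difficulty)


snippets = [
    "for i in range(3):\n    if i > 1:\n        print(i)",
    "x = 1",
    "print('hi')\nprint('bye')",
]
ordered = curriculum_order(snippets)  # simplest snippet first
```

A curriculum schedule would then present batches drawn from the front of `ordered` early in training and progressively include harder samples, rather than sampling uniformly.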