DOCE: Finding the Sweet Spot for Execution-Based Code Generation
Haau-Sing Li, Patrick Fernandes, Iryna Gurevych, André F. T. Martins
arXiv:2408.13745 (arXiv - CS - Programming Languages), published 2024-08-25
Abstract
Recently, a diverse set of decoding and reranking procedures has been shown to be effective for LLM-based code generation. However, a comprehensive framework that links and experimentally compares these methods is missing. We address this by proposing Decoding Objectives for Code Execution (DOCE), a comprehensive framework whose core components are candidate generation, $n$-best reranking, minimum Bayes risk (MBR) decoding, and self-debugging. We then study the contributions of these components through execution-based evaluation metrics. Our findings highlight the importance of execution-based methods and the performance gap between execution-based and execution-free methods. Furthermore, we assess the impact of filtering based on trial unit tests, a simple and effective strategy that has often been overlooked in prior work. We also propose self-debugging on multiple candidates, obtaining state-of-the-art reranking performance for code generation. We expect our framework to provide solid guidelines for future research on code generation.
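To make two of the framework's components concrete, below is a minimal, hypothetical sketch (not the paper's actual implementation) of filtering candidates on trial unit tests and then performing execution-based MBR selection among the survivors, using exact-match agreement of execution outputs as a simple stand-in for the MBR utility. The helpers `passes_trial_tests` and `run_candidate` are assumed to be supplied by the caller and are named here purely for illustration.

```python
# Hedged sketch: trial-test filtering plus execution-based MBR selection
# over LLM-generated code candidates. All helper names are hypothetical.

from collections import Counter
from typing import Any, Callable, List, Sequence


def filter_by_trial_tests(
    candidates: List[str],
    passes_trial_tests: Callable[[str], bool],
) -> List[str]:
    """Keep only candidates that pass the visible trial unit tests."""
    kept = [c for c in candidates if passes_trial_tests(c)]
    # Fall back to the full pool if no candidate passes.
    return kept or candidates


def mbr_select(
    candidates: List[str],
    run_candidate: Callable[[str, Any], Any],
    trial_inputs: Sequence[Any],
) -> str:
    """Pick the candidate whose execution outputs agree most with the others."""
    # Execute every candidate on every trial input.
    outputs = [
        tuple(run_candidate(c, x) for x in trial_inputs) for c in candidates
    ]
    # Score each candidate by how many *other* candidates produce the
    # same output tuple (a crude execution-agreement utility).
    counts = Counter(outputs)
    scores = [counts[o] - 1 for o in outputs]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best]
```

A typical usage would be to sample many candidates from the model, call filter_by_trial_tests to discard those failing the trial tests, and then apply mbr_select to the survivors; self-debugging, as described in the abstract, would instead revise multiple candidates before this selection step.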