{"title":"基于检索和生成联合学习的跨模态检索增强代码摘要","authors":"Lixuan Li , Bin Liang , Lin Chen , Xiaofang Zhang","doi":"10.1016/j.infsof.2024.107527","DOIUrl":null,"url":null,"abstract":"<div><h3>Context:</h3><p>Code summarization refers to a task that automatically generates a natural language description of a code snippet to facilitate code comprehension. Existing methods have achieved satisfactory results by incorporating information retrieval into generative deep-learning models for reusing summaries of existing code. However, most of these existing methods employed non-learnable generic retrieval methods for content-based retrieval, resulting in a lack of diversity in the retrieved results during training, thereby making the model over-reliant on retrieved results and reducing the generative model’s ability to generalize to unknown samples.</p></div><div><h3>Objective:</h3><p>To address this issue, this paper introduces CMR-Sum: a novel Cross-Modal Retrieval-enhanced code Summarization framework based on joint learning for generation and retrieval tasks, where both two tasks are allowed to be optimized simultaneously.</p></div><div><h3>Method:</h3><p>Specifically, we use a cross-modal retrieval module to dynamically alter retrieval results during training, which enhances the diversity of the retrieved results and maintains a relative balance between the two tasks. Furthermore, in the summary generation phase, we employ a cross-attention mechanism to generate code summaries based on the alignment between retrieved and generated summaries. We conducted experiments on three real-world datasets, comparing the performance of our method with baseline models. Additionally, we performed extensive qualitative analysis.</p></div><div><h3>Result:</h3><p>Results from qualitative and quantitative experiments indicate that our approach effectively enhances the performance of code summarization. Our method outperforms both the generation-based and the retrieval-enhanced baselines. 
Further ablation experiments demonstrate the effectiveness of each component of our method. Results from sensitivity analysis experiments suggest that our approach achieves good performance without requiring extensive hyper-parameter search.</p></div><div><h3>Conclusion:</h3><p>The direction of utilizing retrieval-enhanced generation tasks shows great potential. It is essential to increase the diversity of retrieval results during the training process, which is crucial for improving the generality and the performance of the model.</p></div>","PeriodicalId":54983,"journal":{"name":"Information and Software Technology","volume":"175 ","pages":"Article 107527"},"PeriodicalIF":3.8000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-Modal Retrieval-enhanced code Summarization based on joint learning for retrieval and generation\",\"authors\":\"Lixuan Li , Bin Liang , Lin Chen , Xiaofang Zhang\",\"doi\":\"10.1016/j.infsof.2024.107527\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Context:</h3><p>Code summarization refers to a task that automatically generates a natural language description of a code snippet to facilitate code comprehension. Existing methods have achieved satisfactory results by incorporating information retrieval into generative deep-learning models for reusing summaries of existing code. 
However, most of these existing methods employed non-learnable generic retrieval methods for content-based retrieval, resulting in a lack of diversity in the retrieved results during training, thereby making the model over-reliant on retrieved results and reducing the generative model’s ability to generalize to unknown samples.</p></div><div><h3>Objective:</h3><p>To address this issue, this paper introduces CMR-Sum: a novel Cross-Modal Retrieval-enhanced code Summarization framework based on joint learning for generation and retrieval tasks, where both two tasks are allowed to be optimized simultaneously.</p></div><div><h3>Method:</h3><p>Specifically, we use a cross-modal retrieval module to dynamically alter retrieval results during training, which enhances the diversity of the retrieved results and maintains a relative balance between the two tasks. Furthermore, in the summary generation phase, we employ a cross-attention mechanism to generate code summaries based on the alignment between retrieved and generated summaries. We conducted experiments on three real-world datasets, comparing the performance of our method with baseline models. Additionally, we performed extensive qualitative analysis.</p></div><div><h3>Result:</h3><p>Results from qualitative and quantitative experiments indicate that our approach effectively enhances the performance of code summarization. Our method outperforms both the generation-based and the retrieval-enhanced baselines. Further ablation experiments demonstrate the effectiveness of each component of our method. Results from sensitivity analysis experiments suggest that our approach achieves good performance without requiring extensive hyper-parameter search.</p></div><div><h3>Conclusion:</h3><p>The direction of utilizing retrieval-enhanced generation tasks shows great potential. 
It is essential to increase the diversity of retrieval results during the training process, which is crucial for improving the generality and the performance of the model.</p></div>\",\"PeriodicalId\":54983,\"journal\":{\"name\":\"Information and Software Technology\",\"volume\":\"175 \",\"pages\":\"Article 107527\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2024-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information and Software Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950584924001320\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Software Technology","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950584924001320","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Cross-Modal Retrieval-enhanced code Summarization based on joint learning for retrieval and generation
Context:
Code summarization is the task of automatically generating a natural-language description of a code snippet to facilitate code comprehension. Existing methods have achieved satisfactory results by incorporating information retrieval into generative deep-learning models so that summaries of existing code can be reused. However, most of these methods rely on non-learnable, generic content-based retrieval, so the retrieved results lack diversity during training; the model then becomes over-reliant on the retrieved results, which reduces the generative model's ability to generalize to unseen samples.
Objective:
To address this issue, this paper introduces CMR-Sum, a novel Cross-Modal Retrieval-enhanced code Summarization framework based on joint learning of the generation and retrieval tasks, in which the two tasks are optimized simultaneously.
Method:
Specifically, we use a cross-modal retrieval module to dynamically update the retrieval results during training, which enhances the diversity of the retrieved results and maintains a relative balance between the two tasks. Furthermore, in the summary generation phase, we employ a cross-attention mechanism that generates code summaries based on the alignment between the retrieved and generated summaries. We conducted experiments on three real-world datasets, comparing our method against baseline models, and additionally performed extensive qualitative analysis.
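The abstract does not include an implementation, but the cross-attention alignment it describes can be illustrated with a minimal single-head scaled dot-product attention sketch, where decoder states for the generated summary attend over token embeddings of the retrieved summary. All function names, shapes, and dimensions below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(generated, retrieved, d_k):
    """Fuse retrieved-summary information into the generated states.

    generated: (T_gen, d) decoder hidden states (queries)
    retrieved: (T_ret, d) embeddings of the retrieved summary (keys/values)
    Returns a (T_gen, d) context aligned to each generated token.
    """
    scores = generated @ retrieved.T / np.sqrt(d_k)  # (T_gen, T_ret)
    weights = softmax(scores, axis=-1)               # alignment weights
    return weights @ retrieved                       # weighted retrieved context

rng = np.random.default_rng(0)
d = 8
gen = rng.standard_normal((5, d))  # 5 generated-token states (hypothetical)
ret = rng.standard_normal((7, d))  # 7 retrieved-summary token embeddings
ctx = cross_attention(gen, ret, d)
print(ctx.shape)  # (5, 8)
```

In a full model each generated token would consume this context alongside its own hidden state when predicting the next summary word; the sketch only shows the alignment step itself.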
Result:
Results from qualitative and quantitative experiments indicate that our approach effectively improves code summarization performance. Our method outperforms both generation-based and retrieval-enhanced baselines. Ablation experiments further demonstrate the effectiveness of each component of our method, and sensitivity analyses suggest that it achieves good performance without requiring extensive hyper-parameter search.
Conclusion:
Retrieval-enhanced generation is a promising direction for code summarization. Increasing the diversity of retrieval results during training is crucial for improving both the generality and the performance of the model.
Journal overview:
Information and Software Technology is an international archival journal focusing on research and experience that contribute to the improvement of software development practices. The journal's scope includes methods and techniques for better engineering software and managing its development. Articles submitted for review should have a clear software engineering component or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, negative results, and more. Read the Guide for Authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within its scope. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.