A Practice of Post-Training on Llama-3 70B with Optimal Selection of Additional Language Mixture Ratio

Ningyuan Xi, Yetao Wu, Kun Fan, Teng Chen, Qingqing Gu, Peng Yu, Jinxian Qu, Chenxi Liu, Zhonglin Jiang, Yong Chen, Luo Ji
{"title":"优化选择附加语言混合比例的 Llama-3 70B 后期培训实践","authors":"Ningyuan Xi, Yetao Wu, Kun Fan, Teng Chen, Qingqing Gu, Peng Yu, Jinxian Qu, Chenxi Liu, Zhonglin Jiang, Yong Chen, Luo Ji","doi":"arxiv-2409.06624","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLM) often needs to be Continual Pre-Trained (CPT) to\nobtain the unfamiliar language skill or adapt into new domains. The huge\ntraining cost of CPT often asks for cautious choice of key hyper-parameters\nsuch as the mixture ratio of extra language or domain corpus. However, there is\nno systematic study which bridge the gap between the optimal mixture ratio and\nthe actual model performance, and the gap between experimental scaling law and\nthe actual deployment in the full model size. In this paper, we perform CPT on\nLlama-3 8B and 70B to enhance its Chinese ability. We study the optimal\ncorrelation between the Additional Language Mixture Ratio (ALMR) and the\nLearning Rate (LR) on the 8B size which directly indicate the optimal\nexperimental set up. By thorough choice of hyper-parameter, and subsequent\nfine-tuning, the model capability is improved not only on the Chinese-related\nbenchmark, but also some specific domains including math, coding and emotional\nintelligence. We deploy the final 70B version of LLM on an real-life chat\nsystem which obtain satisfying performance.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Practice of Post-Training on Llama-3 70B with Optimal Selection of Additional Language Mixture Ratio\",\"authors\":\"Ningyuan Xi, Yetao Wu, Kun Fan, Teng Chen, Qingqing Gu, Peng Yu, Jinxian Qu, Chenxi Liu, Zhonglin Jiang, Yong Chen, Luo Ji\",\"doi\":\"arxiv-2409.06624\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large Language Models (LLM) often needs to be Continual Pre-Trained (CPT) to\\nobtain the unfamiliar language skill or adapt into new domains. The huge\\ntraining cost of CPT often asks for cautious choice of key hyper-parameters\\nsuch as the mixture ratio of extra language or domain corpus. However, there is\\nno systematic study which bridge the gap between the optimal mixture ratio and\\nthe actual model performance, and the gap between experimental scaling law and\\nthe actual deployment in the full model size. In this paper, we perform CPT on\\nLlama-3 8B and 70B to enhance its Chinese ability. We study the optimal\\ncorrelation between the Additional Language Mixture Ratio (ALMR) and the\\nLearning Rate (LR) on the 8B size which directly indicate the optimal\\nexperimental set up. By thorough choice of hyper-parameter, and subsequent\\nfine-tuning, the model capability is improved not only on the Chinese-related\\nbenchmark, but also some specific domains including math, coding and emotional\\nintelligence. 
We deploy the final 70B version of LLM on an real-life chat\\nsystem which obtain satisfying performance.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06624\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06624","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Large Language Models (LLMs) often need Continual Pre-Training (CPT) to acquire unfamiliar language skills or adapt to new domains. The huge training cost of CPT calls for a careful choice of key hyper-parameters, such as the mixture ratio of the additional language or domain corpus. However, no systematic study has bridged the gap between the optimal mixture ratio and the actual model performance, or between scaling laws obtained in small-scale experiments and actual deployment at the full model size. In this paper, we perform CPT on Llama-3 8B and 70B to enhance their Chinese ability. We study the optimal correlation between the Additional Language Mixture Ratio (ALMR) and the Learning Rate (LR) at the 8B size, which directly indicates the optimal experimental setup. Through careful hyper-parameter selection and subsequent fine-tuning, the model's capability improves not only on Chinese-related benchmarks but also in specific domains including math, coding, and emotional intelligence. Finally, we deploy the 70B version of the model in a real-life chat system and obtain satisfying performance.
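The key hyper-parameter in the abstract, the Additional Language Mixture Ratio (ALMR), is the fraction of CPT training data drawn from the additional-language corpus. The paper does not release its training code, so the following is only a minimal sketch of how such a ratio could be applied when sampling a mixed corpus; `mixed_corpus_sampler` and the example ratio of 0.3 are illustrative assumptions, not the authors' implementation.

```python
import random

def mixed_corpus_sampler(base_corpus, extra_corpus, almr, seed=0):
    """Yield documents from two corpora, drawing from the additional-language
    corpus with probability `almr` (the Additional Language Mixture Ratio)
    and from the base corpus otherwise.

    Sampling stops when either corpus is exhausted; in real CPT both would be
    effectively unlimited streams of packed pre-training documents.
    """
    rng = random.Random(seed)
    base_it, extra_it = iter(base_corpus), iter(extra_corpus)
    while True:
        source = extra_it if rng.random() < almr else base_it
        try:
            yield next(source)
        except StopIteration:
            return  # one of the corpora ran out

# Illustrative usage: with ALMR = 0.3, roughly 30% of sampled documents come
# from the additional-language (e.g. Chinese) corpus and 70% from the base mix.
english_docs = ["an English document ..."] * 7
chinese_docs = ["一篇中文文档 ..."] * 3
for doc in mixed_corpus_sampler(english_docs, chinese_docs, almr=0.3):
    pass  # feed `doc` into tokenization / packing for continual pre-training
```

In the paper's setting, this ratio is tuned jointly with the learning rate on the 8B model before being transferred to the 70B run; the sampler above only illustrates what the ratio controls at the data level.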