A curriculum learning approach to training antibody language models.

Sarah M Burbach, Bryan Briney
{"title":"A curriculum learning approach to training antibody language models.","authors":"Sarah M Burbach, Bryan Briney","doi":"10.1101/2025.02.27.640641","DOIUrl":null,"url":null,"abstract":"<p><p>There is growing interest in pre-training antibody language models (<b>AbLMs</b>) with a mixture of unpaired and natively paired sequences, seeking to combine the proven benefits of training with natively paired sequences with the massive scale of unpaired antibody sequence datasets. However, given the novelty of this strategy, the field lacks a systematic evaluation of data processing methods and training strategies that maximize the benefits of mixed training data while accommodating the significant imbalance in the size of existing paired and unpaired datasets. Here we introduce a method of curriculum learning for AbLMs, which facilitates a gradual transition from unpaired to paired sequences during training. We optimize this method and show that a 650M-parameter curriculum model, CurrAb, outperforms existing mixed AbLMs in downstream classification tasks.</p>","PeriodicalId":519960,"journal":{"name":"bioRxiv : the preprint server for biology","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888446/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"bioRxiv : the preprint server for biology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2025.02.27.640641","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

There is growing interest in pre-training antibody language models (AbLMs) with a mixture of unpaired and natively paired sequences, seeking to combine the proven benefits of training with natively paired sequences with the massive scale of unpaired antibody sequence datasets. However, given the novelty of this strategy, the field lacks a systematic evaluation of data processing methods and training strategies that maximize the benefits of mixed training data while accommodating the significant imbalance in the size of existing paired and unpaired datasets. Here we introduce a method of curriculum learning for AbLMs, which facilitates a gradual transition from unpaired to paired sequences during training. We optimize this method and show that a 650M-parameter curriculum model, CurrAb, outperforms existing mixed AbLMs in downstream classification tasks.
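The abstract describes the curriculum as a gradual transition from unpaired to paired sequences during training, but does not specify the schedule. As a rough illustration only, the sketch below shows one way such a curriculum could be implemented: the per-batch fraction of natively paired sequences ramps up over training. The linear ramp, the `<sep>` pairing token, the pool sizes, and all function names are hypothetical, not taken from the paper.

```python
import random

def paired_fraction(step: int, total_steps: int,
                    start: float = 0.0, end: float = 1.0) -> float:
    """Fraction of natively paired sequences to draw at a given step.

    A simple linear ramp is assumed here for illustration; the actual
    transition schedule is the quantity the paper optimizes.
    """
    progress = min(max(step / total_steps, 0.0), 1.0)
    return start + (end - start) * progress

def sample_batch(step, total_steps, unpaired_pool, paired_pool, batch_size=8):
    """Draw one mixed batch whose paired/unpaired ratio follows the curriculum."""
    n_paired = round(batch_size * paired_fraction(step, total_steps))
    batch = random.sample(paired_pool, n_paired)
    batch += random.sample(unpaired_pool, batch_size - n_paired)
    random.shuffle(batch)
    return batch

# Toy usage: early batches are mostly unpaired, late batches mostly paired.
unpaired = [f"VH-{i}" for i in range(1000)]          # unpaired heavy chains
paired = [f"VH-{i}<sep>VL-{i}" for i in range(200)]  # natively paired H/L chains
for step in (0, 500, 1000):
    batch = sample_batch(step, total_steps=1000,
                         unpaired_pool=unpaired, paired_pool=paired)
    print(f"step {step}: {sum('<sep>' in s for s in batch)} paired of {len(batch)}")
```

Under a schedule like this, early batches are drawn almost entirely from the much larger unpaired pool, and paired sequences appear with increasing frequency rather than in one abrupt switch, which matches the gradual transition the abstract describes.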
