How well do contextual protein encodings learn structure, function, and evolutionary context?
Sai Pooja Mahajan, Fátima A Dávila-Hernández, Jeffrey A Ruffolo, Jeffrey J Gray
Cell Systems, 2025-02-28. DOI: 10.1016/j.cels.2025.101201
Abstract
In proteins, the optimal residue at any position is determined by its structural, evolutionary, and functional contexts, much like how a word may be inferred from its context in language. We trained masked label prediction models to learn representations of amino acid residues in different contexts. We focus our questions on evolution and structural flexibility, and on whether and how contextual encodings derived through pretraining and fine-tuning may improve representations for specialized contexts. Sequences sampled from our learned representations fold into the template structure and reflect sequence variations seen in related proteins. For flexible proteins, sampled sequences traverse the full conformational space of the native sequence, suggesting that plasticity is encoded in the template structure. For protein-protein interfaces, generated sequences replicate wild-type binding energies across diverse interfaces and binding strengths in silico. For the antibody-antigen interface, fine-tuning recapitulates conserved sequence patterns, while pretraining on general contexts improves sequence recovery for the hypervariable H3 loop. A record of this paper's transparent peer review process is included in the supplemental information.
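The abstract describes masked label prediction over amino acid residues. The sketch below is a minimal, hypothetical illustration of that general technique (mask a fraction of residues, train a small encoder to recover the true labels); it is not the authors' actual architecture, and all names and hyperparameters here are assumptions for illustration only.

```python
# Minimal sketch of masked residue ("masked label") prediction.
# Hypothetical model and settings; not the architecture from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # 20 standard residues
MASK_ID = len(AMINO_ACIDS)             # extra token used for masking
VOCAB_SIZE = len(AMINO_ACIDS) + 1

class MaskedResidueModel(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, len(AMINO_ACIDS))  # logits over the 20 residues

    def forward(self, tokens):
        # tokens: (batch, length) integer residue indices
        return self.head(self.encoder(self.embed(tokens)))

def masked_loss(model, seqs, mask_frac=0.15):
    """Mask a random fraction of positions and score recovery of the true residues."""
    mask = torch.rand(seqs.shape) < mask_frac
    corrupted = seqs.masked_fill(mask, MASK_ID)
    logits = model(corrupted)
    return F.cross_entropy(logits[mask], seqs[mask])

# Toy usage: a batch of 8 random "sequences" of length 50.
model = MaskedResidueModel()
seqs = torch.randint(0, len(AMINO_ACIDS), (8, 50))
loss = masked_loss(model, seqs)
loss.backward()
```

In the paper's setting, the residue representations also condition on structural or interface context rather than sequence alone; this toy version conditions only on the surrounding sequence to keep the example self-contained.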