{"title":"Are protein language models the new universal key?","authors":"Konstantin Weissenow , Burkhard Rost","doi":"10.1016/j.sbi.2025.102997","DOIUrl":null,"url":null,"abstract":"<div><div>Protein language models (pLMs) capture some aspects of the grammar of the language of life as written in protein sequences. The so-called pLM embeddings implicitly contain this information. Therefore, embeddings can serve as the exclusive input into downstream supervised methods for protein prediction. Over the last 33 years, evolutionary information extracted through simple averaging for specific protein families from multiple sequence alignments (MSAs) has been the most successful universal key to the success of protein prediction. For many applications, MSA-free pLM-based predictions now have become significantly more accurate. The reason for this is often a combination of two aspects. Firstly, embeddings condense the <em>grammar</em> so efficiently that downstream prediction methods succeed with small models, i.e., they need few free parameters in particular in the era of exploding deep neural networks. Secondly, pLM-based methods provide protein-specific solutions. As additional benefit, once the pLM pre-training is complete, pLM-based solutions tend to consume much fewer resources than MSA-based solutions. In fact, we appeal to the community to rather optimize foundation models than to retrain new ones and to evolve incentives for solutions that require fewer resources even at some loss in accuracy. Although pLMs have not, yet, succeeded to entirely replace the body of solutions developed over three decades, they clearly are rapidly advancing as the universal key for protein prediction.</div></div>","PeriodicalId":10887,"journal":{"name":"Current opinion in structural biology","volume":"91 ","pages":"Article 102997"},"PeriodicalIF":6.1000,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Current opinion in structural biology","FirstCategoryId":"99","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0959440X25000156","RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOCHEMISTRY & MOLECULAR BIOLOGY","Score":null,"Total":0}
Citations: 0
Abstract
Protein language models (pLMs) capture some aspects of the grammar of the language of life as written in protein sequences. The so-called pLM embeddings implicitly contain this information. Therefore, embeddings can serve as the exclusive input into downstream supervised methods for protein prediction. Over the last 33 years, evolutionary information, extracted from multiple sequence alignments (MSAs) by simple averaging over specific protein families, has been the most successful universal key to protein prediction. For many applications, MSA-free pLM-based predictions have now become significantly more accurate. The reason is often a combination of two aspects. Firstly, embeddings condense the grammar so efficiently that downstream prediction methods succeed with small models, i.e., models with few free parameters, a notable advantage in the era of exploding deep neural networks. Secondly, pLM-based methods provide protein-specific solutions. As an additional benefit, once pLM pre-training is complete, pLM-based solutions tend to consume far fewer resources than MSA-based solutions. In fact, we appeal to the community to optimize existing foundation models rather than retrain new ones, and to evolve incentives for solutions that require fewer resources, even at some loss in accuracy. Although pLMs have not yet succeeded in entirely replacing the body of solutions developed over three decades, they are clearly advancing rapidly as the universal key for protein prediction.
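To make the workflow the abstract describes concrete, below is a minimal sketch of using pLM embeddings as the exclusive input to a small supervised prediction head. It is not the authors' code; the ESM-2 checkpoint, the mean-pooling step, the two-layer head, and the example sequence are all illustrative assumptions standing in for whatever pLM and downstream task one actually uses.

```python
# Sketch: frozen pLM -> fixed-length embedding -> small supervised head.
# Assumptions: HuggingFace transformers, a small public ESM-2 checkpoint,
# and an arbitrary binary per-protein classification task.
import torch
from transformers import AutoTokenizer, AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Any pre-trained pLM could stand in here; this checkpoint is just small.
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
plm = AutoModel.from_pretrained("facebook/esm2_t6_8M_UR50D").to(device).eval()

@torch.no_grad()
def embed(sequence: str) -> torch.Tensor:
    """Mean-pool per-residue hidden states into one protein embedding."""
    inputs = tokenizer(sequence, return_tensors="pt").to(device)
    hidden = plm(**inputs).last_hidden_state  # shape: (1, length, dim)
    return hidden.mean(dim=1).squeeze(0)      # shape: (dim,)

# The "small model" of the abstract: a plain head with few free parameters,
# trained on embeddings alone -- no MSA is ever computed.
head = torch.nn.Sequential(
    torch.nn.Linear(plm.config.hidden_size, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),  # e.g., a binary per-protein label
).to(device)

emb = embed("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")  # hypothetical test sequence
logits = head(emb)
```

Note that the pLM stays frozen: only the head's few thousand parameters would be trained, which is what keeps such embedding-based solutions cheap once pre-training is done.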
Journal introduction:
Current Opinion in Structural Biology (COSB) aims to stimulate scientifically grounded, interdisciplinary, multi-scale debate and exchange of ideas. It contains polished, concise and timely reviews and opinions, with particular emphasis on those articles published in the past two years. In addition to describing recent trends, the authors are encouraged to give their subjective opinion of the topics discussed.
In COSB, we help the reader by providing in a systematic manner:
1. The views of experts on current advances in their field in a clear and readable form.
2. Evaluations of the most interesting papers, annotated by experts, from the great wealth of original publications.
[...]
The subject of Structural Biology is divided into twelve themed sections, each of which is reviewed once a year. Each issue contains two sections, and the amount of space devoted to each section is related to its importance.
- Folding and Binding
- Nucleic acids and their protein complexes
- Macromolecular Machines
- Theory and Simulation
- Sequences and Topology
- New constructs and expression of proteins
- Membranes
- Engineering and Design
- Carbohydrate-protein interactions and glycosylation
- Biophysical and molecular biological methods
- Multi-protein assemblies in signalling
- Catalysis and Regulation