Zero-shot transfer of protein sequence likelihood models to thermostability prediction
Shawn Reeves, Subha Kalyaanamoorthy
Nature Machine Intelligence (Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
DOI: 10.1038/s42256-024-00887-7
Published: 2024-09-20 (Journal Article)
URL: https://www.nature.com/articles/s42256-024-00887-7
Citations: 0
Abstract
Protein sequence likelihood models (PSLMs) are an emerging class of self-supervised deep learning algorithms that learn probability distributions over amino acid identities conditioned on structural or evolutionary context. Recently, PSLMs have demonstrated impressive performance in predicting the relative fitness of variant sequences without any task-specific training, but their potential to address a central goal of protein engineering—enhancing stability—remains underexplored. Here we comprehensively analyse the capacity for zero-shot transfer of eight PSLMs towards prediction of relative thermostability for variants of hundreds of heterogeneous proteins across several quantitative datasets. PSLMs are compared with popular task-specific stability models, and we show that some PSLMs have competitive performance when the appropriate statistics are considered. We highlight relative strengths and weaknesses of PSLMs and examine their complementarity with task-specific models, specifically focusing our analyses on stability-engineering applications. Our results indicate that all PSLMs can appreciably augment the predictions of existing methods by integrating insights from their disparate training objectives, suggesting a path forward in the stagnating field of computational stability prediction.

Stabilization of proteins is a key task in protein engineering; however, current methods to predict mutant stability face a number of limitations. Reeves and Kalyaanamoorthy study the performance of self-supervised protein sequence likelihood models for stability prediction and find that combining them with task-specific supervised models can lead to appreciable practical gains.
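The zero-shot protocol the abstract describes amounts to scoring a substitution by the model's log-likelihood ratio between the mutant and wild-type residue at the mutated position, and the reported complementarity with task-specific models suggests blending the two scores. A minimal sketch of both ideas follows; the probability values, the `log_probs` mapping, and the blending weight are illustrative assumptions, not the paper's actual models or pipeline.

```python
import math

# Stand-in for a PSLM's output: per-position log-probabilities over amino
# acids, as a masked language model would emit after masking each position
# of the wild-type sequence. Keys are (position, residue); values here are
# made up for illustration.
log_probs = {
    (42, "A"): math.log(0.50),  # wild-type residue at position 42
    (42, "V"): math.log(0.10),  # candidate mutant residue
}

def zero_shot_score(log_probs, position, wt_res, mut_res):
    """Zero-shot variant score: log P(mut) - log P(wt) at one position.

    Positive values mean the model prefers the mutant residue; for
    thermostability prediction these scores are correlated against
    measured stability changes, with no task-specific training.
    """
    return log_probs[(position, mut_res)] - log_probs[(position, wt_res)]

def blend_scores(pslm_score, supervised_score, weight=0.5):
    """One simple way to combine a PSLM score with a task-specific model's
    score (both assumed pre-normalized to a comparable scale); a hedged
    stand-in for the augmentation the paper reports, not its method.
    """
    return weight * pslm_score + (1.0 - weight) * supervised_score

score = zero_shot_score(log_probs, 42, "A", "V")
# Negative here: the model favors the wild-type residue at position 42.
combined = blend_scores(score, supervised_score=-1.0, weight=0.5)
```

In practice the log-probabilities would come from one of the eight PSLMs evaluated in the paper, and the blend weight would be tuned on held-out stability data.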
Journal description:
Nature Machine Intelligence is a distinguished publication that presents original research and reviews on various topics in machine learning, robotics, and AI. Our focus extends beyond these fields, exploring their profound impact on other scientific disciplines, as well as societal and industrial aspects. We recognize limitless possibilities wherein machine intelligence can augment human capabilities and knowledge in domains like scientific exploration, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation, and agriculture. Simultaneously, we acknowledge the emergence of ethical, social, and legal concerns due to the rapid pace of advancements.
To foster interdisciplinary discussions on these far-reaching implications, Nature Machine Intelligence serves as a platform for dialogue facilitated through Comments, News Features, News & Views articles, and Correspondence. Our goal is to encourage a comprehensive examination of these subjects.
Like all Nature-branded journals, Nature Machine Intelligence operates under the guidance of a team of skilled editors. We adhere to a fair and rigorous peer-review process, ensuring high standards of copy-editing and production, swift publication, and editorial independence.