ProteinBench: A Holistic Evaluation of Protein Foundation Models

Fei Ye, Zaixiang Zheng, Dongyu Xue, Yuning Shen, Lihao Wang, Yiming Ma, Yan Wang, Xinyou Wang, Xiangxin Zhou, Quanquan Gu
arXiv:2409.06744 · arXiv - QuanBio - Quantitative Methods · Published 2024-09-10

Abstract

Recent years have witnessed a surge in the development of protein foundation models, significantly improving performance in protein prediction and generative tasks ranging from 3D structure prediction and protein design to conformational dynamics. However, the capabilities and limitations of these models remain poorly understood due to the absence of a unified evaluation framework. To fill this gap, we introduce ProteinBench, a holistic evaluation framework designed to enhance the transparency of protein foundation models. Our approach consists of three key components: (i) a taxonomic classification of tasks that broadly encompasses the main challenges in the protein domain, based on the relationships between different protein modalities; (ii) a multi-metric evaluation approach that assesses performance across four key dimensions: quality, novelty, diversity, and robustness; and (iii) in-depth analyses from various user objectives, providing a holistic view of model performance. Our comprehensive evaluation of protein foundation models reveals several key findings that shed light on their current capabilities and limitations. To promote transparency and facilitate further research, we publicly release the evaluation dataset, code, a public leaderboard, and a general modular toolkit. We intend for ProteinBench to be a living benchmark that establishes a standardized, in-depth evaluation framework for protein foundation models, driving their development and application while fostering collaboration within the field.
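The multi-metric approach in component (ii) scores each model along four dimensions and then compares models on the combined result. As a minimal sketch of that idea (not the paper's actual metrics or weighting scheme — the model names, score values, and unweighted-mean aggregation below are all illustrative assumptions; the real definitions live in the released toolkit):

```python
from statistics import mean

# Hypothetical per-dimension scores in [0, 1] for two illustrative models.
scores = {
    "model_A": {"quality": 0.82, "novelty": 0.55, "diversity": 0.70, "robustness": 0.64},
    "model_B": {"quality": 0.76, "novelty": 0.68, "diversity": 0.74, "robustness": 0.59},
}

def aggregate(dims: dict) -> float:
    """Combine the four dimensions with an unweighted mean (an assumption,
    not ProteinBench's scheme)."""
    return mean(dims.values())

# Rank models for a leaderboard-style comparison, best first.
leaderboard = sorted(scores, key=lambda m: aggregate(scores[m]), reverse=True)
for model in leaderboard:
    print(f"{model}: {aggregate(scores[model]):.3f}")
```

A single aggregate hides trade-offs the paper emphasizes (e.g., a model strong on quality but weak on novelty); reporting the per-dimension scores alongside any combined ranking preserves that information for different user objectives.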