On the design space between molecular mechanics and machine learning force fields

Yuanqing Wang, Kenichiro Takaba, Michael S. Chen, Marcus Wieder, Yuzhi Xu, John Z. H. Zhang, Kuang Yu, Xinyan Wang, Linfeng Zhang, Daniel J. Cole, Joshua A. Rackers, Joe G. Greener, Peter Eastman, Stefano Martiniani, Mark E. Tuckerman
{"title":"分子力学与机器学习力场之间的设计空间","authors":"Yuanqing Wang, Kenichiro Takaba, Michael S. Chen, Marcus Wieder, Yuzhi Xu, John Z. H. Zhang, Kuang Yu, Xinyan Wang, Linfeng Zhang, Daniel J. Cole, Joshua A. Rackers, Joe G. Greener, Peter Eastman, Stefano Martiniani, Mark E. Tuckerman","doi":"arxiv-2409.01931","DOIUrl":null,"url":null,"abstract":"A force field as accurate as quantum mechanics (QM) and as fast as molecular\nmechanics (MM), with which one can simulate a biomolecular system efficiently\nenough and meaningfully enough to get quantitative insights, is among the most\nardent dreams of biophysicists -- a dream, nevertheless, not to be fulfilled\nany time soon. Machine learning force fields (MLFFs) represent a meaningful\nendeavor towards this direction, where differentiable neural functions are\nparametrized to fit ab initio energies, and furthermore forces through\nautomatic differentiation. We argue that, as of now, the utility of the MLFF\nmodels is no longer bottlenecked by accuracy but primarily by their speed (as\nwell as stability and generalizability), as many recent variants, on limited\nchemical spaces, have long surpassed the chemical accuracy of $1$ kcal/mol --\nthe empirical threshold beyond which realistic chemical predictions are\npossible -- though still magnitudes slower than MM. Hoping to kindle\nexplorations and designs of faster, albeit perhaps slightly less accurate\nMLFFs, in this review, we focus our attention on the design space (the\nspeed-accuracy tradeoff) between MM and ML force fields. After a brief review\nof the building blocks of force fields of either kind, we discuss the desired\nproperties and challenges now faced by the force field development community,\nsurvey the efforts to make MM force fields more accurate and ML force fields\nfaster, envision what the next generation of MLFF might look like.","PeriodicalId":501369,"journal":{"name":"arXiv - PHYS - Computational Physics","volume":"2 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the design space between molecular mechanics and machine learning force fields\",\"authors\":\"Yuanqing Wang, Kenichiro Takaba, Michael S. Chen, Marcus Wieder, Yuzhi Xu, John Z. H. Zhang, Kuang Yu, Xinyan Wang, Linfeng Zhang, Daniel J. Cole, Joshua A. Rackers, Joe G. Greener, Peter Eastman, Stefano Martiniani, Mark E. Tuckerman\",\"doi\":\"arxiv-2409.01931\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A force field as accurate as quantum mechanics (QM) and as fast as molecular\\nmechanics (MM), with which one can simulate a biomolecular system efficiently\\nenough and meaningfully enough to get quantitative insights, is among the most\\nardent dreams of biophysicists -- a dream, nevertheless, not to be fulfilled\\nany time soon. Machine learning force fields (MLFFs) represent a meaningful\\nendeavor towards this direction, where differentiable neural functions are\\nparametrized to fit ab initio energies, and furthermore forces through\\nautomatic differentiation. 
We argue that, as of now, the utility of the MLFF\\nmodels is no longer bottlenecked by accuracy but primarily by their speed (as\\nwell as stability and generalizability), as many recent variants, on limited\\nchemical spaces, have long surpassed the chemical accuracy of $1$ kcal/mol --\\nthe empirical threshold beyond which realistic chemical predictions are\\npossible -- though still magnitudes slower than MM. Hoping to kindle\\nexplorations and designs of faster, albeit perhaps slightly less accurate\\nMLFFs, in this review, we focus our attention on the design space (the\\nspeed-accuracy tradeoff) between MM and ML force fields. After a brief review\\nof the building blocks of force fields of either kind, we discuss the desired\\nproperties and challenges now faced by the force field development community,\\nsurvey the efforts to make MM force fields more accurate and ML force fields\\nfaster, envision what the next generation of MLFF might look like.\",\"PeriodicalId\":501369,\"journal\":{\"name\":\"arXiv - PHYS - Computational Physics\",\"volume\":\"2 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - PHYS - Computational Physics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.01931\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Computational Physics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.01931","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
A force field as accurate as quantum mechanics (QM) and as fast as molecular mechanics (MM), with which one can simulate a biomolecular system efficiently enough and meaningfully enough to get quantitative insights, is among the most ardent dreams of biophysicists -- a dream, nevertheless, not to be fulfilled any time soon. Machine learning force fields (MLFFs) represent a meaningful endeavor in this direction, where differentiable neural functions are parametrized to fit ab initio energies and, through automatic differentiation, forces as well. We argue that, as of now, the utility of MLFF models is no longer bottlenecked by accuracy but primarily by their speed (as well as stability and generalizability): many recent variants, on limited chemical spaces, have long surpassed the chemical accuracy of $1$ kcal/mol -- the empirical threshold beyond which realistic chemical predictions are possible -- yet they remain orders of magnitude slower than MM. Hoping to kindle explorations and designs of faster, albeit perhaps slightly less accurate, MLFFs, in this review we focus our attention on the design space (the speed-accuracy tradeoff) between MM and ML force fields. After a brief review of the building blocks of force fields of either kind, we discuss the desired properties of force fields and the challenges now faced by the force field development community, survey the efforts to make MM force fields more accurate and ML force fields faster, and envision what the next generation of MLFFs might look like.
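To make the phrase "forces through automatic differentiation" concrete, below is a minimal JAX sketch. It is not taken from the paper: the toy pair-distance energy function and all names (`init_params`, `energy`, `forces`) are hypothetical stand-ins for a real MLFF architecture. The point it illustrates is the general pattern: a differentiable, parametrized energy E(r) is fit to ab initio data, and forces come for free as F = -dE/dr.

```python
# Minimal sketch of the MLFF pattern: learnable energy + forces via autodiff.
import jax
import jax.numpy as jnp


def init_params(key, n_features=16):
    """Randomly initialize a tiny two-layer MLP mapping a pair distance to a pair energy."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": 0.1 * jax.random.normal(k1, (1, n_features)),
        "b1": jnp.zeros(n_features),
        "w2": 0.1 * jax.random.normal(k2, (n_features, 1)),
    }


def energy(params, positions):
    """Toy learned potential: sum an MLP over all unique pairwise distances.

    positions has shape (n_atoms, 3); returns a scalar energy.
    """
    n = positions.shape[0]
    diff = positions[:, None, :] - positions[None, :, :]   # (n, n, 3) displacement vectors
    dist = jnp.sqrt(jnp.sum(diff**2, axis=-1) + 1e-9)      # (n, n), softened for stable gradients
    i, j = jnp.triu_indices(n, k=1)                         # unique pairs only
    r = dist[i, j][:, None]                                 # (n_pairs, 1)
    h = jnp.tanh(r @ params["w1"] + params["b1"])           # (n_pairs, n_features)
    return jnp.sum(h @ params["w2"])                        # scalar energy


# The key point: forces are the negative gradient of the learned energy,
# obtained by automatic differentiation rather than hand-derived expressions.
forces = jax.jit(jax.grad(lambda p, pos: -energy(p, pos), argnums=1))

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    params = init_params(key)
    pos = jax.random.normal(key, (5, 3))    # five atoms at random positions
    print("energy:", energy(params, pos))
    print("forces:", forces(params, pos))   # shape (5, 3)
```

The same pattern carries over to realistic architectures (message-passing or equivariant networks): as long as the energy is a differentiable function of the coordinates, automatic differentiation yields forces that are exactly consistent with the learned potential energy surface.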