chemtrain: Learning deep potential models via automatic differentiation and statistical physics

IF 7.2 · CAS Tier 2 (Physics & Astronomy) · Q1 (Computer Science, Interdisciplinary Applications) · Computer Physics Communications · Pub Date: 2025-01-28 · DOI: 10.1016/j.cpc.2025.109512
Paul Fuchs , Stephan Thaler , Sebastien Röcken , Julija Zavadlav
{"title":"chemtrain: Learning deep potential models via automatic differentiation and statistical physics","authors":"Paul Fuchs ,&nbsp;Stephan Thaler ,&nbsp;Sebastien Röcken ,&nbsp;Julija Zavadlav","doi":"10.1016/j.cpc.2025.109512","DOIUrl":null,"url":null,"abstract":"<div><div>Neural Networks (NNs) are effective models for refining the accuracy of molecular dynamics, opening up new fields of application. Typically trained bottom-up, atomistic NN potential models can reach first-principle accuracy, while coarse-grained implicit solvent NN potentials surpass classical continuum solvent models. However, overcoming the limitations of costly generation of accurate reference data and data inefficiency of common bottom-up training demands efficient incorporation of data from many sources. This paper introduces the framework <span>chemtrain</span> to learn sophisticated NN potential models through customizable training routines and advanced training algorithms. These routines can combine multiple top-down and bottom-up algorithms, e.g., to incorporate both experimental and simulation data or pre-train potentials with less costly algorithms. <span>chemtrain</span> provides an object-oriented high-level interface to simplify the creation of custom routines. On the lower level, <span>chemtrain</span> relies on JAX to compute gradients and scale the computations to use available resources. We demonstrate the simplicity and importance of combining multiple algorithms in the examples of parametrizing an all-atomistic model of titanium and a coarse-grained implicit solvent model of alanine dipeptide.</div></div><div><h3>Program summary</h3><div><em>Program Title:</em> <span>chemtrain</span></div><div><em>CPC Library link to program files:</em> <span><span>https://doi.org/10.17632/m6fxmcmfzz.1</span><svg><path></path></svg></span></div><div><em>Developer's repository link:</em> <span><span>https://github.com/tummfm/chemtrain</span><svg><path></path></svg></span></div><div><em>Licensing provisions:</em> Apache-2.0</div><div><em>Programming language:</em> python</div><div><em>Nature of problem:</em> Neural Network (NN) potentials provide the means to accurately model high-order many-body interactions between particles on a molecular level. Through linear computational scaling with the system size, their high expressivity opens up new possibilities for efficiently modeling systems at a higher precision without resorting to expensive, finer-scale computational methods. However, as common for data-driven approaches, the success of NN potentials depends crucially on the availability of accurate training data. Bottom-up trained state-of-the-art models can match ab initio computations closer than their actual accuracy but can still predict deviations from experimental measurements. Including more accurate reference data can, in principle, resolve this issue, but generating sufficient data is infeasible even with less precise methods for increasingly larger systems. Supplementing the training procedure with more data-efficient methods can limit required training data [1]. In addition, the models can be fully or partially trained on macroscopic reference data [2,3]. Therefore, a framework supporting a combination of multiple training algorithms could further expedite the success of NN potential models in various disciplines.</div><div><em>Solution method:</em> We propose a framework that enables the development of NN potential models through customizable training routines. 
The framework provides the top-down algorithm Differentiable Trajectory Reweighting [2] and the bottom-up learning algorithms Force Matching [1] and Relative Entropy Minimization [1]. A high-level object-oriented API simplifies combining multiple algorithms and setting up sophisticated training routines such as active learning. At a modularly structured lower level, the framework follows a functional programming paradigm relying on the machine learning framework JAX [4] to simplify the creation of algorithms from standard building blocks, e.g., by deriving microscopic quantities such as forces and virials from any JAX-compatible NN potential model and scaling computations to use available resources.</div></div><div><h3>References</h3><div><ul><li><span>[1]</span><span><div>S. Thaler, M. Stupp, J. Zavadlav, Deep coarse-grained potentials via relative entropy minimization, J. Chem. Phys. 157 (24) (2022) 244103, <span><span>https://doi.org/10.1063/5.0124538</span><svg><path></path></svg></span>.</div></span></li><li><span>[2]</span><span><div>S. Thaler, J. Zavadlav, Learning neural network potentials from experimental data via Differentiable Trajectory Reweighting, Nat. Commun. 12 (1) (2021) 6884, <span><span>https://doi.org/10.1038/s41467-021-27241-4</span><svg><path></path></svg></span>.</div></span></li><li><span>[3]</span><span><div>S. Röcken, J. Zavadlav, Accurate machine learning force fields via experimental and simulation data fusion, npj Comput. Mater. 10 (1) (2024) 1–10, <span><span>https://doi.org/10.1038/s41524-024-01251-4</span><svg><path></path></svg></span>.</div></span></li><li><span>[4]</span><span><div>R. Frostig, M. J. Johnson, C. Leary, Compiling machine learning programs via high-level tracing.</div></span></li></ul></div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"310 ","pages":"Article 109512"},"PeriodicalIF":7.2000,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Physics Communications","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0010465525000153","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Neural Networks (NNs) are effective models for refining the accuracy of molecular dynamics, opening up new fields of application. Typically trained bottom-up, atomistic NN potential models can reach first-principles accuracy, while coarse-grained implicit solvent NN potentials surpass classical continuum solvent models. However, overcoming the limitations of the costly generation of accurate reference data and the data inefficiency of common bottom-up training demands the efficient incorporation of data from many sources. This paper introduces the framework chemtrain for learning sophisticated NN potential models through customizable training routines and advanced training algorithms. These routines can combine multiple top-down and bottom-up algorithms, e.g., to incorporate both experimental and simulation data or to pre-train potentials with less costly algorithms. chemtrain provides an object-oriented high-level interface to simplify the creation of custom routines. At the lower level, chemtrain relies on JAX to compute gradients and to scale the computations to the available resources. We demonstrate the simplicity and importance of combining multiple algorithms with the examples of parametrizing an all-atomistic model of titanium and a coarse-grained implicit solvent model of alanine dipeptide.
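
To make the bottom-up training principle concrete, the following is a minimal sketch of force matching with JAX: forces are obtained from a potential model by automatic differentiation and fitted to reference forces by gradient descent. The toy pair potential, all function and parameter names, and the use of optax are illustrative assumptions and do not reproduce chemtrain's actual API.

```python
# Minimal force-matching sketch in JAX; a toy pair potential stands in
# for an NN potential model. Names here are illustrative, not chemtrain's API.
import jax
import jax.numpy as jnp
import optax  # assumed available; chemtrain builds on the JAX ecosystem

def potential(params, positions):
    """Toy pairwise potential U = 0.5 * sum_ij eps * (sigma / r_ij)**12."""
    eps, sigma = params["eps"], params["sigma"]
    n = positions.shape[0]
    diff = positions[:, None, :] - positions[None, :, :]
    # Add the identity to the squared distances so the diagonal stays finite.
    r = jnp.sqrt(jnp.sum(diff ** 2, axis=-1) + jnp.eye(n))
    pair = eps * (sigma / r) ** 12
    pair = pair * (1.0 - jnp.eye(n))  # zero out self-interactions
    return 0.5 * jnp.sum(pair)

def forces_fn(params, positions):
    """Forces follow from the potential by automatic differentiation: F = -dU/dx."""
    return -jax.grad(potential, argnums=1)(params, positions)

def fm_loss(params, positions, ref_forces):
    """Mean-squared error between predicted and reference forces."""
    return jnp.mean((forces_fn(params, positions) - ref_forces) ** 2)

params = {"eps": jnp.array(1.0), "sigma": jnp.array(0.3)}
optimizer = optax.adam(1e-3)
opt_state = optimizer.init(params)

@jax.jit
def train_step(params, opt_state, positions, ref_forces):
    loss, grads = jax.value_and_grad(fm_loss)(params, positions, ref_forces)
    updates, opt_state = optimizer.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state, loss
```

Replacing `potential` with any JAX-compatible NN potential leaves the rest of the pipeline unchanged, which is the property chemtrain exploits to assemble training routines from standard building blocks.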

Program summary

Program Title: chemtrain
CPC Library link to program files: https://doi.org/10.17632/m6fxmcmfzz.1
Developer's repository link: https://github.com/tummfm/chemtrain
Licensing provisions: Apache-2.0
Programming language: Python
Nature of problem: Neural Network (NN) potentials provide the means to accurately model high-order many-body interactions between particles on the molecular level. Combining high expressivity with computational cost that scales linearly in the system size, they open up new possibilities for modeling systems at higher precision without resorting to expensive, finer-scale computational methods. However, as is common for data-driven approaches, the success of NN potentials depends crucially on the availability of accurate training data. State-of-the-art bottom-up trained models can reproduce ab initio computations to within the intrinsic accuracy of the reference method, yet their predictions can still deviate from experimental measurements. Including more accurate reference data can, in principle, resolve this issue, but generating sufficient data for increasingly large systems is infeasible even with less precise methods. Supplementing the training procedure with more data-efficient methods can limit the required training data [1]. In addition, the models can be fully or partially trained on macroscopic reference data [2,3]. Therefore, a framework supporting a combination of multiple training algorithms could further expedite the success of NN potential models in various disciplines.
Solution method: We propose a framework that enables the development of NN potential models through customizable training routines. The framework provides the top-down algorithm Differentiable Trajectory Reweighting [2] and the bottom-up learning algorithms Force Matching [1] and Relative Entropy Minimization [1]. A high-level object-oriented API simplifies combining multiple algorithms and setting up sophisticated training routines such as active learning. At the modularly structured lower level, the framework follows a functional programming paradigm and relies on the machine learning framework JAX [4] to simplify the creation of algorithms from standard building blocks, e.g., by deriving microscopic quantities such as forces and virials from any JAX-compatible NN potential model and by scaling the computations to the available resources.
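
The top-down component can be illustrated by the reweighting step at the heart of Differentiable Trajectory Reweighting [2]: ensemble averages are estimated from configurations sampled with a reference potential and reweighted with Boltzmann factors of the perturbed potential, so the averages remain differentiable with respect to the potential parameters. The snippet below is a schematic of this idea under assumed function names (`potential_fn`, `observable_fn`), not chemtrain's implementation, and it omits the effective-sample-size checks that practical reweighting requires.

```python
# Schematic of the reweighting step underlying Differentiable Trajectory
# Reweighting (DiffTRe) [2]; a minimal sketch, not chemtrain's implementation.
import jax
import jax.numpy as jnp

def reweighted_average(params, ref_params, configs, observable_fn,
                       potential_fn, beta):
    """Estimate <O>_theta from configurations sampled with U_ref.

    Weights w_i ∝ exp(-beta * (U_theta(x_i) - U_ref(x_i))); the average
    sum_i w_i * O(x_i) stays differentiable w.r.t. params via JAX.
    """
    u_theta = jax.vmap(lambda x: potential_fn(params, x))(configs)
    u_ref = jax.vmap(lambda x: potential_fn(ref_params, x))(configs)
    log_w = -beta * (u_theta - u_ref)
    weights = jax.nn.softmax(log_w)  # normalized Boltzmann weights
    obs = jax.vmap(observable_fn)(configs)
    return jnp.sum(weights * obs)

def top_down_loss(params, ref_params, configs, observable_fn,
                  potential_fn, beta, target):
    """Match a reweighted ensemble average to an experimental target value."""
    pred = reweighted_average(params, ref_params, configs,
                              observable_fn, potential_fn, beta)
    return (pred - target) ** 2

# Gradients of the loss w.r.t. the potential parameters come for free:
grad_fn = jax.grad(top_down_loss)
```

Because the reweighted average is a pure JAX function of `params`, `jax.grad` propagates through the weights, which is what allows matching experimental observables without differentiating through the trajectory generation itself.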

References

[1] S. Thaler, M. Stupp, J. Zavadlav, Deep coarse-grained potentials via relative entropy minimization, J. Chem. Phys. 157 (24) (2022) 244103, https://doi.org/10.1063/5.0124538.
[2] S. Thaler, J. Zavadlav, Learning neural network potentials from experimental data via Differentiable Trajectory Reweighting, Nat. Commun. 12 (1) (2021) 6884, https://doi.org/10.1038/s41467-021-27241-4.
[3] S. Röcken, J. Zavadlav, Accurate machine learning force fields via experimental and simulation data fusion, npj Comput. Mater. 10 (1) (2024) 1–10, https://doi.org/10.1038/s41524-024-01251-4.
[4] R. Frostig, M.J. Johnson, C. Leary, Compiling machine learning programs via high-level tracing, in: SysML Conference, 2018.
Source journal

Computer Physics Communications (Physics / Computer Science: Interdisciplinary Applications)
CiteScore: 12.10 · Self-citation rate: 3.20% · Annual publications: 287 · Review time: 5.3 months

Journal introduction: The focus of CPC is on contemporary computational methods and techniques and their implementation, the effectiveness of which will normally be evidenced by the author(s) within the context of a substantive problem in physics. Within this setting CPC publishes two types of paper.

Computer Programs in Physics (CPiP): These papers describe significant computer programs to be archived in the CPC Program Library, which is held in the Mendeley Data repository. The submitted software must be covered by an approved open source licence. Papers and associated computer programs that address a problem of contemporary interest in physics that cannot be solved by current software are particularly encouraged.

Computational Physics Papers (CP): These are research papers in, but not limited to, the following themes across computational physics and related disciplines: mathematical and numerical methods and algorithms; computational models, including those associated with the design, control and analysis of experiments; and algebraic computation. Each will normally include software implementation and performance details. The software implementation should, ideally, be available via GitHub, Zenodo or an institutional repository.

In addition, research papers on the impact of advanced computer architecture and special-purpose computers on computing in the physical sciences, and software topics related to, and of importance in, the physical sciences may be considered.
Latest articles in this journal

  • Galactic distribution of supernovae and OB associations
  • ToMSGKpoint: A user-friendly package for computing symmetry transformation properties of electronic eigenstates of nonmagnetic and magnetic crystalline materials
  • curvedSpaceSim: A framework for simulating particles interacting along geodesics
  • JAX-based aeroelastic simulation engine for differentiable aircraft dynamics
  • CaLES: A GPU-accelerated solver for large-eddy simulation of wall-bounded flows