Paul Fuchs , Stephan Thaler , Sebastien Röcken , Julija Zavadlav
{"title":"chemtrain: Learning deep potential models via automatic differentiation and statistical physics","authors":"Paul Fuchs , Stephan Thaler , Sebastien Röcken , Julija Zavadlav","doi":"10.1016/j.cpc.2025.109512","DOIUrl":null,"url":null,"abstract":"<div><div>Neural Networks (NNs) are effective models for refining the accuracy of molecular dynamics, opening up new fields of application. Typically trained bottom-up, atomistic NN potential models can reach first-principle accuracy, while coarse-grained implicit solvent NN potentials surpass classical continuum solvent models. However, overcoming the limitations of costly generation of accurate reference data and data inefficiency of common bottom-up training demands efficient incorporation of data from many sources. This paper introduces the framework <span>chemtrain</span> to learn sophisticated NN potential models through customizable training routines and advanced training algorithms. These routines can combine multiple top-down and bottom-up algorithms, e.g., to incorporate both experimental and simulation data or pre-train potentials with less costly algorithms. <span>chemtrain</span> provides an object-oriented high-level interface to simplify the creation of custom routines. On the lower level, <span>chemtrain</span> relies on JAX to compute gradients and scale the computations to use available resources. We demonstrate the simplicity and importance of combining multiple algorithms in the examples of parametrizing an all-atomistic model of titanium and a coarse-grained implicit solvent model of alanine dipeptide.</div></div><div><h3>Program summary</h3><div><em>Program Title:</em> <span>chemtrain</span></div><div><em>CPC Library link to program files:</em> <span><span>https://doi.org/10.17632/m6fxmcmfzz.1</span><svg><path></path></svg></span></div><div><em>Developer's repository link:</em> <span><span>https://github.com/tummfm/chemtrain</span><svg><path></path></svg></span></div><div><em>Licensing provisions:</em> Apache-2.0</div><div><em>Programming language:</em> python</div><div><em>Nature of problem:</em> Neural Network (NN) potentials provide the means to accurately model high-order many-body interactions between particles on a molecular level. Through linear computational scaling with the system size, their high expressivity opens up new possibilities for efficiently modeling systems at a higher precision without resorting to expensive, finer-scale computational methods. However, as common for data-driven approaches, the success of NN potentials depends crucially on the availability of accurate training data. Bottom-up trained state-of-the-art models can match ab initio computations closer than their actual accuracy but can still predict deviations from experimental measurements. Including more accurate reference data can, in principle, resolve this issue, but generating sufficient data is infeasible even with less precise methods for increasingly larger systems. Supplementing the training procedure with more data-efficient methods can limit required training data [1]. In addition, the models can be fully or partially trained on macroscopic reference data [2,3]. Therefore, a framework supporting a combination of multiple training algorithms could further expedite the success of NN potential models in various disciplines.</div><div><em>Solution method:</em> We propose a framework that enables the development of NN potential models through customizable training routines. 
The framework provides the top-down algorithm Differentiable Trajectory Reweighting [2] and the bottom-up learning algorithms Force Matching [1] and Relative Entropy Minimization [1]. A high-level object-oriented API simplifies combining multiple algorithms and setting up sophisticated training routines such as active learning. At a modularly structured lower level, the framework follows a functional programming paradigm relying on the machine learning framework JAX [4] to simplify the creation of algorithms from standard building blocks, e.g., by deriving microscopic quantities such as forces and virials from any JAX-compatible NN potential model and scaling computations to use available resources.</div></div><div><h3>References</h3><div><ul><li><span>[1]</span><span><div>S. Thaler, M. Stupp, J. Zavadlav, Deep coarse-grained potentials via relative entropy minimization, J. Chem. Phys. 157 (24) (2022) 244103, <span><span>https://doi.org/10.1063/5.0124538</span><svg><path></path></svg></span>.</div></span></li><li><span>[2]</span><span><div>S. Thaler, J. Zavadlav, Learning neural network potentials from experimental data via Differentiable Trajectory Reweighting, Nat. Commun. 12 (1) (2021) 6884, <span><span>https://doi.org/10.1038/s41467-021-27241-4</span><svg><path></path></svg></span>.</div></span></li><li><span>[3]</span><span><div>S. Röcken, J. Zavadlav, Accurate machine learning force fields via experimental and simulation data fusion, npj Comput. Mater. 10 (1) (2024) 1–10, <span><span>https://doi.org/10.1038/s41524-024-01251-4</span><svg><path></path></svg></span>.</div></span></li><li><span>[4]</span><span><div>R. Frostig, M. J. Johnson, C. Leary, Compiling machine learning programs via high-level tracing.</div></span></li></ul></div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"310 ","pages":"Article 109512"},"PeriodicalIF":7.2000,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Physics Communications","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0010465525000153","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
Neural networks (NNs) are effective models for refining the accuracy of molecular dynamics, opening up new fields of application. Typically trained bottom-up, atomistic NN potential models can reach first-principles accuracy, while coarse-grained implicit-solvent NN potentials surpass classical continuum solvent models. However, overcoming the costly generation of accurate reference data and the data inefficiency of common bottom-up training demands the efficient incorporation of data from many sources. This paper introduces the framework chemtrain for learning sophisticated NN potential models through customizable training routines and advanced training algorithms. These routines can combine multiple top-down and bottom-up algorithms, e.g., to incorporate both experimental and simulation data or to pre-train potentials with less costly algorithms. chemtrain provides an object-oriented high-level interface that simplifies the creation of custom routines. At the lower level, chemtrain relies on JAX to compute gradients and scale the computations to the available resources. We demonstrate the simplicity and importance of combining multiple algorithms by parametrizing an all-atom model of titanium and a coarse-grained implicit-solvent model of alanine dipeptide.
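To illustrate the kind of bottom-up training the abstract refers to, the following is a minimal, self-contained JAX sketch of force matching via automatic differentiation: forces are obtained as the negative gradient of a potential energy function and fitted to reference forces. The toy pairwise potential and all function names here are illustrative assumptions, not chemtrain's actual API.

```python
# Minimal force-matching sketch in JAX (illustrative; not chemtrain's API).
import jax
import jax.numpy as jnp

def potential(params, positions):
    # Toy stand-in for an NN potential: a repulsive pairwise term with one
    # learnable parameter. Any differentiable JAX function of the positions
    # could take its place.
    diffs = positions[:, None, :] - positions[None, :, :]
    r2 = jnp.sum(diffs**2, axis=-1) + jnp.eye(positions.shape[0])
    return params["eps"] * jnp.sum(jnp.triu(1.0 / r2, k=1))

# Forces follow from automatic differentiation: F = -dU/dR.
forces_fn = jax.grad(lambda params, R: -potential(params, R), argnums=1)

def force_matching_loss(params, batch_positions, batch_ref_forces):
    # Mean squared error between predicted and reference forces,
    # vectorized over a batch of configurations with vmap.
    pred = jax.vmap(lambda R: forces_fn(params, R))(batch_positions)
    return jnp.mean((pred - batch_ref_forces) ** 2)

params = {"eps": jnp.array(1.0)}
key = jax.random.PRNGKey(0)
R = jax.random.normal(key, (8, 10, 3))      # 8 configurations, 10 particles
F_ref = jax.random.normal(key, (8, 10, 3))  # placeholder reference forces
grads = jax.grad(force_matching_loss)(params, R, F_ref)
```

Because the loss, the potential, and the force evaluation are all plain JAX functions, gradients with respect to the model parameters compose automatically, which is the mechanism that lets such a framework mix several training objectives in one routine.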
Program summary
Program Title: chemtrain
CPC Library link to program files: https://doi.org/10.17632/m6fxmcmfzz.1
Developer's repository link: https://github.com/tummfm/chemtrain
Licensing provisions: Apache-2.0
Programming language: Python
Nature of problem: Neural network (NN) potentials provide the means to accurately model high-order many-body interactions between particles at the molecular level. With computational cost scaling linearly in the system size, their high expressivity opens up new possibilities for efficiently modeling systems at higher precision without resorting to expensive, finer-scale computational methods. However, as is common for data-driven approaches, the success of NN potentials depends crucially on the availability of accurate training data. State-of-the-art bottom-up trained models can reproduce ab initio computations to within the accuracy of the reference method itself, yet can still deviate from experimental measurements. Including more accurate reference data can, in principle, resolve this issue, but for increasingly large systems, generating sufficient data is infeasible even with less precise methods. Supplementing the training procedure with more data-efficient methods can limit the required training data [1]. In addition, models can be fully or partially trained on macroscopic reference data [2,3]. Therefore, a framework supporting a combination of multiple training algorithms could further expedite the success of NN potential models across disciplines.
Solution method: We propose a framework that enables the development of NN potential models through customizable training routines. The framework provides the top-down algorithm Differentiable Trajectory Reweighting [2] and the bottom-up learning algorithms Force Matching [1] and Relative Entropy Minimization [1]. A high-level object-oriented API simplifies combining multiple algorithms and setting up sophisticated training routines such as active learning. At the modularly structured lower level, the framework follows a functional programming paradigm, relying on the machine learning framework JAX [4] to simplify the creation of algorithms from standard building blocks, e.g., by deriving microscopic quantities such as forces and virials from any JAX-compatible NN potential model and by scaling computations to the available resources. A sketch of the top-down reweighting idea follows below.
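The core idea behind the top-down algorithm, Differentiable Trajectory Reweighting [2], is to reweight configurations sampled under a reference potential to the ensemble of the current potential and differentiate an experimental-observable mismatch through the weights. The sketch below captures only this idea under simplifying assumptions (a precomputed trajectory, a scalar observable per state); the function names are hypothetical and chemtrain's actual interface may differ.

```python
# Hedged sketch of Boltzmann reweighting for top-down training [2]
# (illustrative only; not chemtrain's actual interface).
import jax
import jax.numpy as jnp

def reweighted_observable(params, potential_fn, ref_energies, states,
                          observables, beta):
    # Importance weights from the reference ensemble to the ensemble of the
    # current potential: w_i ~ exp(-beta * (U_theta(x_i) - U_ref(x_i))).
    new_energies = jax.vmap(lambda x: potential_fn(params, x))(states)
    log_w = -beta * (new_energies - ref_energies)
    weights = jax.nn.softmax(log_w)  # normalized weights, differentiable
    return jnp.sum(weights * observables)

def top_down_loss(params, potential_fn, ref_energies, states,
                  observables, target, beta):
    # Match the reweighted ensemble average to an experimental target value.
    pred = reweighted_observable(params, potential_fn, ref_energies,
                                 states, observables, beta)
    return (pred - target) ** 2

# Gradients with respect to the potential parameters flow through the
# reweighting, so no new trajectory is needed per optimization step.
grad_fn = jax.grad(top_down_loss)
```

Because both this top-down loss and the bottom-up losses are differentiable JAX functions of the same parameters, their gradients can be combined or applied in sequence, which is the combination of algorithms the framework is designed to support.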
References
[1]
S. Thaler, M. Stupp, J. Zavadlav, Deep coarse-grained potentials via relative entropy minimization, J. Chem. Phys. 157 (24) (2022) 244103, https://doi.org/10.1063/5.0124538.
[2]
S. Thaler, J. Zavadlav, Learning neural network potentials from experimental data via Differentiable Trajectory Reweighting, Nat. Commun. 12 (1) (2021) 6884, https://doi.org/10.1038/s41467-021-27241-4.
[3]
S. Röcken, J. Zavadlav, Accurate machine learning force fields via experimental and simulation data fusion, npj Comput. Mater. 10 (1) (2024) 1–10, https://doi.org/10.1038/s41524-024-01251-4.
[4]
R. Frostig, M.J. Johnson, C. Leary, Compiling machine learning programs via high-level tracing, in: SysML Conference, 2018.
Journal overview
The focus of CPC is on contemporary computational methods and techniques and their implementation, the effectiveness of which will normally be evidenced by the author(s) within the context of a substantive problem in physics. Within this setting CPC publishes two types of paper.
Computer Programs in Physics (CPiP)
These papers describe significant computer programs to be archived in the CPC Program Library which is held in the Mendeley Data repository. The submitted software must be covered by an approved open source licence. Papers and associated computer programs that address a problem of contemporary interest in physics that cannot be solved by current software are particularly encouraged.
Computational Physics Papers (CP)
These are research papers in, but are not limited to, the following themes across computational physics and related disciplines.
mathematical and numerical methods and algorithms;
computational models including those associated with the design, control and analysis of experiments; and
algebraic computation.
Each will normally include software implementation and performance details. The software implementation should, ideally, be available via GitHub, Zenodo or an institutional repository. In addition, research papers on the impact of advanced computer architecture and special purpose computers on computing in the physical sciences, and software topics related to, and of importance in, the physical sciences, may be considered.