Graph atomic cluster expansion for foundational machine learning interatomic potentials
Yury Lysogorskiy, Anton Bochkarev, Ralf Drautz
npj Computational Materials | Pub Date: 2026-02-08 | DOI: 10.1038/s41524-026-01979-1
Foundational machine learning interatomic potentials that can accurately and efficiently model a vast range of materials are critical for accelerating atomistic discovery. We introduce universal potentials based on the graph atomic cluster expansion (GRACE) framework, trained on several of the largest available materials datasets. Through comprehensive benchmarks, we demonstrate that the GRACE models establish a new Pareto front for accuracy versus efficiency among foundational interatomic potentials. We further showcase their exceptional versatility by adapting them to specialized tasks and simpler architectures via fine-tuning and knowledge distillation, achieving high accuracy while preventing catastrophic forgetting. This work establishes GRACE as a robust and adaptable foundation for the next generation of atomistic modeling, enabling high-fidelity simulations across the periodic table.
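The knowledge-distillation step mentioned in this abstract can be illustrated with a generic sketch (this is not the GRACE code): a small linear "student" model is fitted by least squares to energies produced by a more expressive "teacher". The `teacher_energy` and `student_features` functions below are hypothetical stand-ins.

```python
import numpy as np

# Illustrative knowledge-distillation sketch for interatomic potentials:
# a simple student, linear in radial-basis features of the pair distance,
# is fitted to energies from a more expressive teacher. All functions are
# toy stand-ins, not part of the GRACE framework.

rng = np.random.default_rng(0)

def teacher_energy(r):
    # Hypothetical teacher: a Morse-like pair energy.
    return (1.0 - np.exp(-1.5 * (r - 1.0))) ** 2 - 1.0

def student_features(r):
    # Student basis: Gaussian radial basis functions of the pair distance.
    centers = np.linspace(0.8, 2.5, 8)
    return np.exp(-4.0 * (r[:, None] - centers[None, :]) ** 2)

# Sample pair distances, query the teacher, fit the student by least squares.
r = rng.uniform(0.8, 2.5, size=500)
X, y = student_features(r), teacher_energy(r)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The distilled student now approximates the teacher on the sampled domain.
r_test = np.linspace(0.9, 2.4, 50)
err = np.max(np.abs(student_features(r_test) @ coef - teacher_energy(r_test)))
print(f"max |student - teacher| on test grid: {err:.3e}")
```

The same pattern scales up in practice: the teacher's predictions, not scarce reference data, supply the training labels for the cheaper architecture.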
Layer-dependent and gate-tunable Chern numbers in 2D kagome ferromagnet Yb2(C6H4)3 with a large band gap
Jiaxuan Guo, Simin Nie, Fritz B. Prinz
Pub Date: 2026-02-06 | DOI: 10.1038/s41524-026-01991-5
Optical properties of a diamond NV color center from capped embedded multiconfigurational correlated wavefunction theory
John Mark P. Martirez
Pub Date: 2026-02-06 | DOI: 10.1038/s41524-026-01987-1
Efficient and accurate spatial mixing of machine learned interatomic potentials for materials science
Fraser Birks, Matthew Nutter, Thomas D. Swinburne, James R. Kermode
Pub Date: 2026-02-06 | DOI: 10.1038/s41524-026-01982-6
Machine-learned interatomic potentials (MLIPs) can offer near first-principles accuracy but are computationally expensive, limiting their application to large-scale molecular dynamics simulations. Inspired by quantum mechanics/molecular mechanics methods, we present ML-MIX, a CPU- and GPU-compatible package that accelerates simulations by spatially mixing interatomic potentials of different complexities, allowing deployment of modern MLIPs even under restricted computational budgets. We demonstrate our method for the ACE, UF3, SNAP and MACE potential architectures and show how linear 'cheap' potentials can be distilled from a given 'expensive' potential, allowing close matching in relevant regions of configuration space. Tests on point defects in Si, Fe and W-He demonstrate speedups of up to 11× for ~8000-atom systems without sacrificing accuracy. The scientific potential of ML-MIX is demonstrated via two case studies in W: measuring the mobility of b = 1/2⟨111⟩ screw dislocations with ACE/ACE mixing, and the implantation of He with MACE/SNAP mixing. The latter yields He reflection coefficients that, for the first time, match experimental observations up to a He incident energy of 80 eV, demonstrating the benefits of deploying state-of-the-art models on large, realistic systems.
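The spatial-mixing idea can be sketched generically (this is not the ML-MIX API): each bond's energy is a weighted combination of a cheap and an expensive model, with the weight decaying smoothly from a core region (e.g. around a defect) to the far field. Both pair potentials and the blending function below are illustrative stand-ins.

```python
import numpy as np

# Sketch of spatially mixed potentials in the spirit of ML-MIX (assumed
# scheme, not the package's interface): atoms near a region of interest use
# an "expensive" potential, the rest a "cheap" one, with a linear blending
# ramp in a buffer zone between the two.

def cheap_pair(r):      # e.g. a distilled linear/harmonic model
    return 0.5 * (r - 1.0) ** 2

def expensive_pair(r):  # e.g. a large MLIP; here a Morse-like stand-in
    return (1.0 - np.exp(-1.2 * (r - 1.0))) ** 2

def mixing_weight(x, center=0.0, core=2.0, buffer=1.0):
    # 1 inside the core region, 0 far away, linear ramp across the buffer.
    d = np.abs(x - center)
    return np.clip((core + buffer - d) / buffer, 0.0, 1.0)

# 1D chain of atoms; each bond's energy is a weighted mix of both models.
x = np.linspace(-5.0, 5.0, 21)
r = np.diff(x)                              # bond lengths
w = mixing_weight(0.5 * (x[:-1] + x[1:]))   # weight at bond midpoints
E = np.sum(w * expensive_pair(r) + (1.0 - w) * cheap_pair(r))
print(f"mixed energy of chain: {E:.4f}")
```

Only the eight bonds inside the core are evaluated at full weight with the expensive model, which is where the speedup comes from in a real force-matched setup.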
A general optimization framework for mapping local transition-state networks
Qichen Xu, Anna Delin
Pub Date: 2026-02-06 | DOI: 10.1038/s41524-026-01985-3
Understanding how complex systems transition between states requires mapping the energy landscape that governs these changes. Local transition-state networks reveal the barrier architecture that explains observed behaviour and enables mechanism-based prediction across computational chemistry, biology, and physics, yet in many practical settings current approaches either require pre-specified endpoints or rely on single-ended searches that provide only a limited sample of nearby saddles. We present a general optimization framework that systematically expands local coverage by coupling a multi-objective explorer with a bilayer minimum-mode kernel. The inner layer uses Hessian-vector products to recover the lowest-curvature subspace; the outer layer optimizes on a reflected force to reach index-1 saddles; a two-sided descent then certifies connectivity. The GPU-based pipeline is portable across autodiff backends and eigensolvers and, on large atomistic-spin tests, matches explicit-Hessian accuracy while cutting peak memory and wall time by orders of magnitude. Applied to a DFT-parameterized Néel-type skyrmionic model, it recovers known routes and reveals previously unreported mechanisms, including meron-antimeron-mediated Néel-type skyrmionic duplication, annihilation, and chiral-droplet formation, enabling up to 32 pathways between biskyrmion (Q = 2) and biantiskyrmion (Q = −2). The same core transfers to Cartesian atoms, automatically mapping canonical rearrangements of a Ni(111) heptamer, underscoring the framework's generality.
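The bilayer minimum-mode idea described in this abstract can be sketched on a toy 2D potential (an assumed, generic implementation, not the authors' pipeline): Hessian-vector products via finite differences of the gradient recover the lowest-curvature direction, and a reflected-force step that reverses the gradient component along that mode drives the walker uphill to an index-1 saddle.

```python
import numpy as np

# Minimal minimum-mode saddle search (assumed scheme): the lowest-curvature
# eigenvector is found matrix-free from Hessian-vector products, then the
# walker follows a reflected force until it reaches an index-1 saddle.

def grad(p):
    # Toy 2D potential V(x, y) = (x^2 - 1)^2 + y^2, with minima at
    # (±1, 0) and an index-1 saddle at the origin.
    x, y = p
    return np.array([4.0 * x * (x * x - 1.0), 2.0 * y])

def hessian_vec(p, v, eps=1e-4):
    # Hessian-vector product via central differences of the gradient.
    return (grad(p + eps * v) - grad(p - eps * v)) / (2.0 * eps)

def lowest_mode(p, iters=50, shift=10.0):
    # Power iteration on (shift*I - H) converges to the lowest-curvature
    # eigenvector of H for a sufficiently large shift.
    v = np.array([1.0, 1.0]) / np.sqrt(2.0)
    for _ in range(iters):
        v = shift * v - hessian_vec(p, v)
        v /= np.linalg.norm(v)
    return v

p = np.array([0.5, 0.3])                         # start away from the saddle
for _ in range(500):
    g, v = grad(p), lowest_mode(p)
    p += 0.05 * (-g + 2.0 * np.dot(g, v) * v)    # reflected-force step
print("converged to saddle near:", p)
```

Note the reflected force −g + 2(g·v)v is invariant under v → −v, so the sign ambiguity of the power iteration is harmless; a production kernel would replace the fixed shift and step size with adaptive choices.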
mumax+: extensible GPU-accelerated micromagnetics and beyond
Lars Moreels, Ian Lateur, Diego De Gusem, Jeroen Mulkers, Jonathan Maes, Milorad V. Milošević, Jonathan Leliaert, Bartel Van Waeyenberge
Pub Date: 2026-02-05 | DOI: 10.1038/s41524-025-01893-y
Computational design of materials for nuclear reactors
Michael R. Tonks, David A. Andersson, Assel Aitkaliyeva
Pub Date: 2026-02-05 | DOI: 10.1038/s41524-026-01980-8
AIMATDESIGN: knowledge-augmented reinforcement learning for inverse materials design under data scarcity
Yeyong Yu, Xilei Bian, Jie Xiong, Xing Wu, Quan Qian
Pub Date: 2026-02-05 | DOI: 10.1038/s41524-025-01894-x