When More Data Hurts: Optimizing Data Coverage While Mitigating Diversity-Induced Underfitting in an Ultra-Fast Machine-Learned Potential

Jason B. Gibson, Tesia D. Janicki, Ajinkya C. Hire, Chris Bishop, J. Matthew D. Lane, Richard G. Hennig

arXiv - PHYS - Computational Physics (arXiv:2409.07610), published 2024-09-11
Abstract
Machine-learned interatomic potentials (MLIPs) are becoming an essential tool in materials modeling. However, optimizing the generation of the training data used to parameterize MLIPs remains a significant challenge, because MLIPs can fail when they encounter local environments too different from those present in the training data. The difficulty of determining \textit{a priori} the environments that will be encountered during molecular dynamics (MD) simulation necessitates diverse, high-quality training data. This study investigates how training-data diversity affects the performance of MLIPs, using the Ultra-Fast Force Field (UF$^3$) to model amorphous silicon nitride. We employ expert-generated and autonomously generated data to create the training set and fit four force-field variants to subsets of the data. Our findings reveal a critical balance in training-data diversity: insufficient diversity hinders generalization, while excessive diversity can exceed the MLIP's learning capacity, reducing simulation accuracy. Specifically, we found that the UF$^3$ variant trained on a subset of the training data from which nitrogen-rich structures were removed offered vastly better prediction and simulation accuracy than any other variant. By comparing these UF$^3$ variants, we highlight the nuanced requirements for creating accurate MLIPs, emphasizing the importance of application-specific training data for achieving optimal performance in modeling complex material behaviors.
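
To make the data-subsetting step concrete, the sketch below shows one plausible way to remove nitrogen-rich structures from a training set before refitting a force-field variant. This is an illustration, not the authors' code: the ASE-based workflow, the file names, and the 0.6 nitrogen-fraction cutoff are all assumptions chosen for the example.

```python
from ase.io import read, write

# Hypothetical input: a file of DFT-labeled Si-N training structures.
# The file name and the nitrogen-fraction cutoff below are illustrative
# choices, not values taken from the paper.
N_FRACTION_CUTOFF = 0.6

structures = read("training_set.extxyz", index=":")

kept, removed = [], []
for atoms in structures:
    symbols = atoms.get_chemical_symbols()
    n_fraction = symbols.count("N") / len(symbols)
    # Drop nitrogen-rich structures, mirroring the subset the abstract
    # reports as yielding the best UF^3 variant.
    (removed if n_fraction > N_FRACTION_CUTOFF else kept).append(atoms)

write("training_subset_no_N_rich.extxyz", kept)
print(f"kept {len(kept)} structures, removed {len(removed)} nitrogen-rich ones")
```

In a workflow like the one the abstract describes, each force-field variant would then be refit on its subset and the variants compared on energy and force prediction errors as well as MD simulation accuracy; the composition boundary defining "nitrogen-rich" is an application-specific choice.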