FE2 Computations with Deep Neural Networks: Algorithmic Structure, Data Generation, and Implementation

H. Eivazi, Jendrik-Alexander Tröger, Stefan H. A. Wittek, S. Hartmann, A. Rausch

Mathematical & Computational Applications (Q2, Mathematics, Interdisciplinary Applications; IF 1.9), published 2023-08-16. DOI: 10.3390/mca28040091
Multiscale FE2 computations enable the consideration of the micro-mechanical material structure in macroscopic simulations. However, these computations are very time-consuming because the representative volume element (RVE), which represents the microstructure, must be evaluated numerous times. In contrast, neural networks, once trained, are very fast to evaluate. Although the DNN-FE2 approach, in which deep neural networks (DNNs) serve as a surrogate model of the RVE, is already a known procedure, this contribution explains the algorithmic FE2 structure and the particular integration of deep neural networks in detail. This comprises a suitable training strategy, in which particular knowledge of the material behavior is exploited to reduce the required amount of training data; a study of the amount of training data required for reliable FE2 simulations, with special focus on the errors relative to conventional FE2 simulations; and the implementation aspects needed to gain a considerable speed-up. Sobolev training combined with automatic differentiation increases data efficiency, prediction accuracy, and speed-up in comparison to using two separate neural networks for stress and tangent matrix prediction. To achieve a significant speed-up of the FE2 computations, an efficient implementation of the trained neural network in a finite element code is provided, drawing on state-of-the-art high-performance computing libraries and just-in-time compilation and yielding a maximum speed-up of a factor of more than 5000 compared to a reference FE2 computation. Moreover, the deep neural network surrogate model is able to overcome the load-step size limitations of the RVE computations in step-size controlled simulations.
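Sobolev training here means fitting the stress while also penalizing the error in the tangent obtained by automatic differentiation of the same network, so that a single model supplies both quantities a macro-scale Newton solver needs. The following is a minimal sketch of that idea in JAX, not the authors' implementation; the function names (init_params, stress, sobolev_loss, train_step), the layer sizes, and the weighting factor w_tangent are illustrative assumptions.

# Minimal sketch of Sobolev training for an RVE surrogate (hypothetical,
# not the paper's code): a small MLP maps strain -> stress, and the
# consistent tangent is obtained by automatic differentiation of the same
# network, so one model serves both quantities required in FE2.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(6, 64, 64, 6)):
    """Random MLP parameters; 6 components for strain/stress in Voigt notation."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (n_in, n_out)) * jnp.sqrt(2.0 / n_in)
        params.append((w, jnp.zeros(n_out)))
    return params

def stress(params, strain):
    """Surrogate stress prediction for one strain state (Voigt vector)."""
    x = strain
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

# Tangent matrix d(stress)/d(strain) via automatic differentiation
# of the very same network (a 6x6 Jacobian per strain state).
tangent = jax.jacfwd(stress, argnums=1)

def sobolev_loss(params, strains, stresses, tangents, w_tangent=1.0):
    """MSE on stresses plus a weighted MSE on the AD-derived tangents."""
    pred_sig = jax.vmap(lambda e: stress(params, e))(strains)
    pred_tan = jax.vmap(lambda e: tangent(params, e))(strains)
    loss_sig = jnp.mean((pred_sig - stresses) ** 2)
    loss_tan = jnp.mean((pred_tan - tangents) ** 2)
    return loss_sig + w_tangent * loss_tan

# One jit-compiled gradient-descent step; in the FE2 code, the trained
# stress and tangent functions would likewise be jit-compiled so that
# each Gauss-point evaluation is fast.
@jax.jit
def train_step(params, strains, stresses, tangents, lr=1e-3):
    grads = jax.grad(sobolev_loss)(params, strains, stresses, tangents)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

Because each RVE evaluation yields both a stress and a consistent tangent, training on both targets extracts more information per sample, which is the data-efficiency argument made in the abstract; jit-compiling the trained stress and tangent functions reflects the same just-in-time idea the paper exploits inside the finite element code.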
About the journal:
Mathematical and Computational Applications (MCA) is devoted to original research in engineering, the natural sciences, and the social sciences where mathematical and/or computational techniques are necessary for solving specific problems. The aim of the journal is to provide a medium through which a wide range of experience can be exchanged among researchers from diverse fields such as engineering (electrical, mechanical, civil, industrial, aeronautical, nuclear, etc.), the natural sciences (physics, mathematics, chemistry, biology, etc.), and the social sciences (administrative sciences, economics, political sciences, etc.). Papers may be theoretical, where mathematics is used in a nontrivial way, computational, or a combination of both. Each submitted paper is reviewed, and only papers of the highest quality that contain original ideas and research will be published. Papers containing only experimental techniques or abstract mathematics without any sign of application are discouraged.