FE2 Computations with Deep Neural Networks: Algorithmic Structure, Data Generation, and Implementation

Mathematical & Computational Applications · IF 1.9 · Q2 (Mathematics, Interdisciplinary Applications)
Published: 2023-08-16 · DOI: 10.3390/mca28040091
H. Eivazi, Jendrik-Alexander Tröger, Stefan H. A. Wittek, S. Hartmann, A. Rausch
{"title":"基于深度神经网络的FE2计算:算法结构、数据生成和实现","authors":"H. Eivazi, Jendrik-Alexander Tröger, Stefan H. A. Wittek, S. Hartmann, A. Rausch","doi":"10.3390/mca28040091","DOIUrl":null,"url":null,"abstract":"Multiscale FE2 computations enable the consideration of the micro-mechanical material structure in macroscopical simulations. However, these computations are very time-consuming because of numerous evaluations of a representative volume element, which represents the microstructure. In contrast, neural networks as machine learning methods are very fast to evaluate once they are trained. Even the DNN-FE2 approach is currently a known procedure, where deep neural networks (DNNs) are applied as a surrogate model of the representative volume element. In this contribution, however, a clear description of the algorithmic FE2 structure and the particular integration of deep neural networks are explained in detail. This comprises a suitable training strategy, where particular knowledge of the material behavior is considered to reduce the required amount of training data, a study of the amount of training data required for reliable FE2 simulations with special focus on the errors compared to conventional FE2 simulations, and the implementation aspect to gain considerable speed-up. As it is known, the Sobolev training and automatic differentiation increase data efficiency, prediction accuracy and speed-up in comparison to using two different neural networks for stress and tangent matrix prediction. To gain a significant speed-up of the FE2 computations, an efficient implementation of the trained neural network in a finite element code is provided. This is achieved by drawing on state-of-the-art high-performance computing libraries and just-in-time compilation yielding a maximum speed-up of a factor of more than 5000 compared to a reference FE2 computation. Moreover, the deep neural network surrogate model is able to overcome load-step size limitations of the RVE computations in step-size controlled computations.","PeriodicalId":53224,"journal":{"name":"Mathematical & Computational Applications","volume":" ","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2023-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"FE2 Computations with Deep Neural Networks: Algorithmic Structure, Data Generation, and Implementation\",\"authors\":\"H. Eivazi, Jendrik-Alexander Tröger, Stefan H. A. Wittek, S. Hartmann, A. Rausch\",\"doi\":\"10.3390/mca28040091\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multiscale FE2 computations enable the consideration of the micro-mechanical material structure in macroscopical simulations. However, these computations are very time-consuming because of numerous evaluations of a representative volume element, which represents the microstructure. In contrast, neural networks as machine learning methods are very fast to evaluate once they are trained. Even the DNN-FE2 approach is currently a known procedure, where deep neural networks (DNNs) are applied as a surrogate model of the representative volume element. In this contribution, however, a clear description of the algorithmic FE2 structure and the particular integration of deep neural networks are explained in detail. 
This comprises a suitable training strategy, where particular knowledge of the material behavior is considered to reduce the required amount of training data, a study of the amount of training data required for reliable FE2 simulations with special focus on the errors compared to conventional FE2 simulations, and the implementation aspect to gain considerable speed-up. As it is known, the Sobolev training and automatic differentiation increase data efficiency, prediction accuracy and speed-up in comparison to using two different neural networks for stress and tangent matrix prediction. To gain a significant speed-up of the FE2 computations, an efficient implementation of the trained neural network in a finite element code is provided. This is achieved by drawing on state-of-the-art high-performance computing libraries and just-in-time compilation yielding a maximum speed-up of a factor of more than 5000 compared to a reference FE2 computation. Moreover, the deep neural network surrogate model is able to overcome load-step size limitations of the RVE computations in step-size controlled computations.\",\"PeriodicalId\":53224,\"journal\":{\"name\":\"Mathematical & Computational Applications\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2023-08-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mathematical & Computational Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/mca28040091\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mathematical & Computational Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/mca28040091","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 2

Abstract

Multiscale FE2 computations enable the consideration of the micro-mechanical material structure in macroscopic simulations. However, these computations are very time-consuming because of the numerous evaluations of a representative volume element (RVE), which represents the microstructure. In contrast, neural networks, as machine learning methods, are very fast to evaluate once they are trained. The DNN-FE2 approach, in which deep neural networks (DNNs) serve as a surrogate model of the representative volume element, is already a known procedure. In this contribution, however, the algorithmic FE2 structure and the particular integration of deep neural networks are described in detail. This comprises a suitable training strategy, in which particular knowledge of the material behavior is exploited to reduce the required amount of training data; a study of the amount of training data required for reliable FE2 simulations, with special focus on the errors compared to conventional FE2 simulations; and the implementation aspects required to gain a considerable speed-up. Sobolev training and automatic differentiation increase data efficiency, prediction accuracy, and speed-up in comparison to using two different neural networks for stress and tangent-matrix prediction. To achieve a significant speed-up of the FE2 computations, an efficient implementation of the trained neural network in a finite element code is provided. This is accomplished by drawing on state-of-the-art high-performance computing libraries and just-in-time compilation, yielding a maximum speed-up of a factor of more than 5000 compared to a reference FE2 computation. Moreover, the deep neural network surrogate model is able to overcome the load-step size limitations of the RVE computations in step-size-controlled computations.
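To make the two ingredients named in the abstract concrete, Sobolev training with automatic differentiation and just-in-time compilation, here is a minimal, hypothetical sketch in JAX. It is not the authors' implementation, and all names (stress_net, sobolev_loss, evaluate_gauss_points) are illustrative assumptions: a small DNN maps a strain vector to a stress vector, automatic differentiation yields the consistent tangent from the same network, and the Sobolev loss penalizes both stress and tangent errors against RVE-generated training data.

```python
# Minimal sketch of Sobolev training of a stress surrogate in JAX.
# Illustrative only; strain/stress are assumed in Voigt notation, shape (6,).
import jax
import jax.numpy as jnp

def init_params(key, sizes=(6, 64, 64, 6)):
    """He-style random initialization of a small fully connected network."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (n_in, n_out)) * jnp.sqrt(2.0 / n_in)
        params.append((w, jnp.zeros(n_out)))
    return params

def stress_net(params, strain):
    """DNN surrogate of the RVE: strain (6,) -> stress (6,)."""
    x = strain
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

# Consistent tangent d(stress)/d(strain), shape (6, 6), via automatic
# differentiation; no second network for the tangent matrix is needed.
tangent_net = jax.jacfwd(stress_net, argnums=1)

def sobolev_loss(params, strains, stresses, tangents, alpha=1.0):
    """MSE on stresses plus a weighted MSE on the tangent matrices."""
    pred_s = jax.vmap(stress_net, in_axes=(None, 0))(params, strains)
    pred_c = jax.vmap(tangent_net, in_axes=(None, 0))(params, strains)
    return (jnp.mean((pred_s - stresses) ** 2)
            + alpha * jnp.mean((pred_c - tangents) ** 2))

@jax.jit  # just-in-time compilation of the whole training step
def update(params, strains, stresses, tangents, lr=1e-3):
    grads = jax.grad(sobolev_loss)(params, strains, stresses, tangents)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

@jax.jit  # one compiled call evaluates all macroscopic integration points
def evaluate_gauss_points(params, strains):
    """Batched stress and consistent tangent, replacing the RVE solves."""
    stress = jax.vmap(stress_net, in_axes=(None, 0))(params, strains)
    ct = jax.vmap(tangent_net, in_axes=(None, 0))(params, strains)
    return stress, ct
```

In the macroscopic Newton loop of an FE2-type computation, a single call to the jitted evaluate_gauss_points would return stresses and consistent tangents for all integration points at once, instead of solving one RVE boundary value problem per point; batching plus just-in-time compilation of this kind is the style of implementation behind the speed-ups the paper reports.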
Source Journal
Mathematical & Computational Applications
Self-citation rate: 10.50%
Articles published: 86
Review time: 12 weeks
Journal Description
Mathematical and Computational Applications (MCA) is devoted to original research in the fields of engineering, natural sciences, and social sciences where mathematical and/or computational techniques are necessary for solving specific problems. The aim of the journal is to provide a medium through which a wide range of experience can be exchanged among researchers from diverse fields such as engineering (electrical, mechanical, civil, industrial, aeronautical, nuclear, etc.), natural sciences (physics, mathematics, chemistry, biology, etc.), or social sciences (administrative sciences, economics, political sciences, etc.). Papers may be theoretical, where mathematics is used in a nontrivial way, computational, or a combination of both. Each submitted paper is reviewed, and only papers of the highest quality that contain original ideas and research will be published. Papers containing only experimental techniques or abstract mathematics without any sign of application are discouraged.
Latest Articles in This Journal
Asymptotic Behavior of Solutions to a Nonlinear Swelling Soil System with Time Delay and Variable Exponents
Exploring the Potential of Mixed Fourier Series in Signal Processing Applications Using One-Dimensional Smooth Closed-Form Functions with Compact Support: A Comprehensive Tutorial
Conservation Laws and Symmetry Reductions of the Hunter–Saxton Equation via the Double Reduction Method
FE2 Computations with Deep Neural Networks: Algorithmic Structure, Data Generation, and Implementation
A Computational Fluid Dynamics-Based Model for Assessing Rupture Risk in Cerebral Arteries with Varying Aneurysm Sizes