Selective laser sintering (SLS) of polymers represents a widely used additive manufacturing process, where the part quality depends highly on the prevailing thermal conditions. One distinct feature of SLS is the existence of separate temperature regions for melting and crystallization (solidification), and the process optimally operates within said regions. Typically, a crystallization model, such as the Nakamura model, is used to predict the degree of crystallization as a function of temperature and time. One limitation of this model is its inability to compute negative rates of the degree of crystallization during remelting. As we show in this work, an extension that permits such negative rates is necessary, given the varying temperature fields that appear in SLS. To this end, an extension is proposed and analyzed in detail. Furthermore, a dependence of the temperature and crystallization fields on the size of geometrical features is presented.
Dominic Soldner, Paul Steinmann, Julia Mergheim: "Modeling crystallization kinetics for selective laser sintering of polyamide 12." GAMM Mitteilungen 44(3), 2021-08-09. DOI: 10.1002/gamm.202100011
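The Nakamura model referenced in the abstract can be stated, in a standard non-isothermal formulation (generic symbols, not necessarily the authors' notation), as

\[
X(t) = 1 - \exp\!\left[-\left(\int_0^t K\bigl(T(s)\bigr)\,\mathrm{d}s\right)^{n}\right],
\qquad
\frac{\mathrm{d}X}{\mathrm{d}t} = n\,K(T)\,\bigl(1 - X\bigr)\bigl[-\ln(1 - X)\bigr]^{\frac{n-1}{n}},
\]

where \(X \in [0,1)\) is the relative degree of crystallization, \(n\) the Avrami exponent, and \(K(T) \geq 0\) a temperature-dependent rate function. Since \(K(T) \geq 0\), the rate \(\mathrm{d}X/\mathrm{d}t\) is non-negative for every temperature history, which is precisely the remelting limitation the paper addresses.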
Alexander Raßloff, Paul Schulz, Robert Kühne, Marreddy Ambati, Ilja Koch, André T. Zeuner, Maik Gude, Martina Zimmermann, Markus Kästner
Understanding structure–property (SP) relationships is essential for accelerating materials innovation. This is especially true for additive manufacturing (AM), which is still under active research and development, and in which process-induced imperfections like pores and microstructural variations significantly influence the material's properties. The present work therefore proposes an approach for accessing pore SP relationships in AM materials. For this purpose, crystal plasticity (CP) simulations on reconstructed domains based on experimental measurements are employed to allow for a microstructure-sensitive investigation. For the considered Ti–6Al–4V specimen manufactured by laser powder bed fusion, the microstructure and pore characteristics are obtained by utilizing light microscopy and X-ray computed tomography at the microscale. Employing suitable statistical analysis and reconstruction, statistical volume elements with reconstructed pore distributions are created. Using these, microscale CP simulations are performed to obtain fatigue-indicating parameters. Through a further statistical analysis, fatigue ranking parameters are derived for a comparison of different microstructures. Additionally, a comparison with the empirical Murakami square-root-area concept is made. Results from initial numerical studies underline the potential of the approach for understanding and improving AM materials.
"Accessing pore microstructure–property relationships for additively manufactured materials." GAMM Mitteilungen 44(4), 2021-08-08. DOI: 10.1002/gamm.202100012
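The Murakami square-root-area concept mentioned in the abstract relates the fatigue limit to Vickers hardness and the projected defect size. A minimal sketch of the standard empirical formula, sigma_w = C * (HV + 120) / (sqrt(area))^(1/6), with sqrt(area) in micrometers, sigma_w in MPa, and C depending on defect location; the numerical inputs below are purely illustrative, not values from the paper:

```python
import math

def murakami_fatigue_limit(hv: float, sqrt_area_um: float, location: str = "surface") -> float:
    """Estimate the fatigue limit (MPa, fully reversed loading) from Vickers
    hardness and defect size via Murakami's square-root-area concept."""
    c = {"surface": 1.43, "interior": 1.56}[location]
    return c * (hv + 120.0) / sqrt_area_um ** (1.0 / 6.0)

# Illustrative example: a 50 um surface pore in a material with HV = 320
print(round(murakami_fatigue_limit(320.0, 50.0), 1))
```

Interior defects yield a slightly higher estimate than surface defects of the same size, reflecting the larger empirical constant.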
Accurate models of mechanical system dynamics are often critical for model-based control and reinforcement learning. Fully data-driven dynamics models promise to ease the process of modeling and analysis, but require considerable amounts of data for training and often do not generalize well to unseen parts of the state space. Combining data-driven modeling with prior analytical knowledge is an attractive alternative as the inclusion of structural knowledge into a regression model improves the model's data efficiency and physical integrity. In this article, we survey supervised regression models that combine rigid-body mechanics with data-driven modeling techniques. We analyze the different latent functions (such as kinetic energy or dissipative forces) and operators (such as differential operators and projection matrices) underlying common descriptions of rigid-body mechanics. Based on this analysis, we provide a unified view on the combination of data-driven regression models, such as neural networks and Gaussian processes, with analytical model priors. Furthermore, we review and discuss key techniques for designing structured models such as automatic differentiation.
A. René Geist, Sebastian Trimpe: "Structured learning of rigid-body dynamics: A survey and unified view from a robotics perspective." GAMM Mitteilungen 44(2), 2021-06-09. DOI: 10.1002/gamm.202100009
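The combination of analytical rigid-body mechanics with data-driven components surveyed here can be sketched on a single pendulum: the analytical inverse dynamics supplies the physics prior, and a placeholder residual stands in for a learned model (e.g., a Gaussian process or neural network). All parameter values and the viscous-friction residual are illustrative assumptions, not the article's examples:

```python
import math

def pendulum_inverse_dynamics(q, dq, ddq, m=1.0, l=1.0, g=9.81):
    """Analytical inverse dynamics of a point-mass pendulum:
    tau = m*l^2*ddq + m*g*l*sin(q)."""
    return m * l**2 * ddq + m * g * l * math.sin(q)

def residual(dq, d=0.05):
    """Stand-in for a learned residual model; here simple viscous friction."""
    return d * dq

def hybrid_torque(q, dq, ddq):
    # Structured model: physics prior plus data-driven correction
    return pendulum_inverse_dynamics(q, dq, ddq) + residual(dq)

print(hybrid_torque(math.pi / 2, 1.0, 0.0))
```

The structural split mirrors the survey's theme: the prior guarantees physical consistency of the dominant terms, while the residual only has to capture unmodeled effects, improving data efficiency.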
We have already illustrated in the first issue [1] of this series that the emerging field of scientific machine learning is penetrating traditional fields within scientific computing and beyond. The second issue in this series is likewise devoted to demonstrating this rapid change. In this part of our special issue of the GAMM Mitteilungen, we continue the presentation of contributions on the topic of scientific machine learning in the context of complex applications across the sciences and engineering. We are pleased that again four teams of authors have accepted our invitation and now illustrate their insights into recent research highlights as well as point the reader to the relevant literature and software. The four papers in this second part of the special issue are:
Peter Benner, Axel Klawonn, Martin Stoll: "Topical issue scientific machine learning (2/2)." GAMM Mitteilungen 44(2), 2021-06-07. DOI: 10.1002/gamm.202100010
Neural networks are increasingly used to construct numerical solution methods for partial differential equations. In this expository review, we introduce and contrast three important recent approaches attractive in their simplicity and their suitability for high-dimensional problems: physics-informed neural networks, methods based on the Feynman–Kac formula, and methods based on the solution of backward stochastic differential equations. The article is accompanied by a suite of expository software in the form of Jupyter notebooks in which each basic methodology is explained step by step, allowing for quick assimilation and experimentation. An extensive bibliography summarizes the state of the art.
Jan Blechschmidt, Oliver G. Ernst: "Three ways to solve partial differential equations with neural networks — A review." GAMM Mitteilungen 44(2), 2021-05-28. DOI: 10.1002/gamm.202100006
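Of the three approaches reviewed, the Feynman–Kac route is the easiest to sketch in a few lines: for the heat equation u_t = (1/2) sigma^2 u_xx with u(x, 0) = g(x), the solution admits the stochastic representation u(x, t) = E[g(x + sigma * W_t)], which can be estimated by plain Monte Carlo. A minimal sketch, not the article's accompanying notebooks:

```python
import math
import random

def feynman_kac_heat(x, t, g, sigma=1.0, n_samples=200_000, seed=0):
    """Monte Carlo estimate of u(x, t) solving u_t = 0.5*sigma^2*u_xx,
    u(x, 0) = g(x), via Feynman-Kac: u(x, t) = E[g(x + sigma*W_t)]."""
    rng = random.Random(seed)
    s = math.sqrt(t)  # standard deviation of W_t
    total = 0.0
    for _ in range(n_samples):
        total += g(x + sigma * s * rng.gauss(0.0, 1.0))
    return total / n_samples

# For g(x) = x^2 the exact solution is u(x, t) = x^2 + sigma^2 * t
u = feynman_kac_heat(1.0, 0.5, lambda y: y * y)
print(u)  # close to the exact value 1.5
```

In the reviewed methods, a neural network regressor typically replaces the pointwise Monte Carlo average so that u can be evaluated everywhere; the representation formula itself is unchanged.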
Most modeling approaches fall into one of two categories: physics-based or data-driven. Recently, a third approach, combining these deterministic and statistical models, has been emerging for scientific applications. To leverage these developments, our aim in this perspective paper is centered around exploring several principal concepts to address the challenges of (i) trustworthiness and generalizability in developing data-driven models, to shed light on the fundamental trade-offs in their accuracy and efficiency, and (ii) seamless integration of interface learning and multifidelity coupling approaches that transfer and represent information between different entities, particularly when different scales are governed by different physics, each operating on a different level of abstraction. Addressing these challenges could enable the revolution of digital twin technologies for scientific and engineering applications.
Omer San, Adil Rasheed, Trond Kvamsdal: "Hybrid analysis and modeling, eclecticism, and multifidelity computing toward digital twin revolution." GAMM Mitteilungen 44(2), 2021-05-28. DOI: 10.1002/gamm.202100007
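One common multifidelity coupling pattern in this spirit is additive correction: a cheap low-fidelity model is corrected by a discrepancy term fitted to a handful of expensive high-fidelity samples. The sketch below uses purely illustrative models and sample points, not the paper's applications:

```python
import math

f_hi = lambda x: math.sin(2 * x) + 0.3 * x   # "expensive" truth (illustrative)
f_lo = lambda x: math.sin(2 * x)             # cheap surrogate missing a trend

# Fit a linear discrepancy delta(x) = a*x + b to a few high-fidelity samples
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ds = [f_hi(x) - f_lo(x) for x in xs]
n = len(xs)
sx, sy = sum(xs), sum(ds)
sxx = sum(x * x for x in xs)
sxy = sum(x * d for x, d in zip(xs, ds))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # least-squares slope
b = (sy - a * sx) / n                          # least-squares intercept

def f_mf(x):
    """Multifidelity prediction: low-fidelity model plus fitted correction."""
    return f_lo(x) + a * x + b

print(abs(f_mf(1.2) - f_hi(1.2)))  # small residual
```

In realistic settings the discrepancy model would be a richer regressor (e.g., a Gaussian process or neural network), but the interface between the fidelities is the same.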
Deep generative models (DGMs) are neural networks with many hidden layers trained to approximate complicated, high-dimensional probability distributions using samples. When trained successfully, we can use a DGM to estimate the likelihood of each observation and to create new samples from the underlying distribution. Developing DGMs has become one of the most hotly researched fields in artificial intelligence in recent years. The literature on DGMs has become vast and is growing rapidly. Some advances have even reached the public sphere, for example, the recent successes in generating realistic-looking images, voices, or movies, the so-called deep fakes. Despite these successes, several mathematical and practical issues limit the broader use of DGMs: given a specific dataset, it remains challenging to design and train a DGM and even more challenging to find out why a particular model is or is not effective. To help advance the theoretical understanding of DGMs, we introduce DGMs and provide a concise mathematical framework for modeling the three most popular approaches: normalizing flows, variational autoencoders, and generative adversarial networks. We illustrate the advantages and disadvantages of these basic approaches using numerical experiments. Our goal is to enable and motivate the reader to contribute to this proliferating research area. Our presentation also emphasizes relations between generative modeling and optimal transport.
Lars Ruthotto, Eldad Haber: "An introduction to deep generative modeling." GAMM Mitteilungen 44(2), 2021-05-28. DOI: 10.1002/gamm.202100008
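The change-of-variables identity underlying normalizing flows, log p_x(x) = log p_z(f^{-1}(x)) - log|det J_f|, can be illustrated with a one-dimensional affine flow. This is an illustrative sketch of the identity, not one of the paper's experiments:

```python
import math
import random

# One-dimensional affine "flow": x = a*z + b with latent z ~ N(0, 1).
a, b = 2.0, 1.0

def sample(rng):
    """Draw a sample by pushing a latent Gaussian through the flow."""
    return a * rng.gauss(0.0, 1.0) + b

def log_density(x):
    """Exact model log-density via the change-of-variables formula."""
    z = (x - b) / a                                  # inverse of the flow
    log_pz = -0.5 * z * z - 0.5 * math.log(2 * math.pi)
    return log_pz - math.log(abs(a))                 # minus log|det J| of forward map

rng = random.Random(0)
xs = [sample(rng) for _ in range(100_000)]
mean = sum(xs) / len(xs)
print(mean)  # close to b = 1.0
```

Real normalizing flows compose many such invertible maps with learned parameters; the tractable log-determinant is what makes exact likelihood training possible, in contrast to GANs and VAEs.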
This paper introduces a semi-discrete implicit Euler (SDIE) scheme for the Allen–Cahn equation (ACE) with fidelity forcing on graphs. The continuous-in-time version of this differential equation was pioneered by Bertozzi and Flenner in 2012 as a method for graph classification problems, such as semi-supervised learning and image segmentation. In 2013, Merkurjev et al. used a Merriman–Bence–Osher (MBO) scheme with fidelity forcing instead, as heuristically it was expected to give similar results to the ACE. The current paper rigorously establishes the graph MBO scheme with fidelity forcing as a special case of an SDIE scheme for the graph ACE with fidelity forcing. This connection requires the use of the double-obstacle potential in the ACE, as was already demonstrated by Budd and Van Gennip in 2020 in the context of the ACE without a fidelity forcing term. We also prove that solutions of the SDIE scheme converge to solutions of the graph ACE with fidelity forcing as the discrete time step converges to zero. In the second part of the paper, we develop the SDIE scheme into a classification algorithm. We also introduce some innovations into the algorithms for the SDIE and MBO schemes. For large graphs, we use a QR decomposition method to compute an eigendecomposition from a Nyström extension, which outperforms the method used by, for example, Bertozzi and Flenner in 2012 in accuracy, stability, and speed. Moreover, we replace the Euler discretization of the scheme's diffusion step by a computation based on the Strang formula for matrix exponentials. We apply this algorithm to a number of image segmentation problems and compare its performance with that of the graph MBO scheme with fidelity forcing. We find that while the general SDIE scheme does not perform better than the MBO special case at this task, our other innovations lead to a significantly better segmentation than that reported in previous literature. We also empirically quantify the uncertainty that this segmentation inherits from the randomness in the Nyström extension.
Jeremy Budd, Yves van Gennip, Jonas Latz: "Classification and image processing with a semi-discrete scheme for fidelity forced Allen–Cahn on graphs." GAMM Mitteilungen 44(1), 2021-03-17. DOI: 10.1002/gamm.202100004
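The graph MBO scheme with fidelity forcing analyzed in the paper alternates a short graph diffusion with pointwise thresholding. A toy sketch on a six-node graph with two clusters; the graph, parameters, and labels are illustrative assumptions, not the paper's setup:

```python
# Toy graph MBO with fidelity forcing: explicit Euler steps on
# du/dt = -L u - mu * chi * (u - f), then thresholding to labels +/-1.
# Two triangle clusters joined by a weak edge; one labeled node per cluster.
W = [[0, 1, 1, 0,   0, 0],
     [1, 0, 1, 0,   0, 0],
     [1, 1, 0, 0.1, 0, 0],
     [0, 0, 0.1, 0, 1, 1],
     [0, 0, 0,   1, 0, 1],
     [0, 0, 0,   1, 1, 0]]
n = len(W)
deg = [sum(row) for row in W]

fidelity = {0: 1.0, 3: -1.0}                 # known labels (chi = indicator of these nodes)
u = [fidelity.get(i, 0.0) for i in range(n)]

mu, dt, inner_steps = 5.0, 0.1, 10
for _ in range(20):                          # outer MBO iterations
    for _ in range(inner_steps):             # diffusion with fidelity forcing
        Lu = [deg[i] * u[i] - sum(W[i][j] * u[j] for j in range(n))
              for i in range(n)]
        u = [u[i] - dt * (Lu[i] + (mu * (u[i] - fidelity[i]) if i in fidelity else 0.0))
             for i in range(n)]
    u = [1.0 if v >= 0.0 else -1.0 for v in u]   # thresholding step

print(u)
```

The diffusion smears the known labels across strong edges while the fidelity term pins the labeled nodes; thresholding then snaps each node to a class, recovering the two clusters. The paper's algorithmic innovations (Nyström eigendecompositions, matrix exponentials via the Strang formula) accelerate exactly this diffusion step on large graphs.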
Scientific Machine Learning is a rapidly evolving field of research that combines and further develops techniques of scientific computing and machine learning. Special emphasis is given to the scientific (physical, chemical, biological, etc.) interpretability of models learned from data and their usefulness for robust predictions. On the other hand, this young field also investigates the use of machine learning methods for improving numerical algorithms in scientific computing. The name Scientific Machine Learning was coined at a Basic Research Needs Workshop of the US Department of Energy (DOE) in January 2018, which resulted in a report [2] published in February 2019; see also [1] for a short brochure on this topic. The present special issue of the GAMM Mitteilungen, the first of a two-part series, contains contributions on the topic of Scientific Machine Learning in the context of complex applications across the sciences and engineering. Research in this exciting new field needs to address challenges such as complex physics, uncertain parameters, and possibly limited data through the development of new methods that combine algorithms from computational science and engineering and from numerical analysis with state-of-the-art techniques from machine learning. At the GAMM Annual Meeting 2019, the activity group Computational and Mathematical Methods in Data Science (CoMinDS) was established. It has since become a meeting place for researchers interested in all aspects of data science. All three editors of this special issue are founding members of this activity group. Because of the rapid development both in the theoretical foundations and the applicability of Scientific Machine Learning techniques, it is time to highlight developments within the field, in the hope that it will become an essential domain within the GAMM and that topical issues like this one will appear frequently in this journal.
We are happy that eight teams of authors have accepted our invitation to report on recent research highlights in Scientific Machine Learning and to point out the relevant literature as well as software. The four papers in this first part of the special issue are:
• Stoll, Benner: Machine Learning for Material Characterization with an Application for Predicting Mechanical Properties. This work explores the use of machine learning techniques for material property prediction. Given the abundance of data available in industrial applications, machine learning methods can help find patterns in the data; the authors focus on the case of the small punch test and tensile data for illustration purposes.
• Beck, Kurz: A Perspective on Machine Learning Methods in Turbulence Modelling. Turbulence modelling remains an enormous challenge in the simulation and analysis of complex flows. The authors review the use of data-driven techniques to open up new ways of studying turbulence and focus on the challenges and opportunities.
Peter Benner, Axel Klawonn, Martin Stoll: "Topical Issue Scientific Machine Learning (1/2)." GAMM Mitteilungen 44(1), 2021-03-17. DOI: 10.1002/gamm.202100005
Alexander Heinlein, Axel Klawonn, Martin Lanser, Janine Weber
Scientific machine learning (SciML), an area of research where techniques from machine learning and scientific computing are combined, has become increasingly important and is receiving growing attention. Here, our focus is on a very specific area within SciML given by the combination of domain decomposition methods (DDMs) with machine learning techniques for the solution of partial differential equations. The present work aims to provide a review of existing and new approaches in this field and to present some known results in a unified framework; no claim of completeness is made. As a concrete example of machine learning enhanced DDMs, an approach is presented which uses neural networks to reduce the computational effort in adaptive DDMs while retaining their robustness. More precisely, deep neural networks are used to predict the geometric location of constraints which are needed to define a robust coarse space. Additionally, two recently published deep domain decomposition approaches are presented in a unified framework. Both approaches use physics-constrained neural networks to replace the discretization and solution of the subdomain problems of a given decomposition of the computational domain. Finally, a brief overview is given of several further approaches which combine machine learning with ideas from DDMs to either increase the performance of already existing algorithms or to create completely new methods.
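The classical building block behind the methods surveyed above is a domain decomposition iteration, in which overlapping subdomain problems are solved alternately and coupled through their interface values. A minimal sketch for a one-dimensional Poisson problem is given below; the grid size, overlap region, and right-hand side f = 1 are illustrative choices, not taken from the reviewed works.

```python
import numpy as np

def solve_poisson_dirichlet(x, f, ua, ub):
    """Solve -u'' = f on the uniform grid x with Dirichlet values ua, ub,
    using the standard second-order finite-difference discretization."""
    n = len(x)
    h = x[1] - x[0]
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, 2.0)           # main diagonal
    np.fill_diagonal(A[1:], -1.0)      # sub-diagonal
    np.fill_diagonal(A[:, 1:], -1.0)   # super-diagonal
    b = h**2 * f[1:-1].copy()
    b[0] += ua                         # fold boundary values into the rhs
    b[-1] += ub
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, b)
    return u

# Global problem: -u'' = 1 on [0, 1], u(0) = u(1) = 0,
# with exact solution u(x) = x(1 - x)/2.
N = 101
x = np.linspace(0.0, 1.0, N)
f = np.ones(N)
u = np.zeros(N)                        # initial guess satisfying the BCs

# Two overlapping subdomains: [0, x[60]] and [x[40], 1].
i_l, i_r = 60, 40
for _ in range(30):                    # alternating Schwarz sweeps
    # left solve takes its right boundary value from the current iterate,
    # right solve takes its left boundary value from the updated iterate
    u[:i_l + 1] = solve_poisson_dirichlet(x[:i_l + 1], f[:i_l + 1], 0.0, u[i_l])
    u[i_r:] = solve_poisson_dirichlet(x[i_r:], f[i_r:], u[i_r], 0.0)

err = np.max(np.abs(u - x * (1 - x) / 2))
```

The deep domain decomposition approaches mentioned in the abstract replace the subdomain solver (here `solve_poisson_dirichlet`) by a physics-constrained neural network, while the outer coupling loop retains this alternating structure.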
{"title":"Combining machine learning and domain decomposition methods for the solution of partial differential equations—A review","authors":"Alexander Heinlein, Axel Klawonn, Martin Lanser, Janine Weber","doi":"10.1002/gamm.202100001","DOIUrl":"10.1002/gamm.202100001","url":null,"abstract":"<p>Scientific machine learning (SciML), an area of research where techniques from machine learning and scientific computing are combined, has become increasingly important and is receiving growing attention. Here, our focus is on a very specific area within SciML given by the combination of domain decomposition methods (DDMs) with machine learning techniques for the solution of partial differential equations. The present work aims to provide a review of existing and new approaches in this field and to present some known results in a unified framework; no claim of completeness is made. As a concrete example of machine learning enhanced DDMs, an approach is presented which uses neural networks to reduce the computational effort in adaptive DDMs while retaining their robustness. More precisely, deep neural networks are used to predict the geometric location of constraints which are needed to define a robust coarse space. Additionally, two recently published deep domain decomposition approaches are presented in a unified framework. Both approaches use physics-constrained neural networks to replace the discretization and solution of the subdomain problems of a given decomposition of the computational domain. 
Finally, a brief overview is given of several further approaches which combine machine learning with ideas from DDMs to either increase the performance of already existing algorithms or to create completely new methods.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202100001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89787405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}