This paper introduces a semi-discrete implicit Euler (SDIE) scheme for the Allen–Cahn equation (ACE) with fidelity forcing on graphs. The continuous-in-time version of this differential equation was pioneered by Bertozzi and Flenner in 2012 as a method for graph classification problems, such as semi-supervised learning and image segmentation. In 2013, Merkurjev et al. used a Merriman–Bence–Osher (MBO) scheme with fidelity forcing instead, as heuristically it was expected to give similar results to the ACE. The current paper rigorously establishes the graph MBO scheme with fidelity forcing as a special case of an SDIE scheme for the graph ACE with fidelity forcing. This connection requires the use of the double-obstacle potential in the ACE, as was already demonstrated by Budd and van Gennip in 2020 in the context of the ACE without a fidelity forcing term. We also prove that solutions of the SDIE scheme converge to solutions of the graph ACE with fidelity forcing as the discrete time step converges to zero. In the second part of the paper we develop the SDIE scheme into a classification algorithm and introduce several algorithmic innovations for the SDIE and MBO schemes. For large graphs, we use a QR decomposition method to compute an eigendecomposition from a Nyström extension, which outperforms the method used by, for example, Bertozzi and Flenner in 2012 in accuracy, stability, and speed. Moreover, we replace the Euler discretization of the scheme's diffusion step by a computation based on the Strang formula for matrix exponentials. We apply this algorithm to a number of image segmentation problems and compare its performance with that of the graph MBO scheme with fidelity forcing. We find that, while the general SDIE scheme does not perform better than the MBO special case at this task, our other innovations lead to a significantly better segmentation than that reported in previous literature. We also empirically quantify the uncertainty that this segmentation inherits from the randomness in the Nyström extension.
{"title":"Classification and image processing with a semi-discrete scheme for fidelity forced Allen–Cahn on graphs","authors":"Jeremy Budd, Yves van Gennip, Jonas Latz","doi":"10.1002/gamm.202100004","DOIUrl":"10.1002/gamm.202100004","url":null,"abstract":"<p>This paper introduces a semi-discrete implicit Euler (SDIE) scheme for the Allen-Cahn equation (ACE) with fidelity forcing on graphs. The continuous-in-time version of this differential equation was pioneered by Bertozzi and Flenner in 2012 as a method for graph classification problems, such as semi-supervised learning and image segmentation. In 2013, Merkurjev et. al. used a Merriman-Bence-Osher (MBO) scheme with fidelity forcing instead, as heuristically it was expected to give similar results to the ACE. The current paper rigorously establishes the graph MBO scheme with fidelity forcing as a special case of an SDIE scheme for the graph ACE with fidelity forcing. This connection requires the use of the double-obstacle potential in the ACE, as was already demonstrated by Budd and Van Gennip in 2020 in the context of ACE without a fidelity forcing term. We also prove that solutions of the SDIE scheme converge to solutions of the graph ACE with fidelity forcing as the discrete time step converges to zero. In the second part of the paper we develop the SDIE scheme as a classification algorithm. We also introduce some innovations into the algorithms for the SDIE and MBO schemes. For large graphs, we use a QR decomposition method to compute an eigendecomposition from a Nyström extension, which outperforms the method used by, for example, Bertozzi and Flenner in 2012, in accuracy, stability, and speed. Moreover, we replace the Euler discretization for the scheme's diffusion step by a computation based on the Strang formula for matrix exponentials. We apply this algorithm to a number of image segmentation problems, and compare the performance with that of the graph MBO scheme with fidelity forcing. We find that while the general SDIE scheme does not perform better than the MBO special case at this task, our other innovations lead to a significantly better segmentation than that from previous literature. We also empirically quantify the uncertainty that this segmentation inherits from the randomness in the Nyström extension.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202100004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90961049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scientific Machine Learning is a rapidly evolving field of research that combines and further develops techniques from scientific computing and machine learning. Special emphasis is given to the scientific (physical, chemical, biological, etc.) interpretability of models learned from data and their usefulness for robust predictions. On the other hand, this young field also investigates the use of machine learning methods to improve numerical algorithms in scientific computing. The name Scientific Machine Learning was coined at a Basic Research Needs Workshop of the US Department of Energy (DOE) in January 2018, which resulted in a report [2] published in February 2019; see also [1] for a short brochure on this topic.

The present special issue of the GAMM Mitteilungen, which is the first of a two-part series, contains contributions on the topic of Scientific Machine Learning in the context of complex applications across the sciences and engineering. Research in this exciting new field needs to address challenges such as complex physics, uncertain parameters, and possibly limited data through the development of new methods that combine algorithms from computational science and engineering and from numerical analysis with state-of-the-art techniques from machine learning. At the GAMM Annual Meeting 2019, the activity group Computational and Mathematical Methods in Data Science (CoMinDS) was established; it has since become a meeting place for researchers interested in all aspects of data science. All three editors of this special issue are founding members of this activity group. Because of the rapid development in both the theoretical foundations and the applicability of Scientific Machine Learning techniques, it is time to highlight developments within the field, in the hope that it will become an essential domain within the GAMM and that topical issues like this one will appear regularly in this journal. We are happy that eight teams of authors have accepted our invitation to report on recent research highlights in Scientific Machine Learning and to point out the relevant literature as well as software. The four papers in this first part of the special issue are:

• Stoll, Benner: Machine Learning for Material Characterization with an Application for Predicting Mechanical Properties. This work explores the use of machine learning techniques for material property prediction. Given the abundance of data available in industrial applications, machine learning methods can help find patterns in the data; the authors focus on the small punch test and tensile data for illustration purposes.

• Beck, Kurz: A Perspective on Machine Learning Methods in Turbulence Modeling. Turbulence modelling remains a formidable challenge in the simulation and analysis of complex flows. The authors review the use of data-driven techniques to open up new ways of studying turbulence and focus on the challenges and opportunities.
{"title":"Topical Issue Scientific Machine Learning (1/2)","authors":"Peter Benner, Axel Klawonn, Martin Stoll","doi":"10.1002/gamm.202100005","DOIUrl":"10.1002/gamm.202100005","url":null,"abstract":"Scientific Machine Learning is a rapidly evolving field of research that combines and further develops techniques of scientific computing and machine learning. Special emphasis is given to the scientific (physical, chemical, biological, etc.) interpretability of models learned from data and their usefulness for robust predictions. On the other hand, this young field also investigates the utilization of Machine Learning methods for improving numerical algorithms in Scientific Computing. The name Scientific Machine Learning has been coined at a Basic Research Needs Workshop of the US Department of Energy (DOE) in January, 2018. It resulted in a report [2] published in February, 2019; see also [1] for a short brochure on this topic. The present special issue of the GAMM Mitteilungen, which is the first of a two-part series, contains contributions on the topic of Scientific Machine Learning in the context of complex applications across the sciences and engineering. Research in this new exciting field needs to address challenges such as complex physics, uncertain parameters, and possibly limited data through the development of new methods that combine algorithms from computational science and engineering and from numerical analysis with state of the art techniques from machine learning. At the GAMM Annual Meeting 2019, the activity group Computational and Mathematical Methods in Data Science (CoMinDS) has been established. Meanwhile, it has become a meeting place for researchers interested in all aspects of data science. All three editors of this special issue are founding members of this activity group. Because of the rapid development both in the theoretical foundations and the applicability of Scientific Machine Learning techniques, it is time to highlight developments within the field in the hope that it will become an essential domain within the GAMM and topical issues like this will have a frequent occurrence within this journal. We are happy that eight teams of authors have accepted our invitation to report on recent research highlights in Scientific Machine Learning, and to point out the relevant literature as well as software. The four papers in this first part of the special issue are: • Stoll, Benner: Machine Learning for Material Characterization with an Application for Predicting Mechanical Properties. This work explores the use of machine learning techniques for material property prediction. Given the abundance of data available in industrial applications, machine learning methods can help finding patterns in the data and the authors focus on the case of the small punch test and tensile data for illustration purposes. • Beck, Kurz: A Perspective on Machine Modelling Learning Methods in Turbulence. Turbulence modelling remains a humongous challenge in the simulation and analysis of complex flows. 
The authors review the use of data-driven techniques to open up new ways for studying turbulence and focus on the challenges and opportunities t","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202100005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78463089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexander Heinlein, Axel Klawonn, Martin Lanser, Janine Weber
Scientific machine learning (SciML), an area of research in which techniques from machine learning and scientific computing are combined, has become increasingly important and receives growing attention. Here, our focus is on a very specific area within SciML: the combination of domain decomposition methods (DDMs) with machine learning techniques for the solution of partial differential equations. The aim of the present work is to provide a review of existing and new approaches within this field and to present some known results in a unified framework; no claim of completeness is made. As a concrete example of machine learning enhanced DDMs, an approach is presented which uses neural networks to reduce the computational effort in adaptive DDMs while retaining their robustness. More precisely, deep neural networks are used to predict the geometric location of the constraints which are needed to define a robust coarse space. Additionally, two recently published deep domain decomposition approaches are presented in a unified framework. Both approaches use physics-constrained neural networks to replace the discretization and solution of the subdomain problems of a given decomposition of the computational domain. Finally, a brief overview is given of several further approaches which combine machine learning with ideas from DDMs, either to increase the performance of existing algorithms or to create completely new methods.
{"title":"Combining machine learning and domain decomposition methods for the solution of partial differential equations—A review","authors":"Alexander Heinlein, Axel Klawonn, Martin Lanser, Janine Weber","doi":"10.1002/gamm.202100001","DOIUrl":"10.1002/gamm.202100001","url":null,"abstract":"<p>Scientific machine learning (SciML), an area of research where techniques from machine learning and scientific computing are combined, has become of increasing importance and receives growing attention. Here, our focus is on a very specific area within SciML given by the combination of domain decomposition methods (DDMs) with machine learning techniques for the solution of partial differential equations. The aim of the present work is to make an attempt of providing a review of existing and also new approaches within this field as well as to present some known results in a unified framework; no claim of completeness is made. As a concrete example of machine learning enhanced DDMs, an approach is presented which uses neural networks to reduce the computational effort in adaptive DDMs while retaining their robustness. More precisely, deep neural networks are used to predict the geometric location of constraints which are needed to define a robust coarse space. Additionally, two recently published deep domain decomposition approaches are presented in a unified framework. Both approaches use physics-constrained neural networks to replace the discretization and solution of the subdomain problems of a given decomposition of the computational domain. Finally, a brief overview is given of several further approaches which combine machine learning with ideas from DDMs to either increase the performance of already existing algorithms or to create completely new methods.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202100001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89787405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work presents a review of the current state of research in data-driven turbulence closure modeling. It offers a perspective on the challenges and open issues, but also on the advantages and promises of machine learning (ML) methods applied to parameter estimation, model identification, closure term reconstruction, and beyond, mostly from the perspective of large eddy simulation and related techniques. We stress that consistency of the training data, the model, the underlying physics, and the discretization is a key issue that needs to be considered for a successful ML-augmented modeling strategy. In order to make the discussion useful for non-experts in either field, we introduce both the modeling problem in turbulence and the prominent ML paradigms and methods in a concise and self-consistent manner. We survey the current data-driven model concepts and methods, highlight important developments, and put them into the context of the discussed challenges.
{"title":"A perspective on machine learning methods in turbulence modeling","authors":"Andrea Beck, Marius Kurz","doi":"10.1002/gamm.202100002","DOIUrl":"10.1002/gamm.202100002","url":null,"abstract":"<p>This work presents a review of the current state of research in data-driven turbulence closure modeling. It offers a perspective on the challenges and open issues but also on the advantages and promises of machine learning (ML) methods applied to parameter estimation, model identification, closure term reconstruction, and beyond, mostly from the perspective of large Eddy simulation and related techniques. We stress that consistency of the training data, the model, the underlying physics, and the discretization is a key issue that needs to be considered for a successful ML-augmented modeling strategy. In order to make the discussion useful for non-experts in either field, we introduce both the modeling problem in turbulence as well as the prominent ML paradigms and methods in a concise and self-consistent manner. In this study, we present a survey of the current data-driven model concepts and methods, highlight important developments, and put them into the context of the discussed challenges.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202100002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76272585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Currently, the volume of material data from experiments and simulations is growing beyond processable amounts. This makes the development of new data-driven methods for the discovery of patterns across multiple length scales and time scales, and of structure-property relationships, essential. These data-driven approaches show enormous promise within materials science. The following review covers machine learning (ML) applications for metallic material characterization. Many parameters associated with the processing and structure of materials affect the properties and performance of manufactured components. Thus, this study investigates the usefulness of ML methods for material property prediction. Material characteristics such as strength, toughness, hardness, brittleness, or ductility are relevant for categorizing a material or component according to its quality. In industry, material tests such as tensile tests, compression tests, or creep tests are often time consuming and expensive to perform. Therefore, the application of ML approaches is considered helpful for an easier generation of material property information. This study also presents an application of ML methods to small punch test (SPT) data for determining the ultimate tensile strength of various materials. A strong correlation between SPT data and tensile test data was found, which ultimately allows more costly tests to be replaced by simple and fast tests in combination with ML.
{"title":"Machine learning for material characterization with an application for predicting mechanical properties","authors":"Anke Stoll, Peter Benner","doi":"10.1002/gamm.202100003","DOIUrl":"10.1002/gamm.202100003","url":null,"abstract":"<p>Currently, the growth of material data from experiments and simulations is expanding beyond processable amounts. This makes the development of new data-driven methods for the discovery of patterns among multiple lengthscales and time-scales and structure-property relationships essential. These data-driven approaches show enormous promise within materials science. The following review covers machine learning (ML) applications for metallic material characterization. Many parameters associated with the processing and the structure of materials affect the properties and the performance of manufactured components. Thus, this study is an attempt to investigate the usefulness of ML methods for material property prediction. Material characteristics such as strength, toughness, hardness, brittleness, or ductility are relevant to categorize a material or component according to their quality. In industry, material tests like tensile tests, compression tests, or creep tests are often time consuming and expensive to perform. Therefore, the application of ML approaches is considered helpful for an easier generation of material property information. This study also gives an application of ML methods on small punch test (SPT) data for the determination of the property ultimate tensile strength for various materials. A strong correlation between SPT data and tensile test data was found which ultimately allows to replace more costly tests by simple and fast tests in combination with ML.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202100003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74302427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The present special issue of the GAMM Mitteilungen, which is the second of a two-part series, contains contributions on the topic of Applied and Numerical Linear Algebra, compiled by the GAMM Activity Group of the same name. The Activity Group has already contributed special issues to the GAMM Mitteilungen in 2004, 2006, and 2013. Because of the rapid development both in the theoretical foundations and the applicability of numerical linear algebra techniques throughout science and engineering, it is time again to survey the field and present the results to the readers of the GAMM Mitteilungen. We are happy that eight authors or teams of authors have accepted our invitation to report on recent research highlights in Applied Numerical Linear Algebra, and to point out the relevant literature as well as software.
This work by Federico Poloni reviews a family of algorithms for Lyapunov- and Riccati-type equations which are all related by the idea of doubling. The algorithms are compared and their connections are highlighted. The paper also discusses open problems relating to their theory.
{"title":"Topical Issue Applied and Numerical Linear Algebra (2/2)","authors":"Stefan Güttel, Jörg Liesen","doi":"10.1002/gamm.202000021","DOIUrl":"10.1002/gamm.202000021","url":null,"abstract":"<p>The present special issue of the GAMM Mitteilungen, which is the second of a two-part series, contains contributions on the topic of Applied and Numerical Linear Algebra, compiled by the GAMM Activity Group of the same name. The Activity Group has already contributed special issues to the GAMM Mitteilungen in 2004, 2006, and 2013. Because of the rapid development both in the theoretical foundations and the applicability of numerical linear algebra techniques throughout science and engineering, it is time again to survey the field and present the results to the readers of the GAMM Mitteilungen. We are happy that eight authors or teams of authors have accepted our invitation to report on recent research highlights in Applied Numerical Linear Algebra, and to point out the relevant literature as well as software.</p><p>This work by Federico Poloni reviews a family of algorithms for Lyapunov- and Riccati-type equations which are all related by the idea of doubling. The algorithms are compared and their connections are highlighted. The paper also discusses open problems relating to their theory.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202000021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76920951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When simulating a mechanism from science or engineering, or an industrial process, one is frequently required to construct a mathematical model and then solve this model numerically. If accurate numerical solutions are necessary or desirable, this can involve solving large-scale systems of equations. One major class of solution methods is that of preconditioned iterative methods, involving preconditioners which are computationally cheap to apply while also capturing information contained in the linear system. In this article, we give a short survey of the field of preconditioning. We introduce a range of preconditioners for partial differential equations, followed by optimization problems, before discussing preconditioners constructed with less standard objectives in mind.
{"title":"Preconditioners for Krylov subspace methods: An overview","authors":"John W. Pearson, Jennifer Pestana","doi":"10.1002/gamm.202000015","DOIUrl":"10.1002/gamm.202000015","url":null,"abstract":"<p>When simulating a mechanism from science or engineering, or an industrial process, one is frequently required to construct a mathematical model, and then resolve this model numerically. If accurate numerical solutions are necessary or desirable, this can involve solving large-scale systems of equations. One major class of solution methods is that of preconditioned iterative methods, involving preconditioners which are computationally cheap to apply while also capturing information contained in the linear system. In this article, we give a short survey of the field of preconditioning. We introduce a range of preconditioners for partial differential equations, followed by optimization problems, before discussing preconditioners constructed with less standard objectives in mind.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202000015","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73816556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We review a family of algorithms for Lyapunov- and Riccati-type equations which are all related to each other by the idea of doubling: they construct the iterate