Floating-point arithmetic
Pub Date: 2023-05-01 | DOI: 10.1017/S0962492922000101
S. Boldo, C. Jeannerod, G. Melquiond, J.-M. Muller
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
{"title":"Floating-point arithmetic","authors":"S. Boldo, C. Jeannerod, G. Melquiond, Jean-Michel Muller","doi":"10.1017/S0962492922000101","DOIUrl":"https://doi.org/10.1017/S0962492922000101","url":null,"abstract":"Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"32 1","pages":"203 - 290"},"PeriodicalIF":14.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44272147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-rank tensor methods for partial differential equations
Pub Date: 2023-05-01 | DOI: 10.1017/S0962492922000125
M. Bachmayr
Low-rank tensor representations can provide highly compressed approximations of functions. These concepts, which essentially amount to generalizations of classical techniques of separation of variables, have proved to be particularly fruitful for functions of many variables. We focus here on problems where the target function is given only implicitly as the solution of a partial differential equation. A first natural question is under which conditions we should expect such solutions to be efficiently approximated in low-rank form. Due to the highly nonlinear nature of the resulting low-rank approximations, a crucial second question is at what expense such approximations can be computed in practice. This article surveys basic construction principles of numerical methods based on low-rank representations as well as the analysis of their convergence and computational complexity.
{"title":"Low-rank tensor methods for partial differential equations","authors":"M. Bachmayr","doi":"10.1017/S0962492922000125","DOIUrl":"https://doi.org/10.1017/S0962492922000125","url":null,"abstract":"Low-rank tensor representations can provide highly compressed approximations of functions. These concepts, which essentially amount to generalizations of classical techniques of separation of variables, have proved to be particularly fruitful for functions of many variables. We focus here on problems where the target function is given only implicitly as the solution of a partial differential equation. A first natural question is under which conditions we should expect such solutions to be efficiently approximated in low-rank form. Due to the highly nonlinear nature of the resulting low-rank approximations, a crucial second question is at what expense such approximations can be computed in practice. This article surveys basic construction principles of numerical methods based on low-rank representations as well as the analysis of their convergence and computational complexity.","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"32 1","pages":"1 - 121"},"PeriodicalIF":14.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49480468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overcoming the timescale barrier in molecular dynamics: Transfer operators, variational principles and machine learning
Pub Date: 2023-05-01 | DOI: 10.1017/S0962492923000016
C. Schütte, Stefan Klus, C. Hartmann
One of the main challenges in molecular dynamics is overcoming the ‘timescale barrier’: in many realistic molecular systems, biologically important rare transitions occur on timescales that are not accessible to direct numerical simulation, even on the largest or specifically dedicated supercomputers. This article discusses how to circumvent the timescale barrier by a collection of transfer operator-based techniques that have emerged from dynamical systems theory, numerical mathematics and machine learning over the last two decades. We will focus on how transfer operators can be used to approximate the dynamical behaviour on long timescales, review the introduction of this approach into molecular dynamics, and outline the respective theory, as well as the algorithmic development, from the early numerics-based methods, via variational reformulations, to modern data-based techniques utilizing and improving concepts from machine learning. Furthermore, its relation to rare event simulation techniques will be explained, revealing a broad equivalence of variational principles for long-time quantities in molecular dynamics. The article will mainly take a mathematical perspective and will leave the application to real-world molecular systems to the more than 1000 research articles already written on this subject.
{"title":"Overcoming the timescale barrier in molecular dynamics: Transfer operators, variational principles and machine learning","authors":"C. Schütte, Stefan Klus, C. Hartmann","doi":"10.1017/S0962492923000016","DOIUrl":"https://doi.org/10.1017/S0962492923000016","url":null,"abstract":"One of the main challenges in molecular dynamics is overcoming the ‘timescale barrier’: in many realistic molecular systems, biologically important rare transitions occur on timescales that are not accessible to direct numerical simulation, even on the largest or specifically dedicated supercomputers. This article discusses how to circumvent the timescale barrier by a collection of transfer operator-based techniques that have emerged from dynamical systems theory, numerical mathematics and machine learning over the last two decades. We will focus on how transfer operators can be used to approximate the dynamical behaviour on long timescales, review the introduction of this approach into molecular dynamics, and outline the respective theory, as well as the algorithmic development, from the early numerics-based methods, via variational reformulations, to modern data-based techniques utilizing and improving concepts from machine learning. Furthermore, its relation to rare event simulation techniques will be explained, revealing a broad equivalence of variational principles for long-time quantities in molecular dynamics. The article will mainly take a mathematical perspective and will leave the application to real-world molecular systems to the more than 1000 research articles already written on this subject.","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"32 1","pages":"517 - 673"},"PeriodicalIF":14.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44822524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ANU volume 32 Cover and Front matter
Pub Date: 2023-05-01 | DOI: 10.1017/s096249292300003x
L. Veiga, F. Brezzi, L. D. Marini, A. Russo, S. Boldo, C. Jeannerod, G. Melquiond, J. Muller, C. Cotter, L. Vandenberghe
{"title":"ANU volume 32 Cover and Front matter","authors":"L. Veiga, F. Brezzi, L. D. Marini, A. Russo, S. Boldo, C. Jeannerod, G. Melquiond, J. Muller, C. Cotter, L. Vandenberghe","doi":"10.1017/s096249292300003x","DOIUrl":"https://doi.org/10.1017/s096249292300003x","url":null,"abstract":"","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"32 1","pages":"f1 - f6"},"PeriodicalIF":14.2,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47121260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compatible finite element methods for geophysical fluid dynamics
Pub Date: 2023-02-26 | DOI: 10.48550/arXiv.2302.13337
C. Cotter
This article surveys research on the application of compatible finite element methods to large-scale atmosphere and ocean simulation. Compatible finite element methods extend Arakawa’s C-grid finite difference scheme to the finite element world. They are constructed from a discrete de Rham complex, which is a sequence of finite element spaces linked by the operators of differential calculus. The use of discrete de Rham complexes to solve partial differential equations is well established, but in this article we focus on the specifics of dynamical cores for simulating weather, oceans and climate. The most important consequence of the discrete de Rham complex is the Hodge–Helmholtz decomposition, which has been used to exclude the possibility of several types of spurious oscillations from linear equations of geophysical flow. This means that compatible finite element spaces provide a useful framework for building dynamical cores. In this article we introduce the main concepts of compatible finite element spaces, and discuss their wave propagation properties. We survey some methods for discretizing the transport terms that arise in dynamical core equation systems, and provide some example discretizations, briefly discussing their iterative solution. Then we focus on the recent use of compatible finite element spaces in designing structure preserving methods, surveying variational discretizations, Poisson bracket discretizations and consistent vorticity transport.
{"title":"Compatible finite element methods for geophysical fluid dynamics","authors":"C. Cotter","doi":"10.48550/arXiv.2302.13337","DOIUrl":"https://doi.org/10.48550/arXiv.2302.13337","url":null,"abstract":"This article surveys research on the application of compatible finite element methods to large-scale atmosphere and ocean simulation. Compatible finite element methods extend Arakawa’s C-grid finite difference scheme to the finite element world. They are constructed from a discrete de Rham complex, which is a sequence of finite element spaces linked by the operators of differential calculus. The use of discrete de Rham complexes to solve partial differential equations is well established, but in this article we focus on the specifics of dynamical cores for simulating weather, oceans and climate. The most important consequence of the discrete de Rham complex is the Hodge–Helmholtz decomposition, which has been used to exclude the possibility of several types of spurious oscillations from linear equations of geophysical flow. This means that compatible finite element spaces provide a useful framework for building dynamical cores. In this article we introduce the main concepts of compatible finite element spaces, and discuss their wave propagation properties. We survey some methods for discretizing the transport terms that arise in dynamical core equation systems, and provide some example discretizations, briefly discussing their iterative solution. Then we focus on the recent use of compatible finite element spaces in designing structure preserving methods, surveying variational discretizations, Poisson bracket discretizations and consistent vorticity transport.","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"32 1","pages":"291 - 393"},"PeriodicalIF":14.2,"publicationDate":"2023-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44672553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linear optimization over homogeneous matrix cones
Pub Date: 2022-11-01 | DOI: 10.1017/S0962492922000113
L. Tunçel, L. Vandenberghe
A convex cone is homogeneous if its automorphism group acts transitively on the interior of the cone. Cones that are homogeneous and self-dual are called symmetric. Conic optimization problems over symmetric cones have been extensively studied, particularly in the literature on interior-point algorithms, and as the foundation of modelling tools for convex optimization. In this paper we consider the less well-studied conic optimization problems over cones that are homogeneous but not necessarily self-dual. We start with cones of positive semidefinite symmetric matrices with a given sparsity pattern. Homogeneous cones in this class are characterized by nested block-arrow sparsity patterns, a subset of the chordal sparsity patterns. Chordal sparsity guarantees that positive definite matrices in the cone have zero-fill Cholesky factorizations. The stronger properties that make the cone homogeneous guarantee that the inverse Cholesky factors have the same zero-fill pattern. We describe transitive subsets of the cone automorphism groups, and important properties of the composition of log-det barriers with the automorphisms. Next, we consider extensions to linear slices of the positive semidefinite cone, and review conditions that make such cones homogeneous. An important example is the matrix norm cone, the epigraph of a quadratic-over-linear matrix function. The properties of homogeneous sparse matrix cones are shown to extend to this more general class of homogeneous matrix cones. We then give an overview of the algebraic theory of homogeneous cones due to Vinberg and Rothaus. A fundamental consequence of this theory is that every homogeneous cone admits a spectrahedral (linear matrix inequality) representation. We conclude by discussing the role of homogeneous structure in primal–dual symmetric interior-point methods, contrasting this with the well-developed algorithms for symmetric cones that exploit the strong properties of self-scaled barriers, and with symmetric primal–dual methods for general convex cones.
{"title":"Linear optimization over homogeneous matrix cones","authors":"L. Tunçel, L. Vandenberghe","doi":"10.1017/S0962492922000113","DOIUrl":"https://doi.org/10.1017/S0962492922000113","url":null,"abstract":"A convex cone is homogeneous if its automorphism group acts transitively on the interior of the cone. Cones that are homogeneous and self-dual are called symmetric. Conic optimization problems over symmetric cones have been extensively studied, particularly in the literature on interior-point algorithms, and as the foundation of modelling tools for convex optimization. In this paper we consider the less well-studied conic optimization problems over cones that are homogeneous but not necessarily self-dual. We start with cones of positive semidefinite symmetric matrices with a given sparsity pattern. Homogeneous cones in this class are characterized by nested block-arrow sparsity patterns, a subset of the chordal sparsity patterns. Chordal sparsity guarantees that positive define matrices in the cone have zero-fill Cholesky factorizations. The stronger properties that make the cone homogeneous guarantee that the inverse Cholesky factors have the same zero-fill pattern. We describe transitive subsets of the cone automorphism groups, and important properties of the composition of log-det barriers with the automorphisms. Next, we consider extensions to linear slices of the positive semidefinite cone, and review conditions that make such cones homogeneous. An important example is the matrix norm cone, the epigraph of a quadratic-over-linear matrix function. The properties of homogeneous sparse matrix cones are shown to extend to this more general class of homogeneous matrix cones. We then give an overview of the algebraic theory of homogeneous cones due to Vinberg and Rothaus. A fundamental consequence of this theory is that every homogeneous cone admits a spectrahedral (linear matrix inequality) representation. We conclude by discussing the role of homogeneous structure in primal–dual symmetric interior-point methods, contrasting this with the well-developed algorithms for symmetric cones that exploit the strong properties of self-scaled barriers, and with symmetric primal–dual methods for general convex cones.","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"32 1","pages":"675 - 747"},"PeriodicalIF":14.2,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45259328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed precision algorithms in numerical linear algebra
Pub Date: 2022-05-01 | DOI: 10.1017/S0962492922000022
N. Higham, Théo Mary
Today’s floating-point arithmetic landscape is broader than ever. While scientific computing has traditionally used single precision and double precision floating-point arithmetics, half precision is increasingly available in hardware and quadruple precision is supported in software. Lower precision arithmetic brings increased speed and reduced communication and energy costs, but it produces results of correspondingly low accuracy. Higher precisions are more expensive but can potentially provide great benefits, even if used sparingly. A variety of mixed precision algorithms have been developed that combine the superior performance of lower precisions with the better accuracy of higher precisions. Some of these algorithms aim to provide results of the same quality as algorithms running in a fixed precision but at a much lower cost; others use a little higher precision to improve the accuracy of an algorithm. This survey treats a broad range of mixed precision algorithms in numerical linear algebra, both direct and iterative, for problems including matrix multiplication, matrix factorization, linear systems, least squares, eigenvalue decomposition and singular value decomposition. We identify key algorithmic ideas, such as iterative refinement, adapting the precision to the data, and exploiting mixed precision block fused multiply–add operations. We also describe the possible performance benefits and explain what is known about the numerical stability of the algorithms. This survey should be useful to a wide community of researchers and practitioners who wish to develop or benefit from mixed precision numerical linear algebra algorithms.
{"title":"Mixed precision algorithms in numerical linear algebra","authors":"N. Higham, Théo Mary","doi":"10.1017/S0962492922000022","DOIUrl":"https://doi.org/10.1017/S0962492922000022","url":null,"abstract":"Today’s floating-point arithmetic landscape is broader than ever. While scientific computing has traditionally used single precision and double precision floating-point arithmetics, half precision is increasingly available in hardware and quadruple precision is supported in software. Lower precision arithmetic brings increased speed and reduced communication and energy costs, but it produces results of correspondingly low accuracy. Higher precisions are more expensive but can potentially provide great benefits, even if used sparingly. A variety of mixed precision algorithms have been developed that combine the superior performance of lower precisions with the better accuracy of higher precisions. Some of these algorithms aim to provide results of the same quality as algorithms running in a fixed precision but at a much lower cost; others use a little higher precision to improve the accuracy of an algorithm. This survey treats a broad range of mixed precision algorithms in numerical linear algebra, both direct and iterative, for problems including matrix multiplication, matrix factorization, linear systems, least squares, eigenvalue decomposition and singular value decomposition. We identify key algorithmic ideas, such as iterative refinement, adapting the precision to the data, and exploiting mixed precision block fused multiply–add operations. We also describe the possible performance benefits and explain what is known about the numerical stability of the algorithms. This survey should be useful to a wide community of researchers and practitioners who wish to develop or benefit from mixed precision numerical linear algebra algorithms.","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"31 1","pages":"347 - 414"},"PeriodicalIF":14.2,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48737304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ANU volume 31 Cover and Front matter
Pub Date: 2022-05-01 | DOI: 10.1017/s096249292200006x
M. Gander, Hui Zhang, Borjan Geshkovski, E. Zuazua, J. Hesthaven, C. Pagliantini, G. Rozza
{"title":"ANU volume 31 Cover and Front matter","authors":"M. Gander, Hui Zhang, Borjan Geshkovski, E. Zuazua, J. Hesthaven, C. Pagliantini, G. Rozza","doi":"10.1017/s096249292200006x","DOIUrl":"https://doi.org/10.1017/s096249292200006x","url":null,"abstract":"","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"31 1","pages":"f1 - f6"},"PeriodicalIF":14.2,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48690219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Schwarz methods by domain truncation
Pub Date: 2022-05-01 | DOI: 10.1017/S0962492922000034
M. Gander, Hui Zhang
Schwarz methods use a decomposition of the computational domain into subdomains and need to impose boundary conditions on the subdomain boundaries. In domain truncation one restricts the unbounded domain to a bounded computational domain and must also put boundary conditions on the computational domain boundaries. In both fields there are vast bodies of literature and research is very active and ongoing. It turns out to be fruitful to think of the domain decomposition in Schwarz methods as a truncation of the domain onto subdomains. Seminal precursors of this fundamental idea are papers by Hagstrom, Tewarson and Jazcilevich (1988), Després (1990) and Lions (1990). The first truly optimal Schwarz method that converges in a finite number of steps was proposed by Nataf (1993), and used precisely transparent boundary conditions as transmission conditions between subdomains. Approximating these transparent boundary conditions for fast convergence of Schwarz methods led to the development of optimized Schwarz methods – a name that has become common for Schwarz methods based on domain truncation. Compared to classical Schwarz methods, which use simple Dirichlet transmission conditions and have been successfully used in a wide range of applications, optimized Schwarz methods are much less well understood, mainly due to their more sophisticated transmission conditions. A key application of Schwarz methods with such sophisticated transmission conditions turned out to be time-harmonic wave propagation problems, because classical Schwarz methods simply do not work in this case. The past decade has given us many new Schwarz methods based on domain truncation. One review from an algorithmic perspective (Gander and Zhang 2019) showed the equivalence of many of these new methods to optimized Schwarz methods. The analysis of optimized Schwarz methods, however, is lagging behind their algorithmic development. The general abstract Schwarz framework cannot be used for the analysis of these methods, and thus there are many open theoretical questions about their convergence. Just as for practical multigrid methods, Fourier analysis has been instrumental for understanding the convergence of optimized Schwarz methods and for tuning their transmission conditions. Similar to local Fourier mode analysis in multigrid, the unbounded two-subdomain case is used as a model for Fourier analysis of optimized Schwarz methods due to its simplicity. Many aspects of the actual situation, e.g. boundary conditions of the original problem and the number of subdomains, were thus neglected in the unbounded two-subdomain analysis. While this gave important insight, new phenomena beyond the unbounded two-subdomain models were discovered. This present situation is the motivation for our survey: to give a comprehensive review and precise exploration of convergence behaviours of optimized Schwarz methods based on Fourier analysis, taking into account the original boundary conditions and many-subdomain decompositions.
{"title":"Schwarz methods by domain truncation","authors":"M. Gander, Hui Zhang","doi":"10.1017/S0962492922000034","DOIUrl":"https://doi.org/10.1017/S0962492922000034","url":null,"abstract":"Schwarz methods use a decomposition of the computational domain into subdomains and need to impose boundary conditions on the subdomain boundaries. In domain truncation one restricts the unbounded domain to a bounded computational domain and must also put boundary conditions on the computational domain boundaries. In both fields there are vast bodies of literature and research is very active and ongoing. It turns out to be fruitful to think of the domain decomposition in Schwarz methods as a truncation of the domain onto subdomains. Seminal precursors of this fundamental idea are papers by Hagstrom, Tewarson and Jazcilevich (1988), Després (1990) and Lions (1990). The first truly optimal Schwarz method that converges in a finite number of steps was proposed by Nataf (1993), and used precisely transparent boundary conditions as transmission conditions between subdomains. Approximating these transparent boundary conditions for fast convergence of Schwarz methods led to the development of optimized Schwarz methods – a name that has become common for Schwarz methods based on domain truncation. Compared to classical Schwarz methods, which use simple Dirichlet transmission conditions and have been successfully used in a wide range of applications, optimized Schwarz methods are much less well understood, mainly due to their more sophisticated transmission conditions. A key application of Schwarz methods with such sophisticated transmission conditions turned out to be time-harmonic wave propagation problems, because classical Schwarz methods simply do not work in this case. The past decade has given us many new Schwarz methods based on domain truncation. One review from an algorithmic perspective (Gander and Zhang 2019) showed the equivalence of many of these new methods to optimized Schwarz methods. The analysis of optimized Schwarz methods, however, is lagging behind their algorithmic development. The general abstract Schwarz framework cannot be used for the analysis of these methods, and thus there are many open theoretical questions about their convergence. Just as for practical multigrid methods, Fourier analysis has been instrumental for understanding the convergence of optimized Schwarz methods and for tuning their transmission conditions. Similar to local Fourier mode analysis in multigrid, the unbounded two-subdomain case is used as a model for Fourier analysis of optimized Schwarz methods due to its simplicity. Many aspects of the actual situation, e.g. boundary conditions of the original problem and the number of subdomains, were thus neglected in the unbounded two-subdomain analysis. While this gave important insight, new phenomena beyond the unbounded two-subdomain models were discovered. 
This present situation is the motivation for our survey: to give a comprehensive review and precise exploration of convergence behaviours of optimized Schwarz methods based on Fourier analysis, taking into account the original boundary conditions, many-subd","PeriodicalId":48863,"journal":{"name":"Acta Numerica","volume":"31 1","pages":"1 - 134"},"PeriodicalIF":14.2,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44512579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
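A minimal sketch of my own of the classical alternating Schwarz method on the 1D Poisson equation, with two overlapping subdomains and Dirichlet transmission conditions (the baseline that optimized Schwarz methods improve on by switching to Robin/absorbing transmission conditions); all grid parameters and the helper `solve_dirichlet` are illustrative:

```python
import numpy as np

# Alternating Schwarz for -u'' = f on (0, 1), u(0) = u(1) = 0, with two
# overlapping subdomains and Dirichlet transmission conditions.
n = 99                       # interior grid points
h = 1.0 / (n + 1)
f = np.ones(n)               # right-hand side f = 1

def solve_dirichlet(lo, hi, ul, ur):
    """Solve -u'' = f on grid points lo..hi with boundary values ul, ur."""
    m = hi - lo + 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f[lo:hi + 1].copy()
    rhs[0] += ul / h**2
    rhs[-1] += ur / h**2
    return np.linalg.solve(A, rhs)

u_ref = solve_dirichlet(0, n - 1, 0.0, 0.0)   # reference global solution

a, b = 39, 59                # subdomain 1: points 0..b; subdomain 2: points a..n-1
u = np.zeros(n)
for it in range(10):
    u[:b + 1] = solve_dirichlet(0, b, 0.0, u[b + 1])   # trace from subdomain 2
    u[a:] = solve_dirichlet(a, n - 1, u[a - 1], 0.0)   # trace from subdomain 1
    print(it, np.abs(u - u_ref).max())                 # geometric error reduction
```

The convergence rate depends on the overlap width (b - a); optimized transmission conditions remove that dependence, which is the subject of this survey.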