Education, by Hélène Frankowska (doi:10.1137/24n975906)
SIAM Review, Volume 66, Issue 2, Page 353-353, May 2024. In this issue the Education section presents two contributions. The first paper, “The Poincaré Metric and the Bergman Theory,” by Steven G. Krantz, discusses the Poincaré metric on the unit disc in the complex plane and the Bergman metric on an arbitrary domain in complex space of any dimension. The notion of the Bergman kernel is crucial for defining the Bergman metric. Some striking properties of the Bergman kernel are discussed briefly, and the kernel is calculated for the case where the domain is the open unit ball. The Bergman metric is invariant under biholomorphic maps. The paper ends by discussing several attractive applications. To incorporate invariance within models in applied science, in particular for machine learning applications, there is currently considerable interest in non-Euclidean metrics, in metrics invariant under some group actions, and in reproducing kernels, mostly in the real-valued framework. The Bergman theory (1921) is a special case of Aronszajn's theory of Hilbert spaces with reproducing kernels (1950). Invariant metrics are used, in particular, in the study of partial differential equations. Complex-valued kernels have some interesting connections to linear systems theory. This article sheds new light on the Poincaré metric, the Bergman kernel, the Bergman metric, and their applications in a manner that helps the reader become acquainted with these notions and enjoy their properties. The second paper, “Dynamics of Signaling Games,” by Hannelore De Silva and Karl Sigmund, is devoted to much-studied types of interactions with incomplete information, analyzing them by means of evolutionary game dynamics. Game theory is often encountered in models describing economic, social, and biological behavior, where decisions are shaped not only by rational arguments but also by other factors and players; however, such models are often restricted to an analysis of equilibria. In signaling games some agents are less informed than others and try to compensate by observing the actions (signals) of better-informed agents. Such signals may even be purposely wrong. This article offers a concise guided tour of the outcomes of evolutionary dynamics in a number of low-dimensional signaling games, focusing on the replicator dynamics, the best-reply dynamics, and the adaptive dynamics (dynamics of behavioral strategies whose vector field follows the gradient of the payoff vector). Furthermore, the authors compare these dynamics within a model of evolving populations of players. Several interesting examples illustrate that even simple adaptation processes can lead to nonequilibrium outcomes and endless cycling; a generic sketch of such cycling appears below. This tutorial is targeted at graduate/Ph.D. students and researchers who know the basics of game theory and want to learn examples of signaling games together with evolutionary game theory.
{"title":"Education","authors":"Hélène Frankowska","doi":"10.1137/24n975906","DOIUrl":"https://doi.org/10.1137/24n975906","url":null,"abstract":"SIAM Review, Volume 66, Issue 2, Page 353-353, May 2024. <br/> In this issue the Education section presents two contributions. The first paper, “The Poincaré Metric and the Bergman Theory,” by Steven G. Krantz, discusses the Poincaré metric on the unit disc in the complex space and the Bergman metric on an arbitrary domain in any dimensional complex space. To define the Bergman metric the notion of Bergman kernel is crucial. Some striking properties of the Bergman kernel are discussed briefly, and it is calculated when the domain is the open unit ball. The Bergman metric is invariant under biholomorphic maps. The paper ends by discussing several attractive applications. To incorporate invariance within models in applied science, in particular for machine learning applications, there is currently a considerable interest in non-Euclidean metrics, in invariant (under some actions) metrics, and in reproducing kernels, mostly in the real-valued framework. The Bergman theory (1921) is a special case of Aronszajn's theory of Hilbert spaces with reproducing kernels (1950). Invariant metrics are used, in particular, in the study of partial differential equations. Complex-valued kernels have some interesting connections to linear systems theory. This article sheds some new light on the Poincaré metric, the Bergman kernel, the Bergman metric, and their applications in a manner that helps the reader become accustomed to these notions and to enjoy their properties. The second paper, “Dynamics of Signaling Games,” is presented by Hannelore De Silva and Karl Sigmund and is devoted to much-studied types of interactions with incomplete information, analyzing them by means of evolutionary game dynamics. Game theory is often encountered in models describing economic, social, and biological behavior, where decisions can not only be shaped by rational arguments, but may also be influenced by other factors and players. However, it is often restricted to an analysis of equilibria. In signaling games some agents are less informed than others and try to deal with it by observing actions (signals) from better informed agents. Such signals may be even purposely wrong. This article offers a concise guided tour of outcomes of evolutionary dynamics in a number of small dimensional signaling games focusing on the replicator dynamics, the best-reply dynamics, and the adaptive dynamics (dynamics of behavioral strategies whose vector field follows the gradient of the payoff vector). Furthermore, for the model of evolution of populations of players, the authors compare these dynamics. Several interesting examples illustrate that even simple adaptation processes can lead to nonequilibrium outcomes and endless cycling. This tutorial is targeted at graduate/Ph.D. 
students and researchers who know the basics of game theory and want to learn examples of signaling games, together with evolutionary game theory.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"17 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140902933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
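To make the cycling phenomenon concrete, the sketch below (our generic illustration, not code from the paper) integrates the standard replicator dynamics $\dot{x}_i = x_i\big((Ax)_i - x^{T}Ax\big)$ for a rock-paper-scissors payoff matrix, whose interior equilibrium is surrounded by closed orbits rather than approached; the payoff values are illustrative assumptions.

import numpy as np

# Replicator dynamics x_i' = x_i * ((A x)_i - x^T A x) for rock-paper-scissors.
# The interior equilibrium (1/3, 1/3, 1/3) is surrounded by closed orbits, so
# trajectories cycle forever instead of converging.
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def rhs(x):
    f = A @ x                          # payoff to each pure strategy
    return x * (f - x @ f)             # growth proportional to excess payoff

x = np.array([0.5, 0.3, 0.2])          # start away from equilibrium
dt, steps = 0.001, 200_000
for _ in range(steps):
    x = x + dt * rhs(x)                # forward Euler (slowly inflates the orbits)
    x = np.clip(x, 1e-12, None)
    x = x / x.sum()                    # re-project onto the probability simplex
print(x)  # still far from (1/3, 1/3, 1/3): the state keeps cycling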
The Poincaré Metric and the Bergman Theory, by Steven G. Krantz (doi:10.1137/22m1544622)
SIAM Review, Volume 66, Issue 2, Page 355-367, May 2024. We treat the Poincaré metric on the disc. In particular we emphasize the fact that it is the canonical holomorphically invariant metric on the unit disc. Then we generalize these ideas to the Bergman metric on a domain in complex space. Along the way we treat the Bergman kernel and study its invariance and uniqueness properties.
{"title":"The Poincaré Metric and the Bergman Theory","authors":"Steven G. Krantz","doi":"10.1137/22m1544622","DOIUrl":"https://doi.org/10.1137/22m1544622","url":null,"abstract":"SIAM Review, Volume 66, Issue 2, Page 355-367, May 2024. <br/> We treat the Poincaré metric on the disc. In particular we emphasize the fact that it is the canonical holomorphically invariant metric on the unit disc. Then we generalize these ideas to the Bergman metric on a domain in complex space. Along the way we treat the Bergman kernel and study its invariance and uniqueness properties.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"30 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140902896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research Spotlights, by Stefan M. Wild (doi:10.1137/24n975888)
SIAM Review, Volume 66, Issue 2, Page 285-285, May 2024. The Gauss transform---convolution with a Gaussian in the continuous case and the sum of $N$ Gaussians at $M$ points in the discrete case---is ubiquitous in applied mathematics, from solving ordinary and partial differential equations to probability density estimation to science applications in astrophysics, image processing, quantum mechanics, and beyond. For the discrete case, the fast Gauss transform (FGT) enables the approximate calculation of the sum of $N$ Gaussians at $M$ points in order $N + M$ (instead of $NM$) operations by a fast summation strategy that shares work between the sums at different points, similarly to the fast multipole method. In this issue's Research Spotlights section, “A New Version of the Adaptive Fast Gauss Transform for Discrete and Continuous Sources,” authors Leslie F. Greengard, Shidong Jiang, Manas Rachh, and Jun Wang present a new FGT technique that avoids the use of Hermite and local expansions. The new technique employs Fourier spectral approximations, which are accelerated by nonuniform fast Fourier transforms, and results in a considerably more efficient adaptive implementation. Adaptivity is especially vital for realizing the acceleration of a fast transform when points are highly nonuniform. The paper presents compelling illustrations and examples of the computational approach and the adaptive tree-based hierarchy employed. This hierarchy is used to resolve point distributions down to a refinement level determined by accuracy demands; this results in significantly better work per grid point than conventional FGT techniques. Consequently, the authors note that there are potential key benefits in parallelizing the proposed technique. In addition to the technique's clever composition of a broad variety of advanced computing paradigms and its exploitation of mathematical structure to facilitate such fast transforms, the authors present several pathways for future research. For example, the analysis extends readily to dimensions higher than those treated in the illustrative examples, and univariate sum-of-exponentials structure may also be exploited; the computing techniques detailed by the authors could be tailored to such regimes. These future directions have broad application in scientific computing.
{"title":"Research Spotlights","authors":"Stefan M. Wild","doi":"10.1137/24n975888","DOIUrl":"https://doi.org/10.1137/24n975888","url":null,"abstract":"SIAM Review, Volume 66, Issue 2, Page 285-285, May 2024. <br/> The Gauss transform---convolution with a Gaussian in the continuous case and the sum of $N$ Gaussians at $M$ points in the discrete case---is ubiquitous in applied mathematics, from solving ordinary and partial differential equations to probability density estimation to science applications in astrophysics, image processing, quantum mechanics, and beyond. For the discrete case, the fast Gauss transform (FGT) enables the approximate calculation of the sum of $N$ Gaussians at $M$ points in order $N + M$ (instead of $NM$) operations by a fast summation strategy, which shares work between the sums at different points, similarly to the fast multipole method. In this issue's Research Spotlights section, “A New Version of the Adaptive Fast Gauss Transform for Discrete and Continuous Sources,” authors Leslie F. Greengard, Shidong Jiang, Manas Rachh, and Jun Wang present a new FGT technique that avoids the use of Hermite and local expansions. The new technique employs Fourier spectral approximations, which are accelerated by nonuniform fast Fourier transforms, and results in a considerably more efficient adaptive implementation. Adaptivity is especially vital for realizing the acceleration from a fast transform when points are highly nonuniform. The paper presents compelling illustrations and examples of the computational approach and the adaptive tree-based hierarchy employed. This hierarchy is used to resolve point distributions down to a refinement level determined by accuracy demands; this results in significantly better work per grid point than conventional FGT techniques. Consequently, the authors note that there are potential key benefits in parallelization of the proposed technique. In addition to the technique's clever composition of a broad variety of advanced computing paradigms and exploitation of mathematical structure to facilitate such fast transforms, the authors present several pathways of future research. For example, the analysis is readily accessible from dimensions larger than the illustrative examples illuminate, and univariate sum-of-exponentials structure also may be exploited; the computing techniques detailed by the authors could be tailored to such regimes. These future directions have broad application in scientific computing.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"2 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140903007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators, by Zongren Zou, Xuhui Meng, Apostolos F. Psaros, and George E. Karniadakis (doi:10.1137/22m1518189)
SIAM Review, Volume 66, Issue 1, Page 161-190, February 2024. Uncertainty quantification (UQ) in machine learning is currently drawing increasing research interest, driven by the rapid deployment of deep neural networks across different fields, such as computer vision and natural language processing, and by the need for reliable tools in risk-sensitive applications. Recently, various machine learning models have also been developed to tackle problems in the field of scientific computing with applications to computational science and engineering (CSE). Physics-informed neural networks and deep operator networks are two such models for solving partial differential equations (PDEs) and learning operator mappings, respectively. In this regard, a comprehensive study of UQ methods tailored specifically for scientific machine learning (SciML) models has been provided in [A. F. Psaros et al., J. Comput. Phys., 477 (2023), art. 111902]. Nevertheless, and despite their theoretical merit, implementations of these methods are not straightforward, especially in large-scale CSE applications, hindering their broad adoption in both research and industry settings. In this paper, we present an open-source Python library (https://github.com/Crunch-UQ4MI), termed NeuralUQ and accompanied by an educational tutorial, for employing UQ methods for SciML in a convenient and structured manner. The library, designed for both educational and research purposes, supports multiple modern UQ methods and SciML models. It is based on a succinct workflow and facilitates flexible use and easy extension by users. We first present a tutorial of NeuralUQ and subsequently demonstrate its applicability and efficiency in four diverse examples, involving dynamical systems and high-dimensional parametric and time-dependent PDEs.
{"title":"NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators","authors":"Zongren Zou, Xuhui Meng, Apostolos F. Psaros, George E. Karniadakis","doi":"10.1137/22m1518189","DOIUrl":"https://doi.org/10.1137/22m1518189","url":null,"abstract":"SIAM Review, Volume 66, Issue 1, Page 161-190, February 2024. <br/> Uncertainty quantification (UQ) in machine learning is currently drawing increasing research interest, driven by the rapid deployment of deep neural networks across different fields, such as computer vision and natural language processing, and by the need for reliable tools in risk-sensitive applications. Recently, various machine learning models have also been developed to tackle problems in the field of scientific computing with applications to computational science and engineering (CSE). Physics-informed neural networks and deep operator networks are two such models for solving partial differential equations (PDEs) and learning operator mappings, respectively. In this regard, a comprehensive study of UQ methods tailored specifically for scientific machine learning (SciML) models has been provided in [A. F. Psaros et al., J. Comput. Phys., 477 (2023), art. 111902]. Nevertheless, and despite their theoretical merit, implementations of these methods are not straightforward, especially in large-scale CSE applications, hindering their broad adoption in both research and industry settings. In this paper, we present an open-source Python library (ŭlhttps://github.com/Crunch-UQ4MI), termed NeuralUQ and accompanied by an educational tutorial, for employing UQ methods for SciML in a convenient and structured manner. The library, designed for both educational and research purposes, supports multiple modern UQ methods and SciML models. It is based on a succinct workflow and facilitates flexible employment and easy extensions by the users. We first present a tutorial of NeuralUQ and subsequently demonstrate its applicability and efficiency in four diverse examples, involving dynamical systems and high-dimensional parametric and time-dependent PDEs.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"71 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139705027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Education, by Hélène Frankowska (doi:10.1137/24n975852)
SIAM Review, Volume 66, Issue 1, Page 147-147, February 2024. In this issue the Education section presents two contributions. The first paper, “Resonantly Forced ODEs and Repeated Roots,” is written by Allan R. Willms. The resonant forcing problem is as follows: find $y(\cdot)$ such that $L[y(x)]=u(x)$, where $L[u(x)]=0$ and $L=a_0(x) + \sum_{j=1}^n a_j(x) \frac{d^j}{dx^j}$. The repeated roots problem consists in finding $mn$ linearly independent solutions to $L^m[y(x)]=0$ under the assumption that $n$ linearly independent solutions to $L[y(x)]= 0$ are known. A recent article by B. Gouveia and H. A. Stone, “Generating Resonant and Repeated Root Solutions to Ordinary Differential Equations Using Perturbation Methods” [SIAM Rev., 64 (2022), pp. 485--499], discusses a method for finding solutions to these two problems. This new contribution observes that by applying the same mathematical justifications, one may get similar results in a simpler way. The starting point consists in defining operators $L_\lambda := \hat{L} - g(\lambda)$, with $L_{\lambda_0}=L$ for some $\lambda_0$, together with a parameter-dependent family of solutions to the homogeneous equations $L_\lambda[y(x;\lambda)]=0$. Under appropriate assumptions on $g$, differentiating this identity with respect to $\lambda$ yields solutions to the problems of interest; a worked instance follows this summary. This approach is illustrated on nine examples, seven of which are the same as in the publication of B. Gouveia and H. A. Stone, where for each example $g$ and $\hat{L}$ are appropriately chosen. This approach may be included in a course on ordinary differential equations (ODEs) as a methodology for finding solutions to these two particular classes of ODEs. It can also be used by undergraduate students for individual training as an alternative to variation of parameters. The second paper, “NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators,” is presented by Zongren Zou, Xuhui Meng, Apostolos Psaros, and George E. Karniadakis. In machine learning, uncertainty quantification (UQ) is a hot research topic, driven by various questions arising in computer vision and natural language processing, and by risk-sensitive applications. Numerous machine learning models, such as physics-informed neural networks and deep operator networks, help in solving partial differential equations and learning operator mappings, respectively. However, some data may be noisy and/or sampled at random locations. This paper presents an open-source Python library (https://github.com/Crunch-UQ4MI) providing a reliable toolbox of UQ methods for scientific machine learning. It is designed for both educational and research purposes and is illustrated on four examples, involving dynamical systems and high-dimensional parametric and time-dependent PDEs. NeuralUQ is planned to be constantly updated.
{"title":"Education","authors":"Helene Frankowska","doi":"10.1137/24n975852","DOIUrl":"https://doi.org/10.1137/24n975852","url":null,"abstract":"SIAM Review, Volume 66, Issue 1, Page 147-147, February 2024. <br/> In this issue the Education section presents two contributions. The first paper, “Resonantly Forced ODEs and Repeated Roots,” is written by Allan R. Willms. The resonant forcing problem is as follows: find $y(cdot)$ such that $L[y(x)]=u(x)$, where $L[u(x)]=0$ and $L=a_0(x) + sum_{j=1}^n a_j(x) frac{d^j}{dx^j}$. The repeated roots problem consists in finding $mn$ linearly independent solutions to $L^m[y(x)]=0$ under the assumption that $n$ linearly independent solutions to $L[y(x)]= 0$ are known. A recent article by B. Gouveia and H. A. Stone, “Generating Resonant and Repeated Root Solutions to Ordinary Differential Equations Using Perturbation Methods” [SIAM Rev., 64 (2022), pp. 485--499], discusses a method for finding solutions to these two problems. This new contribution observes that by applying the same mathematical justifications, one may get similar results in a simpler way. The starting point consists in defining operators $L_lambda := hat L -g(lambda)$ with $L_{lambda_0}=L$ for some $lambda_0$ and of a parameter-dependent family of solutions to the homogeneous equations $L_lambda[y(x;lambda)]=0$. Under appropriate assumptions on $g$, differentiating this equality allows one to get solutions to problems of interest. This approach is illustrated on nine examples, seven of which are the same as in the publication of B. Gouveia and H. A. Stone, where for each example $g$ and $hat L$ are appropriately chosen. This approach may be included in a course of ordinary differential equations (ODEs) as a methodology for finding solutions to these two particular classes of ODEs. It can also be used by undergraduate students for individual training as an alternative to variation of parameters. The second paper, “NeuralUQ: A Comprehensive Library for Uncertainty Quantification in Neural Differential Equations and Operators,” is presented by Zongren Zou, Xuhui Meng, Apostolos Psaros, and George E. Karniadakis. In machine learning uncertainty quantification (UQ) is a hot research topic, driven by various questions arising in computer vision and natural language processing, and by risk-sensitive applications. Numerous machine learning models, such as, for instance, physics-informed neural networks and deep operator networks, help in solving partial differential equations and learning operator mappings, respectively. However, some data may be noisy and/or sampled at random locations. This paper presents an open-source Python library (https://github.com/Crunch-UQ4MI) for employing a reliable toolbox of UQ methods for scientific machine learning. It is designed for both educational and research purposes and is illustrated on four examples, involving dynamical systems and high-dimensional parametric and time-dependent PDEs. 
NeuralUQ is planned to be constantly updated.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"27 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139705088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
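As the promised worked instance of the differentiation trick (our standard textbook example, written in the paper's notation): take $\hat{L} = \frac{d^2}{dx^2}$ and $g(\lambda) = -\lambda^2$, so that $L_\lambda[y] = y'' + \lambda^2 y$ and $L_{\lambda_0} = L$ at $\lambda_0 = 1$. The family $y(x;\lambda)=\cos(\lambda x)$ satisfies $L_\lambda[y(x;\lambda)]=0$ identically in $\lambda$, and differentiating this identity in $\lambda$ gives
\[
0 \;=\; \frac{\partial}{\partial\lambda}\, L_\lambda[\cos(\lambda x)]
  \;=\; L_\lambda\big[-x\sin(\lambda x)\big] \;+\; 2\lambda\cos(\lambda x),
\]
so that at $\lambda = 1$,
\[
\Big(\frac{d^2}{dx^2} + 1\Big)\Big[\tfrac{1}{2}\,x\sin x\Big] \;=\; \cos x ,
\]
i.e., $y_p(x) = \tfrac{1}{2}\,x\sin x$ solves the resonantly forced problem $y'' + y = \cos x$ with no undetermined coefficients and no variation of parameters.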
Research Spotlights, by Stefan M. Wild (doi:10.1137/24n975839)
SIAM Review, Volume 66, Issue 1, Page 89-89, February 2024. As modeling, simulation, and data-driven capabilities continue to advance and be adopted for an ever-expanding set of applications and downstream tasks, there is an increasing need to quantify the uncertainty in the resulting predictions. In “Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output,” authors Eva-Maria Walz, Alexander Henzi, Johanna Ziegel, and Tilmann Gneiting provide a methodology for moving beyond deterministic scalar-valued predictions to obtain statistical distributions for these predictions. The approach relies on training data of model output-observation pairs of scalars and hence does not require access to higher-dimensional inputs or latent variables. The authors use numerical weather prediction as a running example, where one can obtain repeated forecasts, and corresponding observations, of temperatures at a specific location. Given a predicted temperature, the EasyUQ approach provides a nonparametric distribution of temperatures around this value. EasyUQ uses the training data to minimize an empirical score subject to a stochastic monotonicity constraint, which ensures that the predictive distribution becomes stochastically larger as the model output value grows. In doing so, the approach inherits the theoretical optimality and consistency properties enjoyed by so-called isotonic distributional regression methods. The authors emphasize that the basic version of EasyUQ does not require elaborate hyperparameter tuning. They also introduce a more sophisticated version that relies on kernel smoothing to yield predictive probability densities while preserving key properties of the basic version. The paper demonstrates how EasyUQ compares with the standard technique of applying a Gaussian error distribution to a deterministic forecast, as well as how EasyUQ can be used to obtain uncertainty estimates for artificial neural network outputs. The approach will be of special interest in settings where inputs or other latent variables are unreliable or unavailable, since it offers a straightforward yet statistically principled and computationally efficient way of working only with outputs and observations.
{"title":"Research Spotlights","authors":"Stefan M. Wild","doi":"10.1137/24n975839","DOIUrl":"https://doi.org/10.1137/24n975839","url":null,"abstract":"SIAM Review, Volume 66, Issue 1, Page 89-89, February 2024. <br/> As modeling, simulation, and data-driven capabilities continue to advance and be adopted for an ever expanding set of applications and downstream tasks, there has been an increased need for quantifying the uncertainty in the resulting predictions. In “Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output,” authors Eva-Maria Walz, Alexander Henzi, Johanna Ziegel, and Tilmann Gneiting provide a methodology for moving beyond deterministic scalar-valued predictions to obtain particular statistical distributions for these predictions. The approach relies on training data of model output-observation pairs of scalars, and hence does not require access to higher-dimensional inputs or latent variables. The authors use numerical weather prediction as a particular example, where one can obtain repeated forecasts, and corresponding observations, of temperatures at a specific location. Given a predicted temperature, the EasyUQ approach provides a nonparametric distribution of temperatures around this value. EasyUQ uses the training data to effectively minimize an empirical score subject to a stochastic monotonicity constraint, which ensures that the predictive distribution values become larger as the model output value grows. In doing so, the approach inherits the theoretical properties of optimality and consistency enjoyed by so-called isotonic distributional regression methods. The authors emphasize that the basic version of EasyUQ does not require elaborate hyperparameter tuning. They also introduce a more sophisticated version that relies on kernel smoothing to yield predictive probability densities while preserving key properties of the basic version. The paper demonstrates how EasyUQ compares with the standard technique of applying a Gaussian error distribution to a deterministic forecast as well as how EasyUQ can be used to obtain uncertainty estimates for artificial neural network outputs. The approach will be especially of interest for settings when inputs or other latent variables are unreliable or unavailable since it offers a straightforward yet statistically principled and computationally efficient way for working only with outputs and observations.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"5 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139704929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Simple Formula for the Generalized Spectrum of Second Order Self-Adjoint Differential Operators, by Bjørn Fredrik Nielsen and Zdeněk Strakoš (doi:10.1137/23m1600992)
SIAM Review, Volume 66, Issue 1, Page 125-146, February 2024. We analyze the spectrum of the operator $\Delta^{-1} [\nabla \cdot (K\nabla u)]$ subject to homogeneous Dirichlet or Neumann boundary conditions, where $\Delta$ denotes the Laplacian and $K=K(x,y)$ is a symmetric tensor. Our main result shows that this spectrum can be derived from the spectral decomposition $K=Q \Lambda Q^T$, where $Q=Q(x,y)$ is an orthogonal matrix and $\Lambda=\Lambda(x,y)$ is a diagonal matrix. More precisely, provided that $K$ is continuous, the spectrum equals the convex hull of the ranges of the diagonal function entries of $\Lambda$. The domain involved is assumed to be bounded and Lipschitz. In addition to studying operators defined on infinite-dimensional Sobolev spaces, we also report on recent results concerning their discretized finite-dimensional counterparts. More specifically, even though $\Delta^{-1} [\nabla \cdot (K\nabla u)]$ is not compact, it turns out that every point in the spectrum of this operator can, to arbitrary accuracy, be approximated by eigenvalues of the associated generalized algebraic eigenvalue problems arising from discretizations. Our theoretical investigations are illuminated by numerical experiments. The results presented in this paper extend previous analyses which addressed elliptic differential operators with scalar coefficient functions. Our investigation is motivated both by preconditioning issues (efficient numerical computations) and by the need to further develop the spectral theory of second order PDEs (core analysis).
{"title":"A Simple Formula for the Generalized Spectrum of Second Order Self-Adjoint Differential Operators","authors":"Bjørn Fredrik Nielsen, Zdeněk Strakoš","doi":"10.1137/23m1600992","DOIUrl":"https://doi.org/10.1137/23m1600992","url":null,"abstract":"SIAM Review, Volume 66, Issue 1, Page 125-146, February 2024. <br/> We analyze the spectrum of the operator $Delta^{-1} [nabla cdot (Knabla u)]$ subject to homogeneous Dirichlet or Neumann boundary conditions, where $Delta$ denotes the Laplacian and $K=K(x,y)$ is a symmetric tensor. Our main result shows that this spectrum can be derived from the spectral decomposition $K=Q Lambda Q^T$, where $Q=Q(x,y)$ is an orthogonal matrix and $Lambda=Lambda(x,y)$ is a diagonal matrix. More precisely, provided that $K$ is continuous, the spectrum equals the convex hull of the ranges of the diagonal function entries of $Lambda$. The domain involved is assumed to be bounded and Lipschitz. In addition to studying operators defined on infinite-dimensional Sobolev spaces, we also report on recent results concerning their discretized finite-dimensional counterparts. More specifically, even though $Delta^{-1} [nabla cdot (Knabla u)]$ is not compact, it turns out that every point in the spectrum of this operator can, to an arbitrary accuracy, be approximated by eigenvalues of the associated generalized algebraic eigenvalue problems arising from discretizations. Our theoretical investigations are illuminated by numerical experiments. The results presented in this paper extend previous analyses which have addressed elliptic differential operators with scalar coefficient functions. Our investigation is motivated by both preconditioning issues (efficient numerical computations) and the need to further develop the spectral theory of second order PDEs (core analysis).","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"3 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139705096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resonantly Forced ODEs and Repeated Roots, by Allan R. Willms (doi:10.1137/23m1545148)
SIAM Review, Volume 66, Issue 1, Page 149-160, February 2024. In a recent article in this journal, Gouveia and Stone [“Generating Resonant and Repeated Root Solutions to Ordinary Differential Equations Using Perturbation Methods,” SIAM Rev., 64 (2022), pp. 485--499] described a method for finding exact solutions to resonantly forced linear ordinary differential equations, and for finding the general solution of repeated root linear systems. It is shown here that applying their mathematical justification directly yields a method that is faster and algebraically simpler than the one they described. This method seems to be unknown in the undergraduate textbook literature, although it certainly deserves a place there: it is elegant and simple to apply, generally giving solutions with much less work than variation of parameters.
{"title":"Resonantly Forced ODEs and Repeated Roots","authors":"Allan R. Willms","doi":"10.1137/23m1545148","DOIUrl":"https://doi.org/10.1137/23m1545148","url":null,"abstract":"SIAM Review, Volume 66, Issue 1, Page 149-160, February 2024. <br/> In a recent article in this journal, Gouveia and Stone [``Generating Resonant and Repeated Root Solutions to Ordinary Differential Equations Using Perturbation Methods,” SIAM Rev., 64 (2022), pp. 485--499] described a method for finding exact solutions to resonantly forced linear ordinary differential equations, and for finding the general solution of repeated root linear systems. It is shown here that applying their mathematical justification directly yields a method that is faster and algebraically simpler than the method they described. This method seems to be unknown in the undergraduate textbook literature, although it certainly should be present there as it is elegant and simple to apply, generally giving solutions with much less work than variation of parameters.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"97 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139704932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output, by Eva-Maria Walz, Alexander Henzi, Johanna Ziegel, and Tilmann Gneiting (doi:10.1137/22m1541915)
SIAM Review, Volume 66, Issue 1, Page 91-122, February 2024. How can we quantify uncertainty if our favorite computational tool---be it a numerical, statistical, or machine learning approach, or just any computer model---provides single-valued output only? In this article, we introduce the Easy Uncertainty Quantification (EasyUQ) technique, which transforms real-valued model output into calibrated statistical distributions, based solely on training data of model output--outcome pairs, without any need to access model input. In its basic form, EasyUQ is a special case of the recently introduced isotonic distributional regression (IDR) technique that leverages the pool-adjacent-violators algorithm for nonparametric isotonic regression. EasyUQ yields discrete predictive distributions that are calibrated and optimal in finite samples, subject to stochastic monotonicity. The workflow is fully automated, without any need for tuning. The Smooth EasyUQ approach supplements IDR with kernel smoothing, to yield continuous predictive distributions that preserve key properties of the basic form, including both stochastic monotonicity with respect to the original model output and asymptotic consistency. For the selection of kernel parameters, we introduce multiple one-fit grid search, a computationally much less demanding approximation to leave-one-out cross-validation. We use simulation examples and forecast data from weather prediction to illustrate the techniques. In a study of benchmark problems from machine learning, we show how EasyUQ and Smooth EasyUQ can be integrated into the workflow of neural network learning and hyperparameter tuning, and we find EasyUQ to be competitive with conformal prediction as well as more elaborate input-based approaches.
{"title":"Easy Uncertainty Quantification (EasyUQ): Generating Predictive Distributions from Single-Valued Model Output","authors":"Eva-Maria Walz, Alexander Henzi, Johanna Ziegel, Tilmann Gneiting","doi":"10.1137/22m1541915","DOIUrl":"https://doi.org/10.1137/22m1541915","url":null,"abstract":"SIAM Review, Volume 66, Issue 1, Page 91-122, February 2024. <br/> How can we quantify uncertainty if our favorite computational tool---be it a numerical, statistical, or machine learning approach, or just any computer model---provides single-valued output only? In this article, we introduce the Easy Uncertainty Quantification (EasyUQ) technique, which transforms real-valued model output into calibrated statistical distributions, based solely on training data of model output--outcome pairs, without any need to access model input. In its basic form, EasyUQ is a special case of the recently introduced isotonic distributional regression (IDR) technique that leverages the pool-adjacent-violators algorithm for nonparametric isotonic regression. EasyUQ yields discrete predictive distributions that are calibrated and optimal in finite samples, subject to stochastic monotonicity. The workflow is fully automated, without any need for tuning. The Smooth EasyUQ approach supplements IDR with kernel smoothing, to yield continuous predictive distributions that preserve key properties of the basic form, including both stochastic monotonicity with respect to the original model output and asymptotic consistency. For the selection of kernel parameters, we introduce multiple one-fit grid search, a computationally much less demanding approximation to leave-one-out cross-validation. We use simulation examples and forecast data from weather prediction to illustrate the techniques. In a study of benchmark problems from machine learning, we show how EasyUQ and Smooth EasyUQ can be integrated into the workflow of neural network learning and hyperparameter tuning, and we find EasyUQ to be competitive with conformal prediction as well as more elaborate input-based approaches.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"18 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139704937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SIGEST, by The Editors (doi:10.1137/24n975840)
SIAM Review, Volume 66, Issue 1, Page 123-123, February 2024. The SIGEST article in this issue is “A Simple Formula for the Generalized Spectrum of Second Order Self-Adjoint Differential Operators,” by Bjørn Fredrik Nielsen and Zdeněk Strakoš. This paper studies the eigenvalues of second-order self-adjoint differential operators in the continuum and discrete settings. In particular, the authors investigate a second-order diffusion operator with a diffusion tensor, preconditioned by the inverse Laplacian. They prove that there is a one-to-one correspondence between the spectrum of the preconditioned system and the eigenvalues of the diffusion tensor. Moreover, they investigate the relationship between the spectrum of the preconditioned operator and the generalized eigenvalue problem for its discretized counterpart, and they show that the latter asymptotically approximates the former. The results presented in the paper are fundamental for anyone wanting to solve elliptic PDEs. Understanding the distribution of eigenvalues is crucial for solving the associated linear systems via, e.g., the conjugate gradient method, whose convergence rate depends on the spread of the spectrum of the system matrix. Operator preconditioning with the inverse Laplacian, as done here, turns the unbounded spectrum of a second-order diffusion operator into one that is completely characterized by the diffusion tensor itself. This carries over to the discrete setting: without preconditioning, the support of the spectrum grows as one over the squared mesh size, while in the operator-preconditioned case mesh-independent bounds for the eigenvalues, completely determined by the diffusion tensor, can be obtained. The original version of this article appeared in the SIAM Journal on Numerical Analysis in 2020 and has been recognized as an outstanding and well-presented result in the community. In preparing this SIGEST version, the authors added new material to sections 1 and 2 to increase accessibility, added clarifications to sections 6 and 7, and added the new section 8, which describes more recent results concerning the numerical approximation of the continuous spectrum. It also comments on the related differences between the (generalized) PDE eigenvalue problems for compact and noncompact operators and provides several new references.
{"title":"SIGEST","authors":"The Editors","doi":"10.1137/24n975840","DOIUrl":"https://doi.org/10.1137/24n975840","url":null,"abstract":"SIAM Review, Volume 66, Issue 1, Page 123-123, February 2024. <br/> The SIGEST article in this issue is “A Simple Formula for the Generalized Spectrum of Second Order Self-Adjoint Differential Operators,” by Bjørn Fredrik Nielsen and Zdeněk Strakoš. This paper studies the eigenvalues of second-order self-adjoint differential operators in the continuum and discrete settings. In particular, they investigate second-order diffusion with a diffusion tensor preconditioned by the inverse Laplacian. They prove that there is a one-to-one correspondence between the spectrum of the preconditioned system and the eigenvalues of the diffusion tensor. Moreover, they investigate the relationship between the spectrum of the preconditioned operator and the generalized eigenvalue problem for its discretized counterpart and show that the latter asymptotically approximates the former. The results presented in the paper are fundamental to anyone wanting to solve elliptic PDEs. Understanding the distribution of eigenvalues is crucial for solving associated linear systems via, e.g., conjugate gradient descent whose convergence rate depends on the spread of the spectrum of the system matrix. The approach of operator preconditioning as done here with the inverse Laplacian turns the unbounded spectrum of a second-order diffusion operator into one that is completely characterized by the diffusion tensor itself. This carries over to the discrete setting, where the support of the spectrum without preconditioning is increasing as one over the squared mesh size, while in the operator preconditioned case mesh independent bounds for the eigenvalues, completely determined by the diffusion tensor, can be obtained. The original version of this article appeared in the SIAM Journal on Numerical Analysis in 2020 and has been recognized as an outstanding and well-presented result in the community. In preparing this SIGEST version, the authors have added new material to sections 1 and 2 in order to increase accessibility, added clarifications to sections 6 and 7, and added the new section 8, which contains a description of more recent results concerning the numerical approximation of the continuous spectrum. It also comments on the related differences between the (generalized) PDE eigenvalue problems for compact and noncompact operators and provides several new references.","PeriodicalId":49525,"journal":{"name":"SIAM Review","volume":"3 1","pages":""},"PeriodicalIF":10.2,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139705043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}