
SIAM Review: Latest Publications

Book Review: Essential Statistics for Data Science: A Concise Crash Course
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/24m167562x
David Banks
SIAM Review, Volume 67, Issue 1, Page 206-207, March 2025.
This is a bold book! Professor Zhu wants to provide the basic statistical knowledge needed by data scientists in a super-short volume. It reminds me a bit of Larry Wasserman’s All of Statistics (Springer, 2014), but is aimed at Masters students (often from fields other than statistics) or advanced undergraduates (also often from other fields). As an attendee at far too many faculty meetings, I applaud brevity and focus. As an amateur stylist, I admire strong technical writing. And as an applied statistician who has taught basic statistics to Masters and Ph.D. students from other disciplines, I appreciate the need for a book of this kind. For the right course I would happily use this book, although I would need to supplement it with other material.
Citations: 0
The Troublesome Kernel: On Hallucinations, No Free Lunches, and the Accuracy-Stability Tradeoff in Inverse Problems
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/23m1568739
Nina M. Gottschling, Vegard Antun, Anders C. Hansen, Ben Adcock
SIAM Review, Volume 67, Issue 1, Page 73-104, March 2025.
Abstract. Methods inspired by artificial intelligence (AI) are starting to fundamentally change computational science and engineering through breakthrough performance on challenging problems. However, the reliability and trustworthiness of such techniques are a major concern. In inverse problems in imaging, the focus of this paper, there is increasing empirical evidence that methods may suffer from hallucinations, i.e., false but realistic-looking artifacts; instability, i.e., sensitivity to perturbations in the data; and unpredictable generalization, i.e., excellent performance on some images but significant deterioration on others. This paper provides a theoretical foundation for these phenomena. We give mathematical explanations for how and when such effects arise in arbitrary reconstruction methods, with several of our results taking the form of “no free lunch” theorems. Specifically, we show that (i) methods that overperform on a single image can wrongly transfer details from one image to another, creating a hallucination; (ii) methods that overperform on two or more images can hallucinate or be unstable; (iii) optimizing the accuracy-stability tradeoff is generally difficult; (iv) hallucinations and instabilities, if they occur, are not rare events and may be encouraged by standard training; and (v) it may be impossible to construct optimal reconstruction maps for certain problems. Our results trace these effects to the kernel of the forward operator whenever it is nontrivial, but they also apply to the case when the forward operator is ill-conditioned. Based on these insights, our work aims to spur research into new ways to develop robust and reliable AI-based methods for inverse problems in imaging.
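The role of the kernel in the abstract's argument can be seen in miniature. A toy sketch (our illustration, not taken from the paper): when the forward operator has a nontrivial kernel, two distinct images produce identical measurements, so no reconstruction map can recover both from the data alone.

```python
import numpy as np

# Forward operator A: R^4 -> R^2, so it necessarily discards information.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

x1 = np.array([1.0, 2.0, 0.0, 0.0])   # one "true" image
k = np.array([1.0, -1.0, -1.0, 1.0])  # an element of ker(A): A @ k == 0
x2 = x1 + k                           # a different image...

# ...with identical measurements: any reconstruction map must return the
# same answer for both, so it is wrong for at least one of them.
assert np.allclose(A @ x1, A @ x2)
```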
Citations: 0
Book Review: Numerical Methods in Physics with Python, Second Edition
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/24m1650466
Gabriele Ciaramella
SIAM Review, Volume 67, Issue 1, Page 204-205, March 2025.
Numerical Methods in Physics with Python by Alex Gezerlis is an excellent example of a textbook built on long and established teaching experience. The goals are clearly defined in the preface: Gezerlis aims to gently introduce undergraduate physics students to the branch of numerical methods and their concrete implementation in Python. To this end, the author considers a physics-applications-first approach. Every chapter begins with a motivation section on real physics problems (simple but adequate for undergraduate students), ends with a concrete project on a physics application, and is completed by a rich list of exercises often designed with a physics appeal.
Citations: 0
Limits of Learning Dynamical Systems
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/24m1696974
Tyrus Berry, Suddhasattwa Das
SIAM Review, Volume 67, Issue 1, Page 107-137, March 2025.
Abstract. A dynamical system is a transformation of a phase space, and the transformation law is the primary means of defining as well as identifying the dynamical system and is the object of focus of many learning techniques. However, there are many secondary aspects of dynamical systems—invariant sets, the Koopman operator, and Markov approximations—that provide alternative objectives for learning techniques. Crucially, while many learning methods are focused on the transformation law, we find that forecast performance can depend on how well these other aspects of the dynamics are approximated. These different facets of a dynamical system correspond to objects in completely different spaces—namely, interpolation spaces, compact Hausdorff sets, unitary operators, and Markov operators, respectively. Thus, learning techniques targeting any of these four facets perform different kinds of approximations. We examine whether an approximation of any one of these aspects of the dynamics could lead to an approximation of another facet. Many connections and obstructions are brought to light in this analysis. Special focus is placed on methods of learning the primary feature—the dynamics law itself. The main question considered is the connection between learning this law and reconstructing the Koopman operator and the invariant set. The answers are tied to the ergodic and topological properties of the dynamics, and they reveal how these properties determine the limits of forecasting techniques.
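The abstract's distinction between the transformation law and the Koopman operator can be made concrete. A minimal sketch (our illustration, not code from the paper): the Koopman operator acts on observables by composition with the map, and it is linear in the observable even when the map itself is nonlinear.

```python
import numpy as np

def T(x):
    # The transformation law itself: the logistic map, a standard
    # nonlinear (chaotic) example.
    return 4.0 * x * (1.0 - x)

def koopman(f):
    # The Koopman operator U sends an observable f to f composed with T:
    # (U f)(x) = f(T(x)).
    return lambda x: f(T(x))

# Linearity on observables: U(a*f + b*g) = a*(U f) + b*(U g),
# even though T is nonlinear in x.
a, b, x = 2.0, -1.0, 0.3
lhs = koopman(lambda y: a * np.sin(y) + b * np.cos(y))(x)
rhs = a * koopman(np.sin)(x) + b * koopman(np.cos)(x)
assert np.isclose(lhs, rhs)
```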
Citations: 0
Research Spotlights
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/24m1691442
Stefan M. Wild
SIAM Review, Volume 67, Issue 1, Page 71-71, March 2025.
Citations: 0
SIGEST
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/24m1691454
The Editors
SIAM Review, Volume 67, Issue 1, Page 105-105, March 2025.
Citations: 0
Featured Review: Numerical Integration of Differential Equations
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/24m1678684
John C. Butcher, Robert M. Corless
SIAM Review, Volume 67, Issue 1, Page 197-204, March 2025.
The book under review was originally published under the auspices of the National Research Council in 1933 (the year John was born), and it was republished as a Dover edition in 1956 (three years before Rob was born). At 108 pages—including title page, preface, table of contents, and index—it’s very short. Even so, it contains a significant amount of information that was of technical importance for its time and is of historical importance now.
Citations: 0
Graph Neural Networks and Applied Linear Algebra
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/23m1609786
Nicholas S. Moore, Eric C. Cyr, Peter Ohm, Christopher M. Siefert, Raymond S. Tuminaro
SIAM Review, Volume 67, Issue 1, Page 141-175, March 2025.
Abstract. Sparse matrix computations are ubiquitous in scientific computing. Given the recent interest in scientific machine learning, it is natural to ask how sparse matrix computations can leverage neural networks (NNs). Unfortunately, multilayer perceptron (MLP) NNs are typically not natural for either graph or sparse matrix computations. The issue lies with the fact that MLPs require fixed-sized inputs, while scientific applications generally generate sparse matrices with arbitrary dimensions and a wide range of different nonzero patterns (or matrix graph vertex interconnections). While convolutional NNs could possibly address matrix graphs where all vertices have the same number of nearest neighbors, a more general approach is needed for arbitrary sparse matrices, e.g., those arising from discretized partial differential equations on unstructured meshes. Graph neural networks (GNNs) are one such approach suited to sparse matrices. The key idea is to define aggregation functions (e.g., summations) that operate on variable-size input data to produce data of a fixed output size so that MLPs can be applied. The goal of this paper is to provide an introduction to GNNs for a numerical linear algebra audience. Concrete GNN examples are provided to illustrate how many common linear algebra tasks can be accomplished using GNNs. We focus on iterative and multigrid methods that employ computational kernels such as matrix-vector products, interpolation, relaxation methods, and strength-of-connection measures. Our GNN examples include cases where parameters are determined a priori as well as cases where parameters must be learned. The intent of this paper is to help computational scientists understand how GNNs can be used to adapt machine learning concepts to computational tasks associated with sparse matrices. It is hoped that this understanding will further stimulate data-driven extensions of classical sparse linear algebra tasks.
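The aggregation idea in the abstract has a direct linear-algebra counterpart. A minimal sketch (our illustration, not code from the paper): a sparse matrix-vector product is a message-passing step in which each vertex sums messages from a variable-size set of neighbors, the pattern GNNs generalize by replacing the fixed multiply-and-sum with learned functions.

```python
import numpy as np

# A sparse matrix in COO form: each nonzero A[i, j] is an edge j -> i.
rows = np.array([0, 0, 1, 2, 2, 2])
cols = np.array([0, 2, 1, 0, 1, 2])
vals = np.array([4.0, -1.0, 3.0, -1.0, 2.0, 5.0])
x = np.array([1.0, 2.0, 3.0])   # one feature per graph vertex

# y = A x as message passing: vertex j sends the message a * x[j] along
# edge j -> i, and each vertex i aggregates (sums) a variable number of
# incoming messages, giving a fixed-size output per vertex.
y = np.zeros(3)
for i, j, a in zip(rows, cols, vals):
    y[i] += a * x[j]

# Check against the equivalent dense product.
A = np.zeros((3, 3))
A[rows, cols] = vals
assert np.allclose(y, A @ x)
```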
Citations: 0
Book Review: Probability Adventures
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/24m1646108
Nevena Marić
SIAM Review, Volume 67, Issue 1, Page 205-206, March 2025.
The first look at Probability Adventures brought back memories of a conference in Ubatuba, Brazil, in 2001, where as a young Master’s student I worried that true science had to be deadly serious. Fortunately, several inspiring teachers came to the rescue. Andrei Toom’s words resonated deeply with me when he began his lecture by saying, “Every mathematician is a big child.” The esteemed audience beamed with approval. Today, I look at Probability Adventures and applaud Mark Huber for honoring the child in all of us and offering a reading that is both fun and mathematically rigorous.
Citations: 0
Risk-Adaptive Approaches to Stochastic Optimization: A Survey
IF 10.2 · Tier 1 (Mathematics) · Q1 MATHEMATICS, APPLIED · Pub Date: 2025-02-06 · DOI: 10.1137/22m1538946
Johannes O. Royset
SIAM Review, Volume 67, Issue 1, Page 3-70, March 2025.
Abstract. Uncertainty is prevalent in engineering design and data-driven problems and, more broadly, in decision making. Due to inherent risk-averseness and ambiguity about assumptions, it is common to address uncertainty by formulating and solving conservative optimization models expressed using measures of risk and related concepts. We survey the rapid development of risk measures over the last quarter century. From their beginning in financial engineering, we recount their spread to nearly all areas of engineering and applied mathematics. Solidly rooted in convex analysis, risk measures furnish a general framework for handling uncertainty with significant computational and theoretical advantages. We describe the key facts, list several concrete algorithms, and provide an extensive list of references for further reading. The survey recalls connections with utility theory and distributionally robust optimization, points to emerging application areas such as fair machine learning, and defines measures of reliability.
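The survey's central objects, measures of risk, can be illustrated with the most common example. A minimal sketch (a standard sample estimator of conditional value-at-risk, not code from the survey): CVaR at level alpha averages the worst (1 - alpha) fraction of losses, which is why optimization models built on it are conservative.

```python
import numpy as np

def cvar(losses, alpha=0.9):
    # Sample estimate of conditional value-at-risk: the mean of the
    # worst (1 - alpha) fraction of the observed losses.
    losses = np.sort(np.asarray(losses, dtype=float))
    tail = losses[int(np.ceil(alpha * len(losses))):]
    return tail.mean()

rng = np.random.default_rng(42)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)

# CVaR focuses on the tail, so it always upper-bounds the mean loss:
# minimizing it yields risk-averse (conservative) decisions.
assert cvar(losses, alpha=0.9) >= losses.mean()
```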
Citations: 0