"Book Review: Algorithmic Mathematics in Machine Learning," by Volker H. Schulz. SIAM Review, Volume 67, Issue 4, pp. 917–918, December 2025. DOI: 10.1137/25m1741121.

In the current academic landscape, nearly every mathematician will at some point be called upon to contribute—be it through teaching or research—to the burgeoning fields of data science and machine learning. Acquiring the necessary fundamentals in these areas ought to be straightforward. However, for many mathematicians, a significant language barrier arises when encountering the more computer-science-oriented literature. Bohn, Garcke, and Griebel tackle this challenge from a thoroughly mathematical perspective. Their notation is impeccable, consistently clarifying whether the subject at hand is a scalar, vector, matrix, or function. Concepts are introduced with unwavering rigor, distinguishing between well-posed and ill-posed problems, as well as between algorithms backed by convergence results and those that remain heuristic in nature.
"Book Review: Classical Numerical Analysis: A Comprehensive Course," by Guosheng Fu. SIAM Review, Volume 67, Issue 4, pp. 914–915, December 2025. DOI: 10.1137/24m1700983.

This textbook on classical numerical analysis is a true gem for students, educators, and practitioners in applied mathematics. With its broad scope and meticulous organization, it serves as a cornerstone reference for a wide range of topics from numerical linear algebra to numerical differential equations, optimization, and approximation theory. Whether you are teaching or attending an entry-level graduate course, this textbook offers all the essential tools to build a solid foundation in numerical analysis.
"Least Squares and the Not-Normal Equations," by Andrew J. Wathen. SIAM Review, Volume 67, Issue 4, pp. 865–872, December 2025. DOI: 10.1137/23m161851x.

Abstract. For many of the classic problems of linear algebra, effective and efficient numerical algorithms exist, particularly for situations where dimensions are not too large. The linear least squares problem is one such example: excellent algorithms exist when QR factorization is feasible. However, for large-dimensional (often sparse) linear least squares problems, good solution algorithms currently exist only for well-conditioned problems or for problems with lots of data but only a few variables in the solution. Such approaches ubiquitously employ the normal equations and so have to contend with conditioning issues. We explore some alternative approaches, which we characterize as not-normal equations, where conditioning may not be such an issue.
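The conditioning issue the abstract alludes to is easy to see numerically: forming the normal-equations matrix AᵀA squares the condition number of A, whereas a QR-based solve works with A directly. A minimal NumPy sketch (the matrix and right-hand side below are synthetic illustrations, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tall matrix A with prescribed singular values, so kappa(A) = 1e6.
m, n = 200, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)
A = U @ np.diag(s) @ V.T
b = rng.standard_normal(m)

# Forming the normal equations squares the condition number: ~1e12 vs ~1e6.
kappa_A = np.linalg.cond(A)
kappa_AtA = np.linalg.cond(A.T @ A)

# QR-based least squares never forms A^T A.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Normal-equations solve, for comparison; its accuracy is governed by kappa(A)^2.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
```

With double precision, the normal-equations route effectively halves the number of reliable digits once kappa(A) approaches 1e8, which is why the paper's "not-normal" alternatives are attractive for ill-conditioned problems.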
"On the Loewner Framework, the Kolmogorov Superposition Theorem, and the Curse of Dimensionality," by Athanasios C. Antoulas, Ion Victor Gosea, and Charles Poussot-Vassal. SIAM Review, Volume 67, Issue 4, pp. 737–770, December 2025. DOI: 10.1137/24m1656657.

Abstract. The Loewner framework is an interpolatory approach for the approximation of linear and nonlinear systems. The purpose here is to extend this framework to linear parametric systems with an arbitrary number [math] of parameters. To achieve this, a new generalized multivariate rational function realization is proposed. We then introduce the [math]-dimensional multivariate Loewner matrices and show that they can be computed by solving a set of coupled Sylvester equations. The null space of these Loewner matrices allows the construction of multivariate rational functions in barycentric form. The principal result of this work is to show how the null space of [math]-dimensional Loewner matrices can be computed using a sequence of one-dimensional Loewner matrices. Thus, a decoupling of the variables is achieved, which leads to a drastic reduction of the computational burden. Equally importantly, this burden is alleviated by avoiding the explicit construction of large-scale [math]-dimensional Loewner matrices of size [math]. The proposed methodology achieves the decoupling of variables, leading (i) to a reduction in complexity from [math] to below [math] when [math], and (ii) to memory storage bounded by the largest variable dimension rather than their product, thus taming the curse of dimensionality and making the solution scalable to very large data sets. This decoupling of the variables leads to a result similar to the Kolmogorov superposition theorem for rational functions. Thus, making use of barycentric representations, every multivariate rational function can be computed using the composition and superposition of single-variable functions. Finally, we suggest two algorithms (one direct and one iterative) to construct, directly from data, multivariate (or parametric) realizations ensuring (approximate) interpolation. Numerical examples highlight the effectiveness and scalability of the method.
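For intuition, a one-dimensional instance of the construction can be sketched in a few lines: build the Loewner matrix from a left/right partition of the data, take a null vector as barycentric weights, and evaluate the resulting rational interpolant. The function, sample points, and sizes below are illustrative choices, not from the paper; the data come from a degree-one rational function, so the Loewner matrix has an exact null space:

```python
import numpy as np

# Sampled (pretend-unknown) rational function.
def f(x):
    return 1.0 / (1.0 + x)

lam = np.array([0.0, 1.0])   # right interpolation points (support points)
mu = np.array([2.0, 3.0])    # left interpolation points
w, v = f(lam), f(mu)

# One-dimensional Loewner matrix: L[i, j] = (v_i - w_j) / (mu_i - lam_j).
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])

# Barycentric weights from the null space of L (singular vector for the
# smallest singular value).
_, S, Vt = np.linalg.svd(L)
c = Vt[-1]

def r(x):
    # Barycentric rational interpolant built from the Loewner null space.
    return np.sum(c * w / (x - lam)) / np.sum(c / (x - lam))
```

Because the data are exactly rational, the smallest singular value of L is numerically zero and r reproduces f away from the sample points; the paper's contribution is doing this for many parameters at once without ever assembling the full multidimensional Loewner matrix.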
"Book Review: A New Lotka–Volterra Model of Competition With Strategic Aggression: Civil Wars When Strategy Comes into Play," by Rikha Rahim, Ahmad F. Sihombing, Ika W. Palupi, and Nona T. Sapulette. SIAM Review, Volume 67, Issue 4, pp. 915–917, December 2025. DOI: 10.1137/25m1740838.

This book offers a fresh and innovative approach to competitive system modeling by introducing strategic aggression as a central factor in population dynamics. Through rigorous mathematical analysis, the authors provide valuable insights for researchers and academics in applied mathematics, economics, and social sciences. Moreover, the model’s relevance to real-world phenomena such as the increasing frequency and duration of civil conflicts over recent decades further enhances the book’s significance, making it a valuable resource for those seeking to understand conflict dynamics through a mathematical lens. We confirm that we have no affiliations with the book’s authors or editors. However, we recognize that this book aligns well with one of the courses in our research group, the Industrial and Financial Mathematics Research Group, specifically in the study of dynamic systems, where we also explore extensions of the Lotka–Volterra model by incorporating aggressive strategy considerations.
"Book Review: Mathematical Analysis: A Very Short Introduction," by Anita T. Layton. SIAM Review, Volume 67, Issue 4, p. 913, December 2025. DOI: 10.1137/24m1676211.

This is the second book I have reviewed in the Oxford University Press A Very Short Introduction series. The first one was Eric Lauga’s Fluid Mechanics: A Very Short Introduction, reviewed in this journal a year ago. These A Very Short Introduction books are pocket-sized and written by expert authors, and (judging by the book list published by the Oxford University Press) they present all kinds of interesting and challenging topics in a readable way. Earl’s book is no exception—its author has succeeded in making a few highly technical topics accessible.
"Turning Big Data Into Tiny Data: Coresets for Unsupervised Learning Problems," by Dan Feldman, Melanie Schmidt, and Christian Sohler. SIAM Review, Volume 67, Issue 4, pp. 801–861, December 2025. DOI: 10.1137/25m1799684.

Abstract. We develop and analyze a method to reduce the size of a very large set of data points in a high-dimensional Euclidean space [math] to a small set of weighted points such that the result of a predetermined data analysis task on the reduced set is approximately the same as that for the original point set. For example, computing the first [math] principal components of the reduced set will return approximately the first [math] principal components of the original set, or computing the centers of a k-means clustering on the reduced set will return an approximation for the original set. Such a reduced set is also known as a coreset. The main new features of our construction are that the cardinality of the reduced set is independent of the dimension [math] of the input space and that the sets are mergeable [P. K. Agarwal et al., Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, 2012, pp. 23–34]. The latter property means that the union of two reduced sets is a reduced set for the union of the two original sets. It allows us to turn our methods into streaming or distributed algorithms using standard approaches. For problems such as k-means and subspace approximation the coreset sizes are also independent of the number of input points. Our method is based on data-dependently projecting the points onto a low-dimensional subspace and reducing the cardinality of the points inside this subspace using known methods. The proposed approach works for a wide range of data analysis techniques including k-means clustering, principal component analysis, and subspace clustering. The main conceptual contribution is a new coreset definition that allows the costs that appear for every solution to be charged to an additive constant.
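The "charge costs to an additive constant" idea has an exact one-cluster analogue that also illustrates mergeability: for 1-means, a single weighted point (the centroid with weight n) plus the constant Σ‖x−μ‖² reproduces the clustering cost of every candidate center, and two such summaries combine into a summary of the union. A small NumPy sketch of this special case (a textbook identity, not the paper's general construction):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))

def summarize(X):
    # (weight, weighted point, additive constant): an exact 1-means coreset.
    n, mu = len(X), X.mean(axis=0)
    delta = np.sum((X - mu) ** 2)
    return n, mu, delta

def cost(X, c):
    # Full 1-means cost of center c.
    return np.sum((X - c) ** 2)

def coreset_cost(summary, c):
    # Cost charged to one weighted point plus an additive constant:
    # sum ||x - c||^2 = n ||mu - c||^2 + sum ||x - mu||^2.
    n, mu, delta = summary
    return n * np.sum((mu - c) ** 2) + delta

def merge(s1, s2):
    # Mergeability: summaries of two sets combine into one for their union.
    n1, mu1, d1 = s1
    n2, mu2, d2 = s2
    n = n1 + n2
    mu = (n1 * mu1 + n2 * mu2) / n
    delta = d1 + d2 + n1 * np.sum((mu1 - mu) ** 2) + n2 * np.sum((mu2 - mu) ** 2)
    return n, mu, delta

c = rng.standard_normal(5)
s = summarize(X)
s12 = merge(summarize(X[:400]), summarize(X[400:]))
```

For k > 1 clusters and subspace problems the paper replaces this exact identity with an approximate one, which is where the projection step and the new coreset definition come in.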
"Numerical Review of Mathieu Function Programs for Integer Orders and Real Parameters," by Ho-Chul Shin. SIAM Review, Volume 67, Issue 4, pp. 661–733, December 2025. DOI: 10.1137/23m1572726.

Abstract. The Mathieu function is a special function satisfying the Mathieu differential equation. Since its inception in 1868, numerous algorithms and programs have been published to calculate it, and so it is about time to review the performance of available software. First, the fundamentals of Mathieu functions are summarized, including their definition, normalization, nomenclature, and methods of solution. Then, we review several programs for Mathieu functions of integer orders with real parameters and compare the results numerically by running the individual software; in addition, Bessel function routines are also compared. Finally, a straightforward algorithm is recommended, with codes written in MATLAB and GNU Octave.
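The review recommends MATLAB/GNU Octave codes; as an illustration of the standard approach such codes build on, the characteristic values a_{2n}(q) of the even, π-periodic solutions ce_{2n} can be computed as eigenvalues of a symmetrized truncation of the tridiagonal matrix arising from the Fourier-coefficient recurrence. The Python sketch below follows that classical formulation and is not the paper's recommended code:

```python
import numpy as np

def mathieu_a_even(q, n_modes=4, N=40):
    # Characteristic values a_{2n}(q) of the even pi-periodic Mathieu
    # functions ce_{2n}, via the truncated Fourier recurrence:
    #   a A_0 = q A_2,
    #   (a - 4) A_2 = q (2 A_0 + A_4),
    #   (a - 4 r^2) A_{2r} = q (A_{2r-2} + A_{2r+2}),  r >= 2,
    # symmetrized with a sqrt(2) scaling of the first row/column.
    B = np.zeros((N, N))
    B[0, 1] = B[1, 0] = np.sqrt(2.0) * q
    for r in range(1, N):
        B[r, r] = (2.0 * r) ** 2
        if r + 1 < N:
            B[r, r + 1] = B[r + 1, r] = q
    return np.sort(np.linalg.eigvalsh(B))[:n_modes]

print(mathieu_a_even(0.0))  # q = 0 reduces to a_{2n} = (2n)^2: [0, 4, 16, 36]
```

The truncation size N controls accuracy; the eigenvalues settle quickly as N grows for moderate q, which is one of the convergence behaviors the review examines across software packages.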
"DUE: A Deep Learning Framework and Library for Modeling Unknown Equations," by Junfeng Chen, Kailiang Wu, and Dongbin Xiu. SIAM Review, Volume 67, Issue 4, pp. 873–902, December 2025. DOI: 10.1137/24m1671827.

Abstract. Equations, particularly differential equations, are fundamental for understanding natural phenomena and predicting complex dynamics across various scientific and engineering disciplines. However, the governing equations for many complex systems remain unknown due to intricate underlying mechanisms. Recent advancements in machine learning and data science offer a new paradigm for modeling unknown equations from measurement or simulation data. This paradigm shift, known as data-driven discovery or modeling, stands at the forefront of artificial intelligence for science (AI4Science), with significant progress made in recent years. In this paper, we introduce a systematic educational framework for data-driven modeling of unknown equations using deep learning. This versatile framework is capable of learning unknown ordinary differential equations (ODEs), partial differential equations (PDEs), differential-algebraic equations (DAEs), integro-differential equations (IDEs), stochastic differential equations (SDEs), reduced or partially observed systems, and nonautonomous differential equations. Based on this framework, we have developed Deep Unknown Equations (DUE), an open-source software package designed to facilitate the data-driven modeling of unknown equations using modern deep learning techniques. DUE serves as an educational tool for classroom instruction, enabling students and newcomers to gain hands-on experience with differential equations, data-driven modeling, and contemporary deep learning approaches such as fully connected neural networks (FNNs), residual neural networks (ResNet), generalized ResNet (gResNet), operator semigroup networks (OSG-Net), and transformers from large language models (LLMs). Additionally, DUE is a versatile and accessible toolkit for researchers across various scientific and engineering fields. It is applicable not only for learning unknown equations from data, but also for surrogate modeling of known, yet complex equations that are costly to solve using traditional numerical methods. We provide detailed descriptions of DUE and demonstrate its capabilities through diverse examples, which serve as templates that can be easily adapted for other applications. The source code for DUE is available at https://github.com/AI4Equations/due.