We consider the weak Galerkin finite element approximation of a singularly perturbed biharmonic elliptic problem on a unit square domain with clamped boundary conditions. A Shishkin mesh is used for the domain discretization, as the solution exhibits boundary layers near the domain boundary. Error estimates in an equivalent $H^{2}$-norm are established, and the uniform convergence of the proposed method is proved. Numerical examples are presented that corroborate our theoretical findings.
"Anisotropic Error Analysis of Weak Galerkin finite element method for Singularly Perturbed Biharmonic Problems," Aayushman Raina, Srinivasan Natesan, Şuayip Toprakseven. arXiv:2409.07217 [math.NA], 2024-09-11.
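The abstract does not give the mesh parameters; as background, a standard 1-D piecewise-uniform Shishkin construction for layers at both endpoints of [0, 1] can be sketched as follows. The transition parameter tau = min(1/4, sigma * eps * ln N) and the constant sigma are typical choices from the Shishkin-mesh literature, not taken from this paper:

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] with boundary layers
    at both endpoints. N (number of intervals) must be divisible by 4;
    sigma is a mesh constant usually tied to the scheme's order."""
    tau = min(0.25, sigma * eps * np.log(N))          # transition point
    left = np.linspace(0.0, tau, N // 4 + 1)          # fine near x = 0
    mid = np.linspace(tau, 1.0 - tau, N // 2 + 1)     # coarse interior
    right = np.linspace(1.0 - tau, 1.0, N // 4 + 1)   # fine near x = 1
    return np.concatenate([left, mid[1:], right[1:]])

x = shishkin_mesh(64, 1e-4)
```

A tensor product of two such 1-D meshes gives the 2-D Shishkin mesh on the unit square used for problems of this type.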
This paper presents an innovative framework that integrates hierarchical matrix (H-matrix) compression techniques into the structure and training of Physics-Informed Neural Networks (PINNs). By leveraging the low-rank properties of matrix sub-blocks, the proposed dynamic, error-bounded H-matrix compression method significantly reduces computational complexity and storage requirements without compromising accuracy. This approach is rigorously compared to traditional compression techniques, such as Singular Value Decomposition (SVD), pruning, and quantization, demonstrating superior performance, particularly in maintaining the Neural Tangent Kernel (NTK) properties critical for the stability and convergence of neural networks. The findings reveal that H-matrix compression not only enhances training efficiency but also ensures the scalability and robustness of PINNs for complex, large-scale applications in physics-based modeling. This work offers a substantial contribution to the optimization of deep learning models, paving the way for more efficient and practical implementations of PINNs in real-world scenarios.
"Dynamic Error-Bounded Hierarchical Matrices in Neural Network Compression," John Mango, Ronald Katende. arXiv:2409.07028 [math.NA], 2024-09-11.
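The dynamic H-matrix algorithm itself is not spelled out in the abstract; the core primitive such methods build on, error-bounded low-rank truncation of a matrix sub-block via the SVD, can be sketched as follows (the function name and tolerance interface are illustrative assumptions, not the paper's API):

```python
import numpy as np

def truncate_block(block, tol):
    """Low-rank factorization of a matrix sub-block with a guaranteed
    spectral-norm error bound: ||block - L @ R||_2 <= tol."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    # keep the smallest rank r such that every discarded singular value
    # is <= tol (the spectral-norm truncation error equals s[r])
    r = int(np.sum(s > tol))
    return U[:, :r] * s[:r], Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 100))  # rank 20
L, R = truncate_block(A, 1e-8)
```

An H-matrix applies this block-wise, with the admissible off-diagonal blocks stored only through their factors `L` and `R`.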
We propose and rigorously analyze a finite element method for the approximation of stationary Fokker--Planck--Kolmogorov (FPK) equations subject to periodic boundary conditions in two settings: one with weakly differentiable coefficients, and one with merely essentially bounded measurable coefficients under a Cordes-type condition. These problems arise as governing equations for the invariant measure in the homogenization of nondivergence-form equations with large drifts. In particular, the Cordes setting guarantees the existence and uniqueness of a square-integrable invariant measure. We then suggest and rigorously analyze an approximation scheme for the effective diffusion matrix in both settings, based on the finite element scheme for stationary FPK problems developed in the first part. Finally, we demonstrate the performance of the methods through numerical experiments.
"Finite element approximation of stationary Fokker--Planck--Kolmogorov equations with application to periodic numerical homogenization," Timo Sprekeler, Endre Süli, Zhiwen Zhang. arXiv:2409.07371 [math.NA], 2024-09-11.
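For reference, the stationary FPK equation for the invariant density $m$ alluded to in the abstract reads, in its standard nondivergence-form statement with diffusion matrix $A = (a_{ij})$ and drift $b = (b_i)$ (normalization conventions for $A$ vary by a factor of $\tfrac12$ across the literature; the periodic cell $Y$ is an assumption matching the periodic boundary conditions mentioned above):

```latex
\sum_{i,j=1}^{n} \partial^2_{ij}\big(a_{ij}(y)\, m(y)\big)
  - \sum_{i=1}^{n} \partial_i\big(b_i(y)\, m(y)\big) = 0
  \quad \text{in } Y,
\qquad m \ \text{$Y$-periodic}, \qquad \int_Y m \,\mathrm{d}y = 1, \quad m \ge 0.
```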
This paper introduces generative Residual Networks (ResNet) as a surrogate Machine Learning (ML) tool for Large Eddy Simulation (LES) Sub-Grid Scale (SGS) resolving. The study investigates the impact of incorporating Dual-Scale Residual Blocks (DS-RB) within the ResNet architecture. Two LES SGS resolving models are proposed and tested on a priori analysis test cases: a super-resolution model (SR-ResNet) and an SGS stress tensor inference model (SGS-ResNet). The SR-ResNet model's task is to upscale LES solutions from coarse to finer grids by inferring unresolved SGS velocity fluctuations; it succeeds in preserving high-frequency velocity fluctuation information and in matching the energy spectrum of higher-resolution LES solutions. Furthermore, employing DS-RB enhances the prediction accuracy and precision of high-frequency velocity fields compared to Single-Scale Residual Blocks (SS-RB), evident in both the spatial and spectral domains. The SR-ResNet model is trained and tested on filtered/downsampled 2-D LES planar jet injection problems at two Reynolds numbers, two jet configurations, and two upscale ratios. In the case of SGS stress tensor inference, both SS-RB and DS-RB exhibit higher prediction accuracy than the Smagorinsky model with reference to the true DNS SGS stress tensor, with the DS-RB-based SGS-ResNet showing stronger statistical alignment with the DNS data. The SGS-ResNet model is tested on a filtered/downsampled 2-D DNS isotropic homogeneous decaying turbulence problem. The adoption of DS-RB incurs notable increases in network size, training time, and forward inference time: the network size expands by over tenfold, and training and forward inference times increase by approximately 0.5 and 3 times, respectively.
"Dual scale Residual-Network for turbulent flow sub grid scale resolving: A prior analysis," Omar Sallam, Mirjam Fürth. arXiv:2409.07605 [math.NA], 2024-09-11.
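The paper's DS-RB presumably uses learned multi-channel convolutions with activations; the minimal single-channel, linear sketch below only illustrates the dual-path-plus-skip topology the abstract describes (kernel sizes 3 and 5 are illustrative assumptions):

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D convolution (odd kernel, single channel,
    stride 1); stand-in for a learned conv layer."""
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def dual_scale_residual_block(x, k_small, k_large):
    """y = x + f_small(x) + f_large(x): two parallel paths with
    different receptive-field scales, merged by a skip connection."""
    return x + conv2d_same(x, k_small) + conv2d_same(x, k_large)

rng = np.random.default_rng(1)
x = rng.standard_normal((16, 16))
y = dual_scale_residual_block(x, 0.1 * rng.standard_normal((3, 3)),
                              0.1 * rng.standard_normal((5, 5)))
```

The skip connection keeps the block an identity-plus-correction map, which is what makes deep stacks of such blocks trainable.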
A unified construction of canonical $H^m$-nonconforming finite elements is developed for $n$-dimensional simplices for any $m, n \geq 1$. Consistency with the Morley-Wang-Xu elements [Math. Comp. 82 (2013), pp. 25-43] is maintained when $m \leq n$. In the general case, the degrees of freedom and the shape function space exhibit well-matched multi-layer structures that ensure their alignment. Building on the concept of the nonconforming bubble function, the unisolvence is established using an equivalent integral-type representation of the shape function space and by applying induction on $m$. The corresponding nonconforming finite element method applies to $2m$-th order elliptic problems, with numerical results for $m=3$ and $m=4$ in 2D supporting the theoretical analysis.
"A construction of canonical nonconforming finite element spaces for elliptic equations of any order in any dimension," Jia Li, Shuonan Wu. arXiv:2409.06134 [math.NA], 2024-09-10.
This paper presents a novel method for eigenvalue computation using a distributed cooperative neural network framework. Unlike traditional techniques that struggle with scalability in large systems, our decentralized algorithm enables multiple autonomous agents to collaboratively estimate the smallest eigenvalue of large matrices. Each agent uses a localized neural network model, refining its estimates through inter-agent communication. Our approach guarantees convergence to the true eigenvalue, even in the presence of communication failures or network disruptions. Theoretical analysis confirms the robustness and accuracy of the method, while empirical results demonstrate its superior performance compared to several traditional centralized algorithms.
"Distributed Cooperative AI for Large-Scale Eigenvalue Computations Using Neural Networks," Ronald Katende. arXiv:2409.06746 [math.NA], 2024-09-10.
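The abstract does not describe the agents' update rule. As a point of reference, the classical centralized approach to the same task, finding the smallest eigenvalue of a symmetric positive definite matrix, is inverse power iteration; this sketch is the kind of baseline such a method would be compared against, not the paper's algorithm:

```python
import numpy as np

def smallest_eigenvalue(A, iters=200):
    """Inverse power iteration: converges to the eigenvalue of
    symmetric A closest to zero (the smallest one when A is SPD)."""
    Ainv = np.linalg.inv(A)  # in practice: factorize once, solve per step
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(iters):
        v = Ainv @ v
        v /= np.linalg.norm(v)
    return v @ A @ v  # Rayleigh quotient of the converged eigenvector

A = np.diag([5.0, 2.0, 0.5, 9.0])
lam = smallest_eigenvalue(A)
```

The cost of the factorization and the global matrix access are exactly the scalability bottlenecks a decentralized multi-agent scheme would aim to avoid.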
Automatic differentiation is everywhere, but there exists only minimal documentation of how it works in complex arithmetic beyond stating "derivatives in $\mathbb{C}^d$" $\cong$ "derivatives in $\mathbb{R}^{2d}$" and, at best, shallow references to Wirtinger calculus. Unfortunately, the equivalence $\mathbb{C}^d \cong \mathbb{R}^{2d}$ becomes insufficient as soon as we need to derive custom gradient rules, e.g., to avoid differentiating "through" expensive linear algebra functions or differential equation simulators. To combat such a lack of documentation, this article surveys forward- and reverse-mode automatic differentiation with complex numbers, covering topics such as Wirtinger derivatives, a modified chain rule, and different gradient conventions while explicitly avoiding holomorphicity and the Cauchy--Riemann equations (which would be far too restrictive). To be precise, we will derive, explain, and implement a complex version of Jacobian-vector and vector-Jacobian products almost entirely with linear algebra, without relying on complex analysis or differential geometry. This tutorial is a call to action, for users and developers alike, to take complex values seriously when implementing custom gradient propagation rules -- the manuscript explains how.
"A tutorial on automatic differentiation with complex numbers," Nicholas Krämer. arXiv:2409.06752 [math.NA], 2024-09-10.
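The Wirtinger chain rule mentioned above can be checked numerically on a deliberately non-holomorphic function. For $f(z) = z^2 + \bar{z}$ the Wirtinger derivatives are $\partial f/\partial z = 2z$ and $\partial f/\partial\bar{z} = 1$, and the forward-mode JVP is $df = (\partial f/\partial z)\,v + (\partial f/\partial\bar{z})\,\bar{v}$; the test function and the finite-difference check are illustrative, not taken from the tutorial:

```python
import numpy as np

def f(z):
    # non-holomorphic: the conj(z) term breaks the Cauchy-Riemann equations
    return z**2 + np.conj(z)

def jvp_wirtinger(z, v):
    """Forward-mode JVP via the Wirtinger chain rule:
    df = (df/dz) v + (df/dzbar) conj(v)."""
    df_dz = 2 * z      # Wirtinger derivative w.r.t. z
    df_dzbar = 1.0     # Wirtinger derivative w.r.t. conj(z)
    return df_dz * v + df_dzbar * np.conj(v)

def jvp_real(z, v, h=1e-7):
    """Same JVP computed by viewing f as a map R^2 -> R^2:
    central differences along the real and imaginary axes."""
    d_re = (f(z + h) - f(z - h)) / (2 * h)           # df/d(Re z)
    d_im = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h) # df/d(Im z)
    return d_re * v.real + d_im * v.imag

z, v = 1.0 + 2.0j, 0.3 - 0.7j
```

Note that the real-axis and imaginary-axis derivatives differ (non-holomorphicity), yet the two JVPs agree, which is exactly the point of working with both Wirtinger derivatives rather than a single complex derivative.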
This work proposes and analyzes a new class of numerical integrators for computing low-rank approximations to solutions of matrix differential equations. We combine an explicit Runge-Kutta method with repeated randomized low-rank approximation to keep the rank of the stages limited. The so-called generalized Nyström method is particularly well suited for this purpose; it builds low-rank approximations from random sketches of the discretized dynamics. In contrast, all existing dynamical low-rank approximation methods are deterministic and usually perform tangent-space projections to limit rank growth. Using such tangential projections can result in larger error compared to approximating the dynamics directly. Moreover, sketching allows for increased flexibility and efficiency by choosing structured random matrices adapted to the structure of the matrix differential equation. Under suitable assumptions, we establish moment and tail bounds on the error of our randomized low-rank Runge-Kutta methods. When combining the classical Runge-Kutta method with generalized Nyström, we obtain a method called Rand RK4, which exhibits fourth-order convergence numerically, up to the low-rank approximation error. For a modified variant of Rand RK4, we also establish fourth-order convergence theoretically. Numerical experiments for a range of examples from the literature demonstrate that randomized low-rank Runge-Kutta methods compare favorably with two popular dynamical low-rank approximation methods, in terms of robustness and speed of convergence.
"Randomized low-rank Runge-Kutta methods," Hei Yin Lam, Gianluca Ceruti, Daniel Kressner. arXiv:2409.06384 [math.NA], 2024-09-10.
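The generalized Nyström approximation at the heart of the method can be sketched in a few lines: two independent Gaussian sketches capture the range and co-range of the matrix, and the rank-r approximation is assembled without ever forming a full SVD. This plain pinv-based variant is a textbook simplification (the paper's stabilized implementation may differ), and the sketch sizes are illustrative:

```python
import numpy as np

def generalized_nystrom(A, r, p=5, seed=None):
    """Rank-r generalized Nystrom approximation from two random sketches:
    A_hat = (A @ Om) @ pinv(Psi.T @ A @ Om) @ (Psi.T @ A),
    with Om of size n x r and Psi oversampled to size m x (r + p)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Om = rng.standard_normal((n, r))
    Psi = rng.standard_normal((m, r + p))
    AOm = A @ Om
    return AOm @ np.linalg.pinv(Psi.T @ AOm) @ (Psi.T @ A)

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 60))  # exact rank 8
A_hat = generalized_nystrom(A, r=8, seed=1)
```

For a matrix of exact rank r, Gaussian sketches recover it exactly with probability one; within a Runge-Kutta stage this truncation is what keeps the rank of the iterates bounded.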
We present an Eulerian vortex method based on the theory of flow maps to simulate the complex vortical motions of incompressible fluids. Central to our method is the novel incorporation of the flow-map transport equations for line elements, which, in combination with a bi-directional marching scheme for flow maps, enables the high-fidelity Eulerian advection of vorticity variables. The fundamental motivation is that, compared to impulse $\mathbf{m}$, which has recently been bridged with flow maps to encouraging results, vorticity $\boldsymbol{\omega}$ promises to be preferable for its numerical stability and physical interpretability. To realize the full potential of this novel formulation, we develop a new Poisson solving scheme for vorticity-to-velocity reconstruction that is both efficient and able to accurately handle the coupling near solid boundaries.
"An Eulerian Vortex Method on Flow Maps," Sinan Wang, Yitong Deng, Molin Deng, Hong-Xing Yu, Junwei Zhou, Duowen Chen, Taku Komura, Jiajun Wu, Bo Zhu. arXiv:2409.06201 [math.NA], 2024-09-10.
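The connection between line elements and vorticity that such flow-map methods exploit is Cauchy's vorticity formula for inviscid incompressible flow: the flow-map Jacobian transports the initial vorticity exactly. (The formula below is the standard one; that the paper's transport equations reduce to precisely this form is an inference from the abstract.) With forward flow map $\Phi$ driven by velocity $\mathbf{u}$:

```latex
\boldsymbol{\omega}\big(\Phi(\mathbf{a}, t), t\big)
  = \nabla_{\mathbf{a}} \Phi(\mathbf{a}, t)\, \boldsymbol{\omega}_0(\mathbf{a}),
\qquad
\frac{\partial \Phi}{\partial t}(\mathbf{a}, t)
  = \mathbf{u}\big(\Phi(\mathbf{a}, t), t\big),
\quad \Phi(\mathbf{a}, 0) = \mathbf{a}.
```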
Causality analysis is a powerful tool for determining cause-and-effect relationships between variables in a system by quantifying the influence of one variable on another. Despite significant advancements in the field, many existing studies are constrained by their focus on unidirectional causality or Gaussian external forcing, limiting their applicability to complex real-world problems. This study proposes a novel data-driven approach to causality analysis for complex stochastic differential systems, integrating the concepts of Liang-Kleeman information flow and linear inverse modeling. Our method models environmental noise as either memoryless Gaussian white noise or memory-retaining Ornstein-Uhlenbeck colored noise, and allows for self and mutual causality, providing a more realistic representation and interpretation of the underlying system. Moreover, this LIM-based approach can identify the individual contribution of dynamics and correlation to causality. We apply this approach to re-examine the causal relationships between the El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD), two major climate phenomena that significantly influence global climate patterns. In general, regardless of the type of noise used, the causality between ENSO and IOD is mutual but asymmetric, with the causality map reflecting an ENSO-like pattern consistent with previous studies. Notably, in the case of colored noise, the noise memory map reveals a hotspot in the Niño 3 region, which is further related to the information flow. This suggests that our approach offers a more comprehensive framework and provides deeper insights into the causal inference of global climate systems.
"A Liang-Kleeman Causality Analysis based on Linear Inverse Modeling," Justin Lien. arXiv:2409.06797 [math.NA], 2024-09-10.
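For orientation, the classical bivariate Liang-Kleeman information-flow estimator (the white-noise, linear-dynamics special case; not the paper's LIM-based extension) can be computed directly from sample covariances. The formula below is Liang's maximum-likelihood estimate, and the synthetic driving example is an illustrative assumption:

```python
import numpy as np

def liang_flow(x1, x2, dt=1.0):
    """Bivariate Liang-Kleeman information flow T_{2->1}: the rate at
    which x2's dynamics export information to x1, estimated from sample
    covariances under a linear-dynamics, white-noise assumption."""
    dx1 = (x1[1:] - x1[:-1]) / dt      # Euler-forward tendency of x1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(np.vstack([x1, x2]))
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.cov(x1, dx1)[0, 1]       # cov(x1, dx1/dt)
    c2d1 = np.cov(x2, dx1)[0, 1]       # cov(x2, dx1/dt)
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / (c11**2 * c22 - c11 * c12**2)

# synthetic example: x2 drives x1, but not vice versa
rng = np.random.default_rng(0)
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    x1[t + 1] = 0.8 * x1[t] + 0.6 * x2[t] + rng.standard_normal()
    x2[t + 1] = 0.8 * x2[t] + rng.standard_normal()
t21 = liang_flow(x1, x2)
t12 = liang_flow(x2, x1)
```

The asymmetry |T_{2->1}| > |T_{1->2}| recovered here is the same qualitative signature the study reports for the ENSO-IOD pair.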