Pub Date : 2024-03-04  DOI: 10.1007/s10092-024-00569-1
Rong Li, Bing Zheng
This paper addresses the tensor completion problem, whose task is to estimate missing values of a tensor from limited information. The crux of this problem is how to reasonably represent the low-rank structure embedded in the underlying data. In this work, we consider a new low-rank tensor completion model that combines a multi-directional partial tensor nuclear norm with total variation (TV) regularization. Specifically, the partial sum of the tensor nuclear norm (PSTNN) is used to narrow the gap between the tensor tubal rank and its lower convex envelope [i.e. the tensor nuclear norm (TNN)], and the TV regularization is adopted to maintain the smooth structure along the spatial dimensions. In addition, the weighted sum of the tensor nuclear norm (WSTNN) is introduced to replace the traditional TNN, extending the PSTNN to high-order tensors; it can also flexibly handle different correlations along different modes, resulting in an improved low d-tubal-rank approximation. To tackle this new model, we develop an alternating direction method of multipliers (ADMM) algorithm tailored to the proposed optimization problem. A theoretical analysis of the ADMM establishes the Karush–Kuhn–Tucker (KKT) conditions. Numerical examples demonstrate that the proposed method outperforms some state-of-the-art methods in both qualitative and quantitative aspects.
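The proximal operator associated with the partial sum of singular values is the partial singular value thresholding (PSVT) operator, which soft-thresholds only the tail singular values while leaving the largest ones untouched. A minimal matrix-level sketch (the function name and interface are illustrative assumptions; in PSTNN-type methods the operator is applied slice-wise in a transform domain inside the ADMM iterations):

```python
import numpy as np

def psvt(X, tau, N):
    """Partial singular value thresholding: keep the N largest singular
    values untouched and soft-threshold the remaining ones by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = s.copy()
    s[N:] = np.maximum(s[N:] - tau, 0.0)  # shrink only the tail
    return (U * s) @ Vt                   # rescale columns of U by s
```

Because the leading N singular values are not penalized, the operator avoids the over-shrinkage of dominant components that plain nuclear-norm thresholding incurs.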
Title: Tensor completion via multi-directional partial tensor nuclear norm with total variation regularization
Pub Date : 2024-03-01  DOI: 10.1007/s10092-024-00568-2
Qiaohua Liu, Shan Wang, Yimin Wei
The approximate linear equation Ax ≈ b with some columns of A error-free can be solved via the mixed least squares-total least squares (MTLS) model by minimizing a nonlinear function. This paper is devoted to the Gauss–Newton iteration for the MTLS problem. With an appropriately chosen initial vector, each step of the standard Gauss–Newton method requires solving a smaller-size least squares problem, in which the QR factorization of the coefficient matrix needs only a rank-one modification. To improve convergence, we devise a relaxed Gauss–Newton (RGN) method by introducing a relaxation factor and provide the corresponding convergence results. The convergence is shown to be closely related to the ratio of the squares of subspace-restricted singular values of [A, b]. The RGN method can also be modified to solve the total least squares (TLS) problem. Applying the RGN method to a Bursa–Wolf model in parameter estimation, numerical results show that the RGN-based MTLS method behaves much better than the RGN-based TLS method. The theoretical convergence properties of the RGN-MTLS algorithm are also illustrated by numerical tests.
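The relaxation idea can be seen on a generic nonlinear least-squares residual: at each step the linearized problem is solved and only a fraction theta of the step is taken. This sketch is a plain relaxed Gauss–Newton iteration under illustrative assumptions (a toy exponential-fit residual, not the MTLS objective, and no rank-one QR updating):

```python
import numpy as np

def relaxed_gauss_newton(r, J, x0, theta=1.0, tol=1e-10, max_iter=100):
    """Relaxed Gauss-Newton: solve the linearized least squares problem
    J(x) d = -r(x) at each step and update x <- x + theta * d,
    where theta is the relaxation factor."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + theta * d
        if np.linalg.norm(d) < tol:
            break
    return x

# Toy residual: fit y = exp(a*t) to noiseless data generated with a = 0.5.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.5 * t)
r = lambda x: np.exp(x[0] * t) - y              # residual vector
J = lambda x: (t * np.exp(x[0] * t))[:, None]   # Jacobian (20 x 1)
a_hat = relaxed_gauss_newton(r, J, x0=[0.0], theta=0.9)
```

For theta = 1 this reduces to the standard Gauss–Newton method; the paper's analysis ties the admissible theta to singular-value ratios of the data matrix.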
Title: A Gauss–Newton method for mixed least squares-total least squares problems
Pub Date : 2024-02-28  DOI: 10.1007/s10092-024-00571-7
Some popular stabilization techniques, such as the nonsymmetric interior penalty Galerkin (NIPG) method, have important applications in computational fluid dynamics. In this paper, we analyze an NIPG method on a Shishkin mesh for a singularly perturbed convection-diffusion problem, a typical simplified fluid model. According to the characteristics of the solution, the mesh and the numerical scheme, a new interpolation is designed for the convergence analysis. More specifically, Gauß–Lobatto interpolation and Gauß–Radau interpolation are introduced inside and outside the layer, respectively. On that basis, by selecting special penalty parameters at different mesh points, we establish supercloseness of order almost k+1 in an energy norm, where k ≥ 1 is the degree of the piecewise polynomials. Then, a simple post-processing operator is constructed, and it is proved that the corresponding post-processing allows the numerical solution to achieve higher accuracy. In this process, a new argument is proposed for the stability analysis of this operator. Finally, superconvergence is derived in a discrete energy norm. These conclusions are verified numerically. Furthermore, numerical experiments show that increasing the polynomial degree k or the mesh parameter N, decreasing the perturbation parameter ε, or using over-penalization may increase the condition number of the linear system. Therefore, the application of high-order algorithms must be considered cautiously.
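A Shishkin mesh is piecewise uniform, with half of the cells compressed into an O(ε ln N) neighbourhood of the layer. A sketch for a single boundary layer at x = 1 (the transition-point constants sigma and beta are placeholders; the precise choice in the paper depends on the scheme and the polynomial degree):

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0, beta=1.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a boundary layer
    at x = 1: transition point tau = min(1/2, (sigma*eps/beta)*ln N),
    N/2 uniform cells on [0, 1-tau] and N/2 on [1-tau, 1]."""
    tau = min(0.5, sigma * eps / beta * np.log(N))
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])  # N+1 nodes in total
```

For eps ≪ 1 the fine part has width O(ε ln N), so the layer is resolved while the total number of cells stays N.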
Title: Supercloseness of the NIPG method for a singularly perturbed convection diffusion problem on Shishkin mesh
Pub Date : 2024-02-21  DOI: 10.1007/s10092-024-00567-3
Xiaoli Feng, Xiaoyu Yuan, Meixia Zhao, Zhi Qian
In this paper, we consider numerical methods for both the forward and backward problems of a time-space fractional diffusion equation. For the two-dimensional forward problem, we propose a finite difference method; the stability of the scheme and a corresponding fast preconditioned conjugate gradient algorithm are given. The backward problem is ill-posed, so we treat it with a quasi-boundary-value method. Based on the Fourier transform, we obtain two kinds of order-optimal convergence rates using an a-priori and an a-posteriori regularization parameter choice rule, respectively. Numerical examples for both the forward and backward problems show that the proposed numerical methods work well.
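The quasi-boundary-value idea replaces the ill-posed final-value condition u(·, T) = g by the well-posed perturbed condition u(·, T) + α u(·, 0) = g. The sketch below illustrates this for the classical heat equation on a periodic interval, where everything is explicit in Fourier space; the paper works with a time-space fractional operator instead, so the symbol exp(-ξ²T) here is only a stand-in for the true decay factor:

```python
import numpy as np

def qbv_backward_heat(g, T, alpha, L=2 * np.pi):
    """Quasi-boundary-value regularization for the classical backward
    heat equation on a periodic interval of length L: given noisy data
    g ~ u(., T), recover u(., 0). The perturbed condition
    u(T) + alpha*u(0) = g gives, mode by mode in Fourier space,
    u0_hat = g_hat / (exp(-xi^2 * T) + alpha)."""
    n = len(g)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular frequencies
    u0_hat = np.fft.fft(g) / (np.exp(-xi**2 * T) + alpha)
    return np.real(np.fft.ifft(u0_hat))
```

The parameter α caps the amplification of high frequencies at 1/α, which is exactly where the a-priori or a-posteriori choice rules enter.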
Title: Numerical methods for the forward and backward problems of a time-space fractional diffusion equation
Pub Date : 2024-02-19  DOI: 10.1007/s10092-023-00564-y
Peter Binev, Andrea Bonito, Ronald DeVore, Guergana Petrova
This paper studies the problem of learning an unknown function f from given data about f. The learning problem is to produce an approximation f̂ to f that predicts the values of f away from the data. There are numerous settings for this learning problem, depending on (i) what additional information we have about f (known as a model class assumption), (ii) how we measure the accuracy of how well f̂ predicts f, (iii) what is known about the data and data sites, and (iv) whether the data observations are polluted by noise. A mathematical description of the optimal performance possible (the smallest possible error of recovery) is known in the presence of a model class assumption. Under standard model class assumptions, it is shown in this paper that a near-optimal f̂ can be found by solving a certain finite-dimensional over-parameterized optimization problem with a penalty term. Here, near-optimal means that the error is bounded by a fixed constant times the optimal error. This explains the advantage of over-parameterization, which is commonly used in modern machine learning. The main results of this paper prove that over-parameterized learning with an appropriate loss function gives a near-optimal approximation f̂ of the function f from which the data is collected. Quantitative bounds are given for how much over-parameterization needs to be employed and how the penalization needs to be scaled in order to guarantee a near-optimal recovery of f. An extension of these results to the case where the data is polluted by additive deterministic noise is also given.
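As a toy illustration of the over-parameterized-with-penalty recipe (the random-feature model, all numbers, and the penalty scale are illustrative choices, not the paper's construction), one can fit m samples with n ≫ m parameters and a small quadratic penalty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: m = 10 noiseless samples of a smooth target at random sites.
m = 10
x = rng.uniform(-1.0, 1.0, m)
y = np.sin(np.pi * x)

# Over-parameterized model: n = 200 random cosine features, n >> m.
n = 200
w = rng.normal(size=n)
b = rng.uniform(0.0, 2 * np.pi, n)
features = lambda t: np.cos(np.outer(t, w) + b) / np.sqrt(n)

# Penalized least squares: the small penalty selects one of the many
# parameter vectors that (nearly) fit the data.
lam = 1e-8
A = features(x)
theta = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
residual = np.max(np.abs(A @ theta - y))
```

With n ≫ m the data constraints are underdetermined, and it is the penalty scaling that picks out a well-behaved solution among the many interpolants, mirroring the role of the penalty term in the paper's optimization problem.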
Title: Optimal learning
Pub Date : 2024-02-07  DOI: 10.1007/s10092-024-00566-4
Mustapha Bahari, El-Houssaine Quenjel, Mohamed Rhoudaf
In this paper we conceive and analyze a discrete duality finite volume (DDFV) scheme for the unsteady generalized thermistor problem, including a p-Laplacian for the diffusion and a Joule heating source. As in the continuous setting, the main difficulty in the design of the discrete model comes from the Joule heating term. To cope with this issue, the Joule heating term is replaced with an equivalent key formulation on which a fully implicit scheme is constructed. By introducing a carefully designed cut-off function in the proposed discretization, we are able to recover the energy estimates on the discrete temperature. Another feature of this approach is that we dispense with the discrete maximum principle on the approximate electric potential, which would in essence pose restrictive constraints on the mesh shape. Then, the existence of a discrete solution to the coupled scheme is established. Compactness estimates are also shown. Under general assumptions on the data and meshes, the convergence of the numerical scheme is addressed. Numerical results are finally presented to show the efficiency and accuracy of the proposed methodology as well as the behavior of the implemented nonlinear solver.
Title: Discrete duality finite volume scheme for a generalized Joule heating problem
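The abstract does not specify the cut-off function, but in energy estimates for Joule-heating models the standard device is truncation at a level k, which caps the test function so the quadratic source term stays controllable; a one-line sketch of that standard truncation (an assumption, not the paper's exact construction):

```python
import numpy as np

def cutoff(s, k):
    """Truncation at level k: T_k(s) = max(-k, min(k, s)).
    Applied to the discrete temperature, it bounds the test function
    so the Joule heating term can be absorbed in energy estimates."""
    return np.clip(s, -k, k)
```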
Pub Date : 2024-02-01  DOI: 10.1007/s10092-023-00563-z
Djoko Kamdem Jules, Gidey Hagos, Koko Jonas, Sayah Toni
In this work, three discontinuous Galerkin (DG) methods are formulated and analysed to solve the Stokes equations with a power-law slip boundary condition. The numerical examples exhibited confirm the theoretical findings; moreover, we also test the methods on the lid-driven cavity problem and compare the three DG methods.
Title: Discontinuous Galerkin methods for Stokes equations under power law slip boundary condition: a priori analysis
Pub Date : 2024-01-29  DOI: 10.1007/s10092-023-00565-x
Ben Adcock, Nick Dexter, Sebastian Moraga
Over the past several decades, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that are solutions to parametric differential equations in various fields, e.g. chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to obtain algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples m. Our work focuses on providing theoretical approximation guarantees for the class of so-called (b, ε)-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds using a reduction to a discrete problem in combination with the theory of m-widths, Gelfand widths and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases, we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies which achieve rates close to the lower bounds.
Title: Optimal approximation of infinite-dimensional holomorphic functions
Pub Date : 2024-01-11  DOI: 10.1007/s10092-023-00562-0
Yanchen He, Christoph Schwab
In an open, bounded Lipschitz polygon Ω ⊂ ℝ², we establish weighted analytic regularity for a semilinear, elliptic PDE with analytic nonlinearity, subject to a source term f which is analytic in Ω. The boundary conditions on each edge of ∂Ω are either homogeneous Dirichlet or homogeneous Neumann BCs. The presently established weighted analytic regularity of solutions implies exponential convergence of various approximation schemes: hp-finite elements, reduced-order models via Kolmogorov n-widths of solution sets in H¹(Ω), quantized tensor formats, and certain deep neural networks.
Title: Analytic regularity and solution approximation for a semilinear elliptic partial differential equation in a polygon
Pub Date : 2024-01-03  DOI: 10.1007/s10092-023-00555-z
Jikun Zhao, Teng Chen, Bei Zhang, Xiaojing Dong
We present a Hermite-type virtual element method with interior penalty to solve the fourth-order elliptic problem over general polygonal meshes, where interior penalty terms are added to impose C¹ continuity. A C⁰-continuous Hermite-type virtual element with local H² regularity is constructed so that it can be used in the interior penalty scheme. We prove the boundedness of the basis functions and interpolation error estimates for the Hermite-type virtual element. After introducing a discrete energy norm, we establish the optimal convergence of the interior penalty scheme. Compared with some existing methods, the proposed interior penalty method uses fewer degrees of freedom. Finally, we verify the theoretical results through numerical examples.
Title: The Hermite-type virtual element method with interior penalty for the fourth-order elliptic problem