Title: Analysis of the leapfrog-Verlet method applied to the Kuwabara-Kono force model in discrete element method simulations of granular materials
Authors: Gabriel Nóbrega Bufolo, Yuri Dumaresq Sobral
Pub Date: 2024-07-23 | DOI: 10.1007/s10444-024-10162-3
Advances in Computational Mathematics, 50(4)

The discrete element method (DEM) is a numerical technique widely used to simulate granular materials. The temporal evolution of these simulations is often performed with a Verlet-type algorithm, chosen for its second order of accuracy and its desirable property of better energy conservation. However, when dissipative forces are included in the model, such as the nonlinear Kuwabara-Kono model, the Verlet method no longer behaves as a second-order method; its order decreases to 1.5. This is caused by the singular behavior of the derivative of the damping force in the Kuwabara-Kono model at the beginning of particle collisions. In this work, we introduce a simplified problem that reproduces the singularity of the Kuwabara-Kono model and prove that the order of the method decreases from 2 to \(1+q\), where \(0 < q < 1\) is the exponent of the nonlinear singular term.
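The order reduction above is measured against the nominal second order of the leapfrog/velocity-Verlet scheme. As a minimal, self-contained sketch (a smooth harmonic-oscillator test of our own choosing, not the paper's singular collision model), the scheme's order can be estimated empirically by halving the step size and taking error ratios:

```python
import numpy as np

def velocity_verlet(f, x0, v0, dt, n_steps):
    """Integrate x'' = f(x) with the velocity-Verlet (leapfrog) scheme."""
    x, v = x0, v0
    a = f(x)
    for _ in range(n_steps):
        x = x + dt * v + 0.5 * dt**2 * a
        a_new = f(x)
        v = v + 0.5 * dt * (a + a_new)
        a = a_new
    return x, v

# Smooth test problem: harmonic oscillator x'' = -x, exact solution cos(t).
f = lambda x: -x
T = 1.0
errors = []
for n in (100, 200, 400):
    x_end, _ = velocity_verlet(f, 1.0, 0.0, T / n, n)
    errors.append(abs(x_end - np.cos(T)))

# Observed order p from consecutive errors, assuming e(h) ~ C h^p.
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
```

For this smooth right-hand side both estimates sit near 2; the paper's point is that a force term with a \(|x|^q\)-type singular derivative drags the same estimate down to about \(1+q\).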
Title: Randomized greedy magic point selection schemes for nonlinear model reduction
Authors: Ralf Zimmermann, Kai Cheng
Pub Date: 2024-07-22 | DOI: 10.1007/s10444-024-10172-1
Advances in Computational Mathematics, 50(4)
Open access PDF: https://link.springer.com/content/pdf/10.1007/s10444-024-10172-1.pdf

An established way to tackle model nonlinearities in projection-based model reduction is to rely on partial information. This idea is shared by the methods of gappy proper orthogonal decomposition (POD), missing point estimation (MPE), masked projection, hyper-reduction, and the (discrete) empirical interpolation method (DEIM). The selected indices of the partial information components are often referred to as "magic points." The original contribution of this work is a novel randomized greedy magic point selection. The greedy method is known to be associated with minimizing the norm of an oblique projection operator, which in turn amounts to solving a sequence of rank-one SVD update problems. We propose simplification measures so that the resulting greedy point selection has two main features: (1) the inherent rank-one SVD update problem is tackled in a way such that its dimension does not grow with the number of selected magic points, and (2) the approach is online efficient, in the sense that the computational costs are independent of the dimension of the full-scale model. To the best of our knowledge, this is the first greedy magic point selection with this property. We illustrate the findings with numerical examples and find that the computational cost of the proposed method is orders of magnitude lower than that of its deterministic counterpart, while the prediction accuracy is just as good, if not better. When compared to a state-of-the-art randomized method based on leverage scores, the randomized greedy method outperforms its competitor.
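The deterministic baseline being randomized here is the classical greedy DEIM point selection. A minimal numpy sketch of that standard algorithm (the snapshot setup is our own illustrative choice):

```python
import numpy as np

def deim_points(U):
    """Classical deterministic DEIM greedy selection of magic points.
    U: (n, m) basis matrix with orthonormal columns; returns m row indices."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the j-th basis vector at the points chosen so far,
        # then pick the location of the largest residual.
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Snapshots: scaled sine modes on a uniform grid (mutually orthogonal here).
x = np.linspace(0, 1, 200)
snaps = np.column_stack([np.sin((k + 1) * np.pi * x) / (k + 1) ** 2
                         for k in range(8)])
U = np.linalg.svd(snaps, full_matrices=False)[0][:, :5]
p = deim_points(U)

# Oblique projection: reconstruct a function lying in the reduced space
# from its values at the 5 magic points only.
f = np.sin(2 * np.pi * x)
c = np.linalg.solve(U[p, :], f[p])
err = np.linalg.norm(U @ c - f) / np.linalg.norm(f)
```

The cost of the greedy loop grows with the number of selected points; the paper's randomized variant is designed precisely to avoid that growth while keeping comparable accuracy.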
Title: The \(L_q\)-weighted dual programming of the linear Chebyshev approximation and an interior-point method
Authors: Yang Linyi, Zhang Lei-Hong, Zhang Ya-Nan
Pub Date: 2024-07-22 | DOI: 10.1007/s10444-024-10177-w
Advances in Computational Mathematics, 50(4)

Given samples of a real or complex-valued function on a set of distinct nodes, the traditional linear Chebyshev approximation is to compute the minimax approximation on a prescribed linear functional space. Lawson's iteration is a classical and well-known method for this task, but it converges only linearly, and in many cases the convergence is very slow. In this paper, relying upon Lagrange duality, we establish an \(L_q\)-weighted dual programming formulation of the discrete linear Chebyshev approximation. Within this dual framework, we revisit the convergence of Lawson's iteration and provide a new and self-contained proof of the well-known Alternation Theorem in the real case; moreover, we propose a Newton-type iteration, the interior-point method, to solve the \(L_2\)-weighted dual programming. Numerical experiments demonstrate its fast convergence and its capability of finding the reference points that characterize the unique minimax approximation.
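For context, classical Lawson iteration is an iteratively reweighted least-squares loop with a multiplicative weight update. A short sketch (test problem and iteration count are our own choices; this is the slow baseline the paper's interior-point method is meant to replace):

```python
import numpy as np

def lawson(A, b, iters=200):
    """Classical Lawson iteration: reweighted least squares whose weights
    are updated multiplicatively by the residual magnitudes; converges
    (linearly) toward the discrete minimax solution of A x ~ b."""
    m = A.shape[0]
    w = np.full(m, 1.0 / m)
    for _ in range(iters):
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
        r = np.abs(b - A @ x)
        w = w * r
        w = w / w.sum()          # Lawson's multiplicative weight update
    return x

# Degree-5 polynomial fit to |t| on 100 nodes.
t = np.linspace(-1, 1, 100)
A = np.vander(t, 6, increasing=True)
b = np.abs(t)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]     # plain least squares
x_law = lawson(A, b)
max_err_ls = np.max(np.abs(b - A @ x_ls))
max_err_lawson = np.max(np.abs(b - A @ x_law))
```

The reweighted solution pushes the worst-case residual below that of the plain least-squares fit, at the price of many cheap iterations.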
Title: On Krylov subspace methods for skew-symmetric and shifted skew-symmetric linear systems
Authors: Kui Du, Jia-Jun Fan, Xiao-Hui Sun, Fang Wang, Ya-Lan Zhang
Pub Date: 2024-07-19 | DOI: 10.1007/s10444-024-10178-9
Advances in Computational Mathematics, 50(4)

Krylov subspace methods for solving linear systems of equations involving skew-symmetric matrices have gained recent attention. Numerical equivalences among Krylov subspace methods for nonsingular skew-symmetric linear systems were given in Greif et al. [SIAM J. Matrix Anal. Appl., 37 (2016), pp. 1071–1087]. In this work, we extend the results of Greif et al. to singular skew-symmetric linear systems. In addition, we systematically study three Krylov subspace methods (called S\(^3\)CG, S\(^3\)MR, and S\(^3\)LQ) for solving shifted skew-symmetric linear systems. All three are based on Lanczos tridiagonalization for skew-symmetric matrices and correspond to CG, MINRES, and SYMMLQ for symmetric linear systems, respectively. To the best of our knowledge, this is the first work to study S\(^3\)LQ. We give new theoretical results on S\(^3\)CG, S\(^3\)MR, and S\(^3\)LQ, and provide relations among the three methods and those based on Golub–Kahan bidiagonalization and Saunders–Simon–Yip tridiagonalization. Numerical examples illustrate our theoretical findings.
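The common engine behind these methods is the Lanczos process for skew-symmetric matrices, which produces a tridiagonal projection with zero diagonal via a two-term recurrence. A minimal sketch (random test matrix and dimensions are our own choices):

```python
import numpy as np

def skew_lanczos(A, b, k):
    """Lanczos tridiagonalization of a real skew-symmetric A.
    Returns V (n, k) with orthonormal columns and subdiagonal entries beta,
    so that V.T @ A @ V is tridiagonal with zero diagonal (skew-symmetric)."""
    n = len(b)
    V = np.zeros((n, k))
    beta = np.zeros(k - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k - 1):
        w = A @ V[:, j]
        if j > 0:
            # Two-term recurrence: A v_j = beta_j v_{j+1} - beta_{j-1} v_{j-1},
            # since v_j^T A v_j = 0 for skew-symmetric A.
            w += beta[j - 1] * V[:, j - 1]
        beta[j] = np.linalg.norm(w)
        V[:, j + 1] = w / beta[j]
    return V, beta

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M - M.T                       # skew-symmetric test matrix
V, beta = skew_lanczos(A, rng.standard_normal(50), 8)
T = V.T @ A @ V                   # tridiagonal, zero diagonal, T = -T.T
```

The shifted systems in the paper have coefficient matrices of the form \(\alpha I + A\); projecting them onto this basis yields small shifted tridiagonal systems, which is what S\(^3\)CG, S\(^3\)MR, and S\(^3\)LQ exploit.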
Title: Adaptive choice of near-optimal expansion points for interpolation-based structure-preserving model reduction
Authors: Quirin Aumann, Steffen W. R. Werner
Pub Date: 2024-07-19 | DOI: 10.1007/s10444-024-10166-z
Advances in Computational Mathematics, 50(4)
Open access PDF: https://link.springer.com/content/pdf/10.1007/s10444-024-10166-z.pdf

Interpolation-based methods are well-established and effective approaches for the efficient generation of accurate reduced-order surrogate models. Common challenges for such methods are the automatic selection of good or even optimal interpolation points and of the appropriate size of the reduced-order model. An approach that addresses the first problem for linear, unstructured systems is the iterative rational Krylov algorithm (IRKA), which computes optimal interpolation points through iterative updates by solving linear eigenvalue problems. In the case of preserving internal system structures, however, optimal interpolation points are unknown, and heuristics based on nonlinear eigenvalue problems yield numbers of potential interpolation points that typically exceed the reasonable size of reduced-order systems. In this work, we propose a projection-based iterative interpolation method inspired by IRKA for generally structured systems that adaptively computes near-optimal interpolation points as well as an appropriate size for the reduced-order system. Additionally, the iterative updates of the interpolation points can be chosen such that the reduced-order model provides an accurate approximation in specified frequency ranges of interest. For such applications, our new approach outperforms established methods in terms of accuracy and computational effort, as we show in numerical examples with different structures.
Title: Randomized GCUR decompositions
Authors: Zhengbang Cao, Yimin Wei, Pengpeng Xie
Pub Date: 2024-07-18 | DOI: 10.1007/s10444-024-10168-x
Advances in Computational Mathematics, 50(4)

By exploiting random sampling techniques, this paper derives an efficient randomized algorithm for computing a generalized CUR (GCUR) decomposition, which provides low-rank approximations of two matrices simultaneously in terms of some of their rows and columns. For large-scale data sets that are expensive to store and manipulate, we also combine the random sampling approach with a recent variant of the discrete empirical interpolation method known as L-DEIM, which has a much lower cost and provides a significant acceleration in practice, to further improve the efficiency of our algorithm. Moreover, by adopting the randomized algorithm to implement the truncation step of the restricted singular value decomposition (RSVD) and combining it with the L-DEIM procedure, we propose a fast algorithm for computing an RSVD-based CUR decomposition, which provides a coordinated low-rank approximation of three matrices simultaneously in a CUR-type format and offers advantages over the standard CUR approximation in some applications. We establish detailed probabilistic error analyses for the algorithms and provide numerical results that show the promise of our approaches.
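For readers new to CUR-type factorizations, the single-matrix case is a useful reference point: pick actual rows and columns of \(A\), then couple them through a small core matrix. A toy sketch using leverage-score selection (this is the plain CUR baseline, not the paper's generalized or randomized algorithm):

```python
import numpy as np

def cur(A, k):
    """Simple CUR sketch: keep the k rows/columns with the largest rank-k
    leverage scores, then form the core U = C^+ A R^+."""
    Us, _, Vt = np.linalg.svd(A, full_matrices=False)
    rows = np.argsort(-np.sum(Us[:, :k] ** 2, axis=1))[:k]
    cols = np.argsort(-np.sum(Vt[:k, :] ** 2, axis=0))[:k]
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

# Exactly rank-8 test matrix: CUR reproduces it to machine precision,
# because C and R then span the full column and row spaces.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 40))
C, U, R = cur(A, 8)
err = np.linalg.norm(C @ U @ R - A) / np.linalg.norm(A)
```

Unlike an SVD, the factors C and R here are columns and rows of the data itself, which is what makes CUR-type formats interpretable; the GCUR extends this coupling to a pair of matrices.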
Title: Macro-micro decomposition for consistent and conservative model order reduction of hyperbolic shallow water moment equations: a study using POD-Galerkin and dynamical low-rank approximation
Authors: Julian Koellermeier, Philipp Krah, Jonas Kusch
Pub Date: 2024-07-16 | DOI: 10.1007/s10444-024-10175-y
Advances in Computational Mathematics, 50(4)
Open access PDF: https://link.springer.com/content/pdf/10.1007/s10444-024-10175-y.pdf

Geophysical flow simulations using hyperbolic shallow water moment equations require an efficient discretization of a potentially large system of PDEs, the so-called moment system. This calls for tailored model order reduction techniques that allow for efficient and accurate simulations while guaranteeing physical properties like mass conservation. In this paper, we develop the first model reduction for the hyperbolic shallow water moment equations and achieve mass conservation. This is accomplished by a macro-micro decomposition of the model into a macroscopic (conservative) part and a microscopic (non-conservative) part, with subsequent model reduction, using either POD-Galerkin or dynamical low-rank approximation, applied only to the microscopic (non-conservative) part. Numerical experiments showcase the performance of the new model reduction methods, including high accuracy and fast computation times together with guaranteed conservation and consistency properties.
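The conservation mechanism can be illustrated on snapshot data alone: split each snapshot into its mass-carrying mean (macro part) and a zero-mass fluctuation (micro part), and truncate only the fluctuation. A toy sketch with an advected profile (our own setup, not the paper's moment system):

```python
import numpy as np

# Snapshots of a periodically advected Gaussian on a uniform periodic grid.
n, m = 128, 40
x = np.linspace(0, 1, n, endpoint=False)
dist = lambda t: np.minimum(np.abs(x - t), 1 - np.abs(x - t))
snaps = np.column_stack([np.exp(-100 * dist(t) ** 2)
                         for t in np.linspace(0, 1, m, endpoint=False)])

# Macro part: the cell mean (carries the conserved mass).
# Micro part: the zero-mean remainder, which is all we compress.
macro = snaps.mean(axis=0, keepdims=True)
micro = snaps - macro

# POD of the micro part only, truncated to r modes.
U, sig, Vt = np.linalg.svd(micro, full_matrices=False)
r = 20
recon = macro + U[:, :r] @ np.diag(sig[:r]) @ Vt[:r, :]

# Every POD mode of `micro` has zero column sum (it lies in the range of
# zero-mean columns), so the grid mass of `recon` equals that of `snaps`
# exactly, for ANY truncation rank r.
mass_err = np.max(np.abs(recon.sum(axis=0) - snaps.sum(axis=0)))
rel_err = np.linalg.norm(recon - snaps) / np.linalg.norm(snaps)
```

The paper applies this idea at the level of the governing equations (with POD-Galerkin or dynamical low-rank approximation on the micro part) rather than to stored snapshots, but the conservation argument is the same.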
Title: Augmented Lagrangian method for tensor low-rank and sparsity models in multi-dimensional image recovery
Authors: Hong Zhu, Xiaoxia Liu, Lin Huang, Zhaosong Lu, Jian Lu, Michael K. Ng
Pub Date: 2024-07-16 | DOI: 10.1007/s10444-024-10170-3
Advances in Computational Mathematics, 50(4)

Multi-dimensional images can be viewed as tensors and often embed a low-rankness property that can be evaluated by tensor low-rank measures. In this paper, we first introduce a tensor low-rank and sparsity measure and then propose low-rank and sparsity models for tensor completion, tensor robust principal component analysis, and tensor denoising. The resulting tensor recovery models are solved by the augmented Lagrangian method with a convergence guarantee, and its augmented Lagrangian subproblem is computed by a proximal alternating method in which each variable has a closed-form solution. Numerical experiments on several multi-dimensional image recovery applications show the superiority of the proposed methods over state-of-the-art methods in terms of several quantitative quality indices and visual quality.
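The closed-form subproblem solutions mentioned above are, in the matrix case, the proximal operators of the nuclear norm (singular value soft-thresholding) and of the \(\ell_1\) norm (entrywise soft-thresholding). A sketch of both, driving the standard inexact-ALM robust PCA loop on a toy matrix (this is the classical matrix algorithm, not the paper's tensor method; parameters follow common defaults):

```python
import numpy as np

def svt(X, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Prox of tau * l1 norm: entrywise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Toy robust PCA data: D = L0 (rank 5) + S0 (5% sparse, large entries).
rng = np.random.default_rng(0)
L0 = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))
S0 = np.zeros((40, 40))
S0.flat[rng.choice(1600, 80, replace=False)] = 10 * rng.standard_normal(80)
D = L0 + S0

# Inexact augmented Lagrangian loop with common parameter choices.
lam = 1 / np.sqrt(40)
mu = 1.25 / np.linalg.norm(D, 2)
Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
L = np.zeros_like(D)
S = np.zeros_like(D)
for _ in range(60):
    L = svt(D - S + Y / mu, 1 / mu)        # closed-form low-rank update
    S = soft(D - L + Y / mu, lam / mu)     # closed-form sparse update
    Y = Y + mu * (D - L - S)               # multiplier update
    mu *= 1.5

rel_err = np.linalg.norm(L - L0) / np.linalg.norm(L0)
feas = np.linalg.norm(D - L - S) / np.linalg.norm(D)

# Sanity check of the nuclear-norm prox: it shifts singular values down.
sv_before = np.linalg.svd(D, compute_uv=False)
sv_after = np.linalg.svd(svt(D, 1.0), compute_uv=False)
```

Each prox has a closed form, which is exactly what makes the alternating subproblem updates cheap; the paper generalizes this structure to its tensor low-rank and sparsity measure.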
Title: A continuation method for fitting a bandlimited curve to points in the plane
Authors: Mohan Zhao, Kirill Serkh
Pub Date: 2024-07-16 | DOI: 10.1007/s10444-024-10144-5
Advances in Computational Mathematics, 50(4)

In this paper, we describe an algorithm for fitting an analytic and bandlimited closed or open curve to interpolate an arbitrary collection of points in \(\mathbb{R}^2\). The main idea is to smooth the parametrization of the curve by iteratively filtering the Fourier or Chebyshev coefficients of both the derivative of the arc-length function and the tangential angle of the curve, and applying smooth perturbations after each filtering step, until the curve is represented by a reasonably small number of coefficients. The algorithm produces a curve passing through the set of points to an accuracy of machine precision after a limited number of iterations. It costs \(O(N \log N)\) operations at each iteration, where \(N\) is the number of discretization nodes. The resulting curves are smooth, affine invariant, and visually appealing, and do not exhibit any ringing artifacts. The bandwidths of the constructed curves are much smaller than those of curves constructed by previous methods. We demonstrate the performance of our algorithm with several numerical experiments.
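The basic primitive behind such constructions is representing a closed curve by a truncated Fourier series of its coordinates and low-pass filtering that representation. A toy version (the paper instead filters the arc-length derivative and tangential angle and iterates with smooth perturbations to retain interpolation):

```python
import numpy as np

def lowpass_curve(points, n_modes):
    """Low-pass filter a closed curve sampled at equispaced parameter values:
    keep only Fourier modes |k| <= n_modes of z = x + i*y."""
    z = points[:, 0] + 1j * points[:, 1]
    zh = np.fft.fft(z)
    zh_f = np.zeros_like(zh)
    keep = list(range(n_modes + 1)) + list(range(-n_modes, 0))
    zh_f[keep] = zh[keep]            # negative indices: top of the spectrum
    return np.fft.ifft(zh_f)

# Samples of an exactly bandlimited closed curve (an ellipse, bandwidth 1).
m = 64
t = 2 * np.pi * np.arange(m) / m
pts = np.column_stack([2 * np.cos(t), np.sin(t)])

# A bandwidth-3 filter leaves the ellipse unchanged; for noisy or irregular
# points the filtered curve is smooth but no longer interpolates, which is
# the gap the paper's continuation method closes.
z_f = lowpass_curve(pts, n_modes=3)
```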
Title: Finding roots of complex analytic functions via generalized colleague matrices
Authors: H. Zhang, V. Rokhlin
Pub Date: 2024-07-15 | DOI: 10.1007/s10444-024-10174-z
Advances in Computational Mathematics, 50(4)

We present a scheme for finding all roots of an analytic function in a square domain in the complex plane. The scheme can be viewed as a generalization of the classical approach to finding roots of a function on the real line: first approximate the function by a polynomial in the Chebyshev basis, then diagonalize the so-called "colleague matrix." Our extension of the classical approach is based on several observations that enable the construction of polynomial bases in compact domains that satisfy three-term recurrences and are reasonably well-conditioned. This class of polynomial bases gives rise to "generalized colleague matrices," whose eigenvalues are the roots of functions expressed in these bases. We also introduce a special-purpose QR algorithm for finding the eigenvalues of generalized colleague matrices, a straightforward extension of the recently introduced structured stable QR algorithm for the classical case (see Serkh and Rokhlin 2021). The performance of the schemes is illustrated with several numerical examples.
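The classical real-line version being generalized can be written in a few lines: for \(p(x) = \sum_{k=0}^{n} a_k T_k(x)\), the roots of \(p\) are the eigenvalues of the colleague matrix built from the Chebyshev recurrence \(x T_k = \tfrac{1}{2}(T_{k+1} + T_{k-1})\). A sketch of that baseline (the paper's contribution is the square-domain, generalized-basis analogue and a structured QR for it):

```python
import numpy as np

def colleague_roots(a):
    """Roots of the Chebyshev series sum_k a[k] * T_k(x) as eigenvalues of
    the classical colleague matrix (tridiagonal plus a last-row correction)."""
    n = len(a) - 1
    C = np.zeros((n, n))
    C[0, 1] = 1.0                       # x T_0 = T_1
    for k in range(1, n):
        C[k, k - 1] = 0.5               # x T_k = (T_{k+1} + T_{k-1}) / 2
        if k + 1 < n:
            C[k, k + 1] = 0.5
    C[n - 1, :] -= a[:n] / (2 * a[n])   # fold in the series coefficients
    return np.linalg.eigvals(C)

# Chebyshev coefficients of f(x) = x^2 - 1/4 on [-1, 1]:
# x^2 = (T_0 + T_2)/2, so f = 0.25*T_0 + 0.5*T_2, with roots +/- 0.5.
a = np.array([0.25, 0.0, 0.5])
roots = np.sort(colleague_roots(a).real)
```

In practice the coefficients `a` would come from interpolating a smooth function at Chebyshev nodes; the eigenvalue step is then the rootfinding step this paper carries over to compact domains in the complex plane.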