Sine transform based preconditioning techniques for space fractional diffusion equations
H. Qin, Hong-Kui Pang, Hai-wei Sun
We study preconditioned iterative methods for the linear systems arising from the numerical solution of multi-dimensional space fractional diffusion equations. A sine transform based preconditioning technique is developed according to the symmetric and skew-symmetric splitting of the Toeplitz factor in the resulting coefficient matrix. Theoretical analyses show that the upper bound on the relative residual norm of the GMRES method applied to the preconditioned linear system is mesh-independent, which implies linear convergence. Numerical experiments illustrate the correctness of the theoretical results and the effectiveness of the proposed preconditioning technique.
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2474. Published 2022-10-07.
Three adaptive hybrid derivative-free projection methods for constrained monotone nonlinear equations and their applications
Peng Liu, Xiaoyu Wu, H. Shao, Yan Zhang, Shuhan Cao
In this work, by combining hyperplane projection and hybrid techniques, three scaled three-term conjugate gradient methods are extended to solve systems of constrained monotone nonlinear equations. The developed methods have the advantages of low storage and of using only function values. The new methods satisfy the sufficient descent condition independently of any line search criterion, and all three are proved to converge globally under mild conditions. Numerical experiments on constrained monotone nonlinear equations and image deblurring problems illustrate that the proposed methods are effective and efficient.
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2471. Published 2022-10-05.
On the evaluation of general sparse hybrid linear solvers
Afrah Farea, M. S. Çelebi
General sparse hybrid solvers are commonly used kernels for solving a wide range of scientific and engineering problems. This work addresses the problem of efficiently solving general sparse linear systems with direct/iterative hybrid solvers on many-core distributed clusters. We briefly discuss the solution stages of the Maphys, HIPS, and PDSLin hybrid solvers for large sparse linear systems, together with their major algorithmic differences. In this category of solvers, different methods with sophisticated preconditioning algorithms are proposed to balance the trade-off between memory usage and convergence. Such solvers require a hierarchical level of parallelism, well suited to modern supercomputers, that allows scaling to thousands of processors within a Schur complement framework. We study the effect of reordering and analyze the performance, scalability, and memory consumption of each solve phase of PDSLin, Maphys, and HIPS on a large set of challenging matrices arising from real applications, and we compare the results with the SuperLU_DIST direct solver. We specifically focus on the levels of parallelism used by the hybrid solvers and their effect on scalability. The Tuning and Analysis Utilities (TAU) toolkit is employed to profile heap memory usage and to measure communication volume. The tests are run on high-performance, large-memory clusters using up to 512 processors.
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2469. Published 2022-09-28.
The analytic connectivity in uniform hypergraphs: Properties and computation
Chunfeng Cui, Ziyan Luo, L. Qi, Hong Yan
The analytic connectivity (AC), defined via a series of constrained polynomial optimization problems, serves as a measure of connectivity in hypergraphs. Computing this quantity efficiently is important in practice and theoretically challenging, owing to the non-convex and combinatorial features of its definition. In this article, we first carefully analyze several widely used structured hypergraphs in terms of their properties and heuristic upper bounds on their ACs. We then present an affine-scaling method to compute upper bounds on the ACs of uniform hypergraphs. To test the tightness of the obtained upper bounds, two approaches, one via the Pólya theorem and one via semidefinite programming, are proposed to verify the lower bounds obtained by subtracting a small gap from the computed upper bounds. Numerical experiments on synthetic datasets demonstrate the efficiency of the proposed method. Furthermore, we apply the method to hypergraphs constructed from social networks and from text analysis to detect network connectivity and to rank keywords, respectively.
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2468. Published 2022-09-20.
Computing f-divergences and distances of high-dimensional probability density functions
A. Litvinenko, Y. Marzouk, H. Matthies, M. Scavino, Alessio Spantini
Very often, in uncertainty quantification tasks or data analysis, one has to deal with high-dimensional random variables. The interest here is mainly in computing characterizations such as the entropy, the Kullback-Leibler divergence, more general f-divergences, or other characteristics based on the probability density. The density is often not available directly, and it is already a computational challenge to represent it in a numerically feasible fashion when the dimension is even moderately large; actually computing the desired characteristics in the high-dimensional case is an even harder numerical challenge. It is therefore proposed to approximate the discretized density in a compressed form, in particular by a low-rank tensor. This approximation can alternatively be obtained from the corresponding probability characteristic function, or from more general representations of the underlying random variable. The characterizations mentioned require point-wise functions such as the logarithm. This normally trivial task becomes computationally difficult when the density is approximated in a compressed, low-rank tensor format, as the point values are not directly accessible. The computations become possible by regarding the compressed data as an element of an associative, commutative algebra with an inner product and by using matrix algorithms to accomplish these tasks. Representation as a low-rank element of a high-order tensor space allows the computational complexity and storage cost to be reduced from exponential in the dimension to almost linear.
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2467. Published 2022-09-06.
Lower and upper bounds of condition number for Vandermonde-wise matrices and method of fundamental solutions using pseudo radial-lines
Li-Ping Zhang, Zi-Cai Li, Ming-Gong Lee, Hung-Tsai Huang
Consider the method of fundamental solutions (MFS) for 2D Laplace's equation in a bounded simply connected domain S. In the standard MFS, the source nodes are located on a closed contour outside the domain boundary Γ (= ∂S), called the pseudo-boundary. For circular, elliptic, and general closed pseudo-boundaries, analysis and computation have been studied extensively. New locations of source nodes are proposed along two pseudo radial-lines outside Γ. Numerical results are very encouraging and promising. Since the success of the MFS depends mainly on stability, our efforts focus on deriving lower and upper bounds of the condition number (Cond). The study establishes stability properties of new Vandermonde-wise matrices on nodes x_i ∈ [a, b] with 0
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2466. Published 2022-09-06.
{"title":"Issue Information","authors":"","doi":"10.1002/nla.2396","DOIUrl":"https://doi.org/10.1002/nla.2396","url":null,"abstract":"","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45181838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low Tucker rank tensor completion using a symmetric block coordinate descent method
Quan Yu, Xinzhen Zhang, Yannan Chen, Liqun Qi
Low Tucker rank tensor completion has wide applications in science and engineering. Many existing approaches handle the Tucker rank through the ranks of the unfolding matrices. However, unfolding a tensor into a matrix destroys the data's original multi-way structure, resulting in loss of vital information and degraded performance. In this article, we establish a relationship between the Tucker ranks and the ranks of the factor matrices in the Tucker decomposition. We then reformulate the low Tucker rank tensor completion problem as a multilinear low-rank matrix completion problem. For the reformulated problem, a symmetric block coordinate descent method is customized, and the classical truncated nuclear norm minimization is adopted for each matrix rank minimization subproblem. Furthermore, temporal characteristics of image and video data are incorporated into the model, which benefits the performance of the method. Numerical simulations illustrate the efficiency of the proposed models and methods.
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2464. Published 2022-08-16.
On finding strong approximate inverses for tensors
Eisa Khosravi Dehdezi, S. Karimi
This article investigates a fast and highly efficient algorithm for finding a strong approximate inverse of an invertible tensor. The convergence analysis shows that the proposed method has convergence order ten while using only six tensor-tensor multiplications per iteration. We also obtain a bound on the perturbation error in each iteration. We show that the proposed algorithm can be used to find the Moore-Penrose and outer inverses of tensors, and we obtain the relationship between the singular values of an arbitrary tensor 𝒜 and the eigenvalues of 𝒜^∗ ⋆_N 𝒜. We give the computational complexity of the algorithm and prove the theoretical results of the article. The generalized Moore-Penrose inverse of tensors is defined. As an application, we use the iterates produced by the algorithm as preconditioners for Krylov subspace methods to solve the multilinear system 𝒜 ⋆_N 𝒳 = ℬ. Several numerical experiments show the effectiveness and accuracy of the method. Finally, we give some concluding remarks.
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2460. Published 2022-07-13.
High-order multigrid strategies for hybrid high-order discretizations of elliptic equations
D. D. Pietro, P. Matalon, Paul Mycek, U. Rüde
This study compares various multigrid strategies for the fast solution of elliptic equations discretized by the hybrid high-order (HHO) method. Combinations of h-, p-, and hp-coarsening strategies are considered, combined with diverse intergrid transfer operators. Comparisons are made experimentally on 2D and 3D test cases, with structured and unstructured meshes, and with nested and non-nested hierarchies. The advantages and drawbacks of each strategy are discussed for each case to establish simplified guidelines for minimizing the time to solution.
Numerical Linear Algebra with Applications, DOI: 10.1002/nla.2456. Published 2022-06-22.