{"title":"正则张量分解中交替最小二乘的块方向加速","authors":"D. Evans, Nan Ye","doi":"10.1002/nla.2516","DOIUrl":null,"url":null,"abstract":"The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well‐known alternating least squares (ALS) algorithm is often considered the workhorse algorithm for computing the CP decomposition, it is known to suffer from slow convergence in many cases and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated ALS algorithm that accelerates ALS in a blockwise manner using a simple momentum‐based extrapolation technique and a random perturbation technique. Specifically, our algorithm updates one factor matrix (i.e., block) at a time, as in ALS, with each update consisting of a minimization step that directly reduces the reconstruction error, an extrapolation step that moves the factor matrix along the previous update direction, and a random perturbation step for breaking convergence bottlenecks. Our extrapolation strategy takes a simpler form than the state‐of‐the‐art extrapolation strategies and is easier to implement. Our algorithm has negligible computational overheads relative to ALS and is simple to apply. Empirically, our proposed algorithm shows strong performance as compared to the state‐of‐the‐art acceleration techniques on both simulated and real tensors.","PeriodicalId":49731,"journal":{"name":"Numerical Linear Algebra with Applications","volume":" ","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2023-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Blockwise acceleration of alternating least squares for canonical tensor decomposition\",\"authors\":\"D. Evans, Nan Ye\",\"doi\":\"10.1002/nla.2516\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well‐known alternating least squares (ALS) algorithm is often considered the workhorse algorithm for computing the CP decomposition, it is known to suffer from slow convergence in many cases and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated ALS algorithm that accelerates ALS in a blockwise manner using a simple momentum‐based extrapolation technique and a random perturbation technique. Specifically, our algorithm updates one factor matrix (i.e., block) at a time, as in ALS, with each update consisting of a minimization step that directly reduces the reconstruction error, an extrapolation step that moves the factor matrix along the previous update direction, and a random perturbation step for breaking convergence bottlenecks. Our extrapolation strategy takes a simpler form than the state‐of‐the‐art extrapolation strategies and is easier to implement. Our algorithm has negligible computational overheads relative to ALS and is simple to apply. 
Empirically, our proposed algorithm shows strong performance as compared to the state‐of‐the‐art acceleration techniques on both simulated and real tensors.\",\"PeriodicalId\":49731,\"journal\":{\"name\":\"Numerical Linear Algebra with Applications\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2023-06-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Numerical Linear Algebra with Applications\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1002/nla.2516\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Numerical Linear Algebra with Applications","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1002/nla.2516","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS","Score":null,"Total":0}
Blockwise acceleration of alternating least squares for canonical tensor decomposition
The canonical polyadic (CP) decomposition of tensors is one of the most important tensor decompositions. While the well-known alternating least squares (ALS) algorithm is often considered the workhorse algorithm for computing the CP decomposition, it is known to suffer from slow convergence in many cases, and various algorithms have been proposed to accelerate it. In this article, we propose a new accelerated ALS algorithm that accelerates ALS in a blockwise manner using a simple momentum-based extrapolation technique and a random perturbation technique. Specifically, our algorithm updates one factor matrix (i.e., block) at a time, as in ALS, with each update consisting of a minimization step that directly reduces the reconstruction error, an extrapolation step that moves the factor matrix along the previous update direction, and a random perturbation step for breaking convergence bottlenecks. Our extrapolation strategy takes a simpler form than the state-of-the-art extrapolation strategies and is easier to implement. Our algorithm has negligible computational overhead relative to ALS and is simple to apply. Empirically, our proposed algorithm shows strong performance compared with state-of-the-art acceleration techniques on both simulated and real tensors.
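The per-block update cycle the abstract describes (least-squares solve, momentum extrapolation, random perturbation) is straightforward to sketch. Below is a minimal NumPy illustration for a third-order tensor; the extrapolation weight `beta`, the perturbation scale `noise`, and all function names are illustrative assumptions, not the paper's actual parameters, schedules, or implementation.

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Khatri-Rao product: (I x R), (J x R) -> (I*J x R).
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def unfold(X, mode):
    # Mode-n unfolding of a third-order tensor into a matrix.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def accelerated_cp_als(X, rank, n_iter=200, beta=0.5, noise=1e-8, seed=0):
    # Blockwise-accelerated ALS sketch: each block update is an exact
    # least-squares solve, followed by momentum extrapolation and a
    # small random perturbation (parameters here are illustrative).
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in X.shape]
    prev = [f.copy() for f in factors]
    for _ in range(n_iter):
        for n in range(3):
            others = [factors[m] for m in range(3) if m != n]
            kr = khatri_rao(others[0], others[1])
            # Gram of a Khatri-Rao product = Hadamard product of the Grams.
            G = (others[0].T @ others[0]) * (others[1].T @ others[1])
            new = unfold(X, n) @ kr @ np.linalg.pinv(G)  # minimization step
            # Extrapolation step: move along the previous update direction.
            factors[n] = new + beta * (new - prev[n])
            # Random perturbation step: tiny noise to break bottlenecks.
            factors[n] += noise * rng.standard_normal(factors[n].shape)
            prev[n] = new
    return factors

# Synthetic check: recover a random rank-5 tensor of shape (20, 20, 20).
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((20, 5)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', A, B, C)
factors = accelerated_cp_als(X, rank=5)
```

One can verify the sketch by tracking the reconstruction error across sweeps; on such a synthetic low-rank tensor it should decrease rapidly, with the momentum term chiefly helping in the ill-conditioned cases where plain ALS stalls.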
Journal introduction:
Manuscripts submitted to Numerical Linear Algebra with Applications should include large-scale, broad-interest applications in which challenging computational results are integral to the approach investigated and analysed. Manuscripts that, in the Editor's view, do not satisfy these conditions will not be accepted for review.
Numerical Linear Algebra with Applications receives submissions that address the development, analysis, and application of linear algebra algorithms for problems arising in multilinear (tensor) algebra, in statistics (e.g., Markov chains), and in deterministic and stochastic modelling of large-scale networks, as well as algorithm development, performance analysis, and related computational aspects.
Topics covered include: standard and generalized conjugate gradients, multigrid, and other iterative methods; preconditioning methods; direct solution methods; numerical methods for eigenproblems; Newton-like methods for nonlinear equations; parallel and vectorizable algorithms in numerical linear algebra; and applications of numerical linear algebra methods in science, engineering, and economics.