{"title":"用于 CP 张量分解的带重要性采样的分块随机方法","authors":"Yajie Yu, Hanyu Li","doi":"10.1007/s10444-024-10119-6","DOIUrl":null,"url":null,"abstract":"<div><p>One popular way to compute the CANDECOMP/PARAFAC (CP) decomposition of a tensor is to transform the problem into a sequence of overdetermined least squares subproblems with Khatri-Rao product (KRP) structure involving factor matrices. In this work, based on choosing the factor matrix randomly, we propose a mini-batch stochastic gradient descent method with importance sampling for those special least squares subproblems. Two different sampling strategies are provided. They can avoid forming the full KRP explicitly and computing the corresponding probabilities directly. The adaptive step size version of the method is also given. For the proposed method, we present its theoretical properties and comprehensive numerical performance. The results on synthetic and real data show that our method is effective and efficient, and for unevenly distributed data, it performs better than the corresponding one in the literature.</p></div>","PeriodicalId":50869,"journal":{"name":"Advances in Computational Mathematics","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A block-randomized stochastic method with importance sampling for CP tensor decomposition\",\"authors\":\"Yajie Yu, Hanyu Li\",\"doi\":\"10.1007/s10444-024-10119-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>One popular way to compute the CANDECOMP/PARAFAC (CP) decomposition of a tensor is to transform the problem into a sequence of overdetermined least squares subproblems with Khatri-Rao product (KRP) structure involving factor matrices. In this work, based on choosing the factor matrix randomly, we propose a mini-batch stochastic gradient descent method with importance sampling for those special least squares subproblems. Two different sampling strategies are provided. They can avoid forming the full KRP explicitly and computing the corresponding probabilities directly. The adaptive step size version of the method is also given. For the proposed method, we present its theoretical properties and comprehensive numerical performance. 
The results on synthetic and real data show that our method is effective and efficient, and for unevenly distributed data, it performs better than the corresponding one in the literature.</p></div>\",\"PeriodicalId\":50869,\"journal\":{\"name\":\"Advances in Computational Mathematics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-03-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Computational Mathematics\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10444-024-10119-6\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Computational Mathematics","FirstCategoryId":"100","ListUrlMain":"https://link.springer.com/article/10.1007/s10444-024-10119-6","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
A block-randomized stochastic method with importance sampling for CP tensor decomposition
One popular way to compute the CANDECOMP/PARAFAC (CP) decomposition of a tensor is to transform the problem into a sequence of overdetermined least squares subproblems with Khatri-Rao product (KRP) structure involving the factor matrices. In this work, choosing the factor matrix to update at random, we propose a mini-batch stochastic gradient descent method with importance sampling for these special least squares subproblems. Two different sampling strategies are provided; both avoid forming the full KRP explicitly and computing the corresponding probabilities directly. An adaptive step-size version of the method is also given. We present the theoretical properties of the proposed method together with a comprehensive numerical study. The results on synthetic and real data show that our method is effective and efficient, and that for unevenly distributed data it outperforms the corresponding method in the literature.
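To make the structure described in the abstract concrete, the following NumPy sketch illustrates one importance-sampled mini-batch SGD step for the mode-1 least squares subproblem min_A ||X_(1) - A (C ⊙ B)^T||_F^2 of a third-order CP decomposition. It is an illustration under our own assumptions, not the paper's algorithm: the sampling distribution shown (proportional to squared row norms of the other two factors, a common surrogate for KRP leverage scores), the fixed step size eta, and the names sample_krp_rows and sgd_step_mode1 are ours; the paper's two sampling strategies and its adaptive step size are not reproduced here.

```python
import numpy as np

def row_norm_probs(M):
    """Sampling probabilities proportional to the squared row norms of M."""
    w = np.sum(M ** 2, axis=1)
    return w / w.sum()

def sample_krp_rows(B, C, batch):
    """Sample `batch` rows of the Khatri-Rao product C ⊙ B without forming it.

    A row of C ⊙ B indexed by the pair (j, k) equals B[j, :] * C[k, :]
    (elementwise), so sampling j and k independently gives KRP rows whose
    probability factorizes as pB[j] * pC[k].
    """
    pB, pC = row_norm_probs(B), row_norm_probs(C)
    j = np.random.choice(B.shape[0], size=batch, p=pB)
    k = np.random.choice(C.shape[0], size=batch, p=pC)
    Z = B[j, :] * C[k, :]          # sampled KRP rows, shape (batch, R)
    p = pB[j] * pC[k]              # their sampling probabilities
    return j, k, Z, p

def sgd_step_mode1(X, A, B, C, batch=64, eta=0.1):
    """One importance-sampled mini-batch SGD step on the mode-1 factor A,
    for the (mean-scaled) subproblem min_A ||X_(1) - A @ (C ⊙ B).T||_F^2 / (2*J*K)."""
    J, K = B.shape[0], C.shape[0]
    j, k, Z, p = sample_krp_rows(B, C, batch)
    Xs = X[:, j, k]                          # matching mode-1 fibers of X, shape (I, batch)
    W = 1.0 / (batch * p * J * K)            # importance weights -> unbiased gradient estimate
    G = ((A @ Z.T - Xs) * W) @ Z             # stochastic gradient, shape (I, R)
    return A - eta * G

# Toy usage: recover the mode-1 factor of a noiseless rank-5 tensor
# while the other two factors are held at their true values.
rng = np.random.default_rng(0)
I, J, K, R = 30, 40, 50, 5
A0 = rng.standard_normal((I, R))
B0 = rng.standard_normal((J, R))
C0 = rng.standard_normal((K, R))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)   # X[i, j, k] = sum_r A0[i, r] * B0[j, r] * C0[k, r]

A = rng.standard_normal((I, R))
for _ in range(500):
    A = sgd_step_mode1(X, A, B0, C0)
print(np.linalg.norm(A - A0) / np.linalg.norm(A0))   # relative error should shrink on this noiseless toy
```

Sampling j and k independently is what lets such a scheme avoid both forming the full KRP (J*K rows) and computing a length-J*K probability vector, mirroring the property claimed in the abstract.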
Journal introduction:
Advances in Computational Mathematics publishes high-quality, accessible and original articles at the forefront of computational and applied mathematics, with a clear potential for impact across the sciences. The journal emphasizes three core areas: approximation theory and computational geometry; numerical analysis, modelling and simulation; imaging, signal processing and data analysis.
This journal welcomes papers that are accessible to a broad audience in the mathematical sciences and that show either an advance in computational methodology or a novel scientific application area, or both. Methods papers should rely on rigorous analysis and/or convincing numerical studies.