Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation

Wenjin Qin;Hailin Wang;Feng Zhang;Weijun Ma;Jianjun Wang;Tingwen Huang
{"title":"使用随机低库近似的非凸稳健高阶张量补全","authors":"Wenjin Qin;Hailin Wang;Feng Zhang;Weijun Ma;Jianjun Wang;Tingwen Huang","doi":"10.1109/TIP.2024.3385284","DOIUrl":null,"url":null,"abstract":"Within the \n<italic>tensor singular value decomposition</i>\n (T-SVD) framework, existing robust low-rank tensor completion approaches have made great achievements in various areas of science and engineering. Nevertheless, these methods involve the T-SVD based low-rank approximation, which suffers from high computational costs when dealing with large-scale tensor data. Moreover, most of them are only applicable to third-order tensors. Against these issues, in this article, two efficient low-rank tensor approximation approaches fusing random projection techniques are first devised under the order-\n<italic>d</i>\n (\n<inline-formula> <tex-math>$d\\geq 3$ </tex-math></inline-formula>\n) T-SVD framework. Theoretical results on error bounds for the proposed randomized algorithms are provided. On this basis, we then further investigate the robust high-order tensor completion problem, in which a double nonconvex model along with its corresponding fast optimization algorithms with convergence guarantees are developed. Experimental results on large-scale synthetic and real tensor data illustrate that the proposed method outperforms other state-of-the-art approaches in terms of both computational efficiency and estimated precision.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"2835-2850"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation\",\"authors\":\"Wenjin Qin;Hailin Wang;Feng Zhang;Weijun Ma;Jianjun Wang;Tingwen Huang\",\"doi\":\"10.1109/TIP.2024.3385284\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Within the \\n<italic>tensor singular value decomposition</i>\\n (T-SVD) framework, existing robust low-rank tensor completion approaches have made great achievements in various areas of science and engineering. Nevertheless, these methods involve the T-SVD based low-rank approximation, which suffers from high computational costs when dealing with large-scale tensor data. Moreover, most of them are only applicable to third-order tensors. Against these issues, in this article, two efficient low-rank tensor approximation approaches fusing random projection techniques are first devised under the order-\\n<italic>d</i>\\n (\\n<inline-formula> <tex-math>$d\\\\geq 3$ </tex-math></inline-formula>\\n) T-SVD framework. Theoretical results on error bounds for the proposed randomized algorithms are provided. On this basis, we then further investigate the robust high-order tensor completion problem, in which a double nonconvex model along with its corresponding fast optimization algorithms with convergence guarantees are developed. 
Experimental results on large-scale synthetic and real tensor data illustrate that the proposed method outperforms other state-of-the-art approaches in terms of both computational efficiency and estimated precision.\",\"PeriodicalId\":94032,\"journal\":{\"name\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"volume\":\"33 \",\"pages\":\"2835-2850\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10496551/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10496551/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Within the tensor singular value decomposition (T-SVD) framework, existing robust low-rank tensor completion approaches have achieved notable success in various areas of science and engineering. Nevertheless, these methods rely on T-SVD-based low-rank approximation, which incurs high computational costs on large-scale tensor data, and most of them are applicable only to third-order tensors. To address these issues, this article first devises two efficient low-rank tensor approximation approaches that fuse random projection techniques under the order-d (d ≥ 3) T-SVD framework, and provides theoretical error bounds for the proposed randomized algorithms. On this basis, the robust high-order tensor completion problem is further investigated, for which a double nonconvex model and corresponding fast optimization algorithms with convergence guarantees are developed. Experimental results on large-scale synthetic and real tensor data show that the proposed method outperforms other state-of-the-art approaches in both computational efficiency and estimation accuracy.
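To make the randomized low-rank step concrete, the sketch below illustrates one common way to build a randomized low-tubal-rank approximation under the third-order T-SVD (t-product) framework: transform along the third mode to the Fourier domain, compress each frontal slice with a Gaussian random projection, and truncate the SVD of the resulting small sketch. This is a minimal illustration of the general technique under stated assumptions, not the paper's exact order-d algorithm; the function names (tprod, randomized_tsvd_approx) and the oversampling parameter are hypothetical choices for this example.

```python
import numpy as np

def tprod(A, B):
    """t-product of third-order tensors A (n1 x r x n3) and B (r x n2 x n3),
    computed slice-wise in the Fourier domain along the third mode."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def randomized_tsvd_approx(X, target_rank, oversample=10):
    """Randomized low-tubal-rank approximation of a third-order tensor X
    (n1 x n2 x n3) under the T-SVD framework. Illustrative sketch only."""
    n1, n2, n3 = X.shape
    k = min(target_rank + oversample, min(n1, n2))
    Xf = np.fft.fft(X, axis=2)
    Lf = np.zeros_like(Xf)
    rng = np.random.default_rng()
    for i in range(n3):
        A = Xf[:, :, i]
        # Gaussian sketch of the column space of the i-th Fourier slice.
        Omega = rng.standard_normal((n2, k)) + 1j * rng.standard_normal((n2, k))
        Q, _ = np.linalg.qr(A @ Omega)       # (n1, k) orthonormal range basis
        B = Q.conj().T @ A                   # small (k, n2) projected slice
        U, s, Vh = np.linalg.svd(B, full_matrices=False)
        r = min(target_rank, s.size)
        # Recombine the truncated factors and lift back to the full size.
        Lf[:, :, i] = (Q @ U[:, :r]) * s[:r] @ Vh[:r, :]
    # Independent per-slice sketches leave a tiny imaginary residue; drop it.
    return np.real(np.fft.ifft(Lf, axis=2))

if __name__ == "__main__":
    # Build a tensor of tubal rank 5 plus small noise, then approximate it.
    rng = np.random.default_rng(0)
    n1, n2, n3, r = 100, 100, 20, 5
    G = tprod(rng.standard_normal((n1, r, n3)), rng.standard_normal((r, n2, n3)))
    X = G + 0.01 * rng.standard_normal((n1, n2, n3))
    L = randomized_tsvd_approx(X, target_rank=r)
    print("relative error:", np.linalg.norm(L - G) / np.linalg.norm(G))
```

The Fourier-domain loop is what keeps the cost low: each slice needs only a tall-skinny QR and an SVD of a k x n2 sketch rather than a full SVD of the n1 x n2 slice.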