Unified development of multiplicative algorithms for linear and quadratic nonnegative matrix factorization.

IEEE Transactions on Neural Networks · Pub Date: 2011-12-01 · Epub Date: 2011-10-17 · DOI: 10.1109/TNN.2011.2170094
Zhirong Yang, Erkki Oja
Citations: 72

Abstract

Multiplicative updates have been widely used in approximative nonnegative matrix factorization (NMF) optimization because they are convenient to deploy. Their convergence proof is usually based on the minimization of an auxiliary upper-bounding function, the construction of which however remains specific and only available for limited types of dissimilarity measures. Here we make significant progress in developing convergent multiplicative algorithms for NMF. First, we propose a general approach to derive the auxiliary function for a wide variety of NMF problems, as long as the approximation objective can be expressed as a finite sum of monomials with real exponents. Multiplicative algorithms with theoretical guarantee of monotonically decreasing objective function sequence can thus be obtained. The solutions of NMF based on most commonly used dissimilarity measures such as α- and β-divergence as well as many other more comprehensive divergences can be derived by the new unified principle. Second, our method is extended to a nonseparable case that includes e.g., γ-divergence and Rényi divergence. Third, we develop multiplicative algorithms for NMF using second-order approximative factorizations, in which each factorizing matrix may appear twice. Preliminary numerical experiments demonstrate that the multiplicative algorithms developed using the proposed procedure can achieve satisfactory Karush-Kuhn-Tucker optimality. We also demonstrate NMF problems where algorithms by the conventional method fail to guarantee descent at each iteration but those by our principle are immune to such violation.
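For readers unfamiliar with multiplicative updates, the sketch below shows the classic multiplicative update rules of Lee and Seung for NMF under the squared Euclidean error (the β-divergence with β = 2), one of the dissimilarity measures covered by the unified principle described in the abstract. This is not the paper's unified derivation; it is a minimal illustrative NumPy implementation, and the matrix sizes, iteration count, and smoothing constant are assumptions made for the example.

```python
# Minimal sketch of classic multiplicative NMF updates (Lee & Seung, squared
# Euclidean error). It only illustrates the kind of update rule whose monotone
# descent is proved with the auxiliary-function technique discussed above.
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Approximate a nonnegative matrix V (m x n) as W @ H,
    with W (m x rank) and H (rank x n) both nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative update for H: elementwise scaling by a nonnegative
        # ratio, so nonnegativity is preserved and the Euclidean objective
        # does not increase.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # Symmetric update for W.
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

if __name__ == "__main__":
    V = np.abs(np.random.default_rng(1).random((30, 20)))
    W, H = nmf_multiplicative(V, rank=5)
    print("reconstruction error:", np.linalg.norm(V - W @ H))
```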
