{"title":"Triple Factorization-Based SNLF Representation With Improved Momentum-Incorporated AGD: A Knowledge Transfer Approach","authors":"Ming Li;Yan Song;Derui Ding;Ran Sun","doi":"10.1109/TKDE.2024.3450469","DOIUrl":null,"url":null,"abstract":"Symmetric, high-dimensional and sparse (SHiDS) networks usually contain rich knowledge regarding various patterns. To adequately extract useful information from SHiDS networks, a novel biased triple factorization-based (TF) symmetric and non-negative latent factor (SNLF) model is put forward by utilizing the transfer learning (TL) method, namely biased TL-incorporated TF-SNLF (BT\n<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>\n-SNLF) model. The proposed BT\n<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>\n-SNLF model mainly includes the following four ideas: 1) the implicit knowledge of the auxiliary matrix in the ternary rating domain is transferred to the target matrix in the numerical rating domain, facilitating the feature extraction; 2) two linear bias vectors are considered into the objective function to discover the knowledge describing the individual entity-oriented effect; 3) an improved momentum-incorporated additive gradient descent algorithm is developed to speed up the model convergence as well as guarantee the non-negativity of target SHiDS networks; and 4) a rigorous proof is provided to show that, under the assumption that the objective function is \n<inline-formula><tex-math>$L$</tex-math></inline-formula>\n-smooth and \n<inline-formula><tex-math>$\\mu$</tex-math></inline-formula>\n-convex, when \n<inline-formula><tex-math>$t\\geq t_{0}$</tex-math></inline-formula>\n, the algorithm begins to descend and it can find an \n<inline-formula><tex-math>$\\epsilon$</tex-math></inline-formula>\n-solution within \n<inline-formula><tex-math>$O(ln((1+\\frac{\\mu L}{L(1+\\mu )+8\\mu })/\\epsilon ))$</tex-math></inline-formula>\n. 
Experimental results on six datasets from real applications demonstrate the effectiveness of our proposed T\n<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>\n-SNLF and BT\n<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>\n-SNLF models.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"36 12","pages":"9448-9463"},"PeriodicalIF":8.9000,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Knowledge and Data Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10652887/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Symmetric, high-dimensional and sparse (SHiDS) networks usually contain rich knowledge regarding various patterns. To adequately extract useful information from SHiDS networks, a novel biased triple factorization-based (TF) symmetric and non-negative latent factor (SNLF) model is put forward by utilizing the transfer learning (TL) method, namely the biased TL-incorporated TF-SNLF (BT$^{2}$-SNLF) model. The proposed BT$^{2}$-SNLF model rests on the following four ideas: 1) the implicit knowledge of the auxiliary matrix in the ternary rating domain is transferred to the target matrix in the numerical rating domain, facilitating feature extraction; 2) two linear bias vectors are incorporated into the objective function to discover the knowledge describing individual entity-oriented effects; 3) an improved momentum-incorporated additive gradient descent (AGD) algorithm is developed to speed up model convergence while guaranteeing the non-negativity of the target SHiDS networks; and 4) a rigorous proof shows that, under the assumption that the objective function is $L$-smooth and $\mu$-convex, the algorithm begins to descend once $t \geq t_{0}$, and it finds an $\epsilon$-solution within $O(\ln((1+\frac{\mu L}{L(1+\mu)+8\mu})/\epsilon))$ iterations. Experimental results on six datasets from real applications demonstrate the effectiveness of our proposed T$^{2}$-SNLF and BT$^{2}$-SNLF models.
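To make the abstract's ingredients concrete, here is a minimal toy sketch of a symmetric triple factorization $X \approx USU^{\top}$ fitted with plain momentum gradient descent plus a non-negativity projection after each step. This is only an illustration of the generic setup the abstract alludes to; the learning rate, momentum coefficient, and random initialization are assumptions, and the code is emphatically not the paper's BT$^{2}$-SNLF model, its bias vectors, or its improved momentum-incorporated AGD algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def tf_momentum_sketch(X, k, lr=1e-3, beta=0.9, iters=2000):
    """Toy symmetric triple factorization X ~ U S U^T, fitted with
    momentum gradient descent and a clamp that keeps U, S >= 0.
    Illustrative sketch only -- not the paper's algorithm."""
    n = X.shape[0]
    U = rng.random((n, k))
    S = rng.random((k, k))
    S = (S + S.T) / 2                      # keep S symmetric
    vU = np.zeros_like(U)
    vS = np.zeros_like(S)
    for _ in range(iters):
        R = U @ S @ U.T - X                # residual of the fit
        gU = 2 * (R @ U @ S.T + R.T @ U @ S)   # d/dU ||R||_F^2
        gS = 2 * (U.T @ R @ U)                 # d/dS ||R||_F^2
        vU = beta * vU + (1 - beta) * gU   # momentum (EMA of gradients)
        vS = beta * vS + (1 - beta) * gS
        U = np.maximum(U - lr * vU, 0.0)   # project back onto U >= 0
        S = np.maximum(S - lr * vS, 0.0)   # project back onto S >= 0
    return U, S

# Toy data: a symmetric non-negative matrix with known rank-2 structure.
U0 = rng.random((6, 2))
S0 = rng.random((2, 2))
S0 = (S0 + S0.T) / 2
X = U0 @ S0 @ U0.T

U, S = tf_momentum_sketch(X, k=2)
err = np.linalg.norm(U @ S @ U.T - X)
```

The projection `np.maximum(..., 0.0)` is the simplest way to enforce non-negativity after an additive update; the paper's contribution is precisely a more careful update rule with a provable descent phase and an $O(\ln(\cdot/\epsilon))$ iteration bound, which this sketch does not reproduce.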
About the journal:
The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools, and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.