Cross-domain recommendation via knowledge distillation

Xiuze Li, Zhenhua Huang, Zhengyang Wu, Changdong Wang, Yunwen Chen

Knowledge-Based Systems, Volume 311, Article 113112. Published 2025-02-04. DOI: 10.1016/j.knosys.2025.113112

Citations: 0
Abstract
Recommendation systems frequently suffer from data sparsity, resulting in less-than-ideal recommendations. A prominent solution to this problem is Cross-Domain Recommendation (CDR), which employs data from multiple domains to mitigate data sparsity and cold-start issues. Nevertheless, current mainstream methods, such as feature mapping and co-training, which explore domain relationships, overlook latent user–user and user–item similarities in the shared user–item interaction graph. Motivated by these deficiencies, this paper introduces KDCDR, a novel cross-domain recommendation framework that relies on knowledge distillation to exploit the information in this graph. KDCDR aims to improve recommendation performance in both domains by efficiently utilizing information from the shared interaction graph. Furthermore, we enhance the effectiveness of user and item representations by exploring the relationships between user–user similarity, item–item similarity, and user–item interactions. The developed scheme uses each inner-domain graph as a teacher and the cross-domain graph as a student, where the student learns by distilling knowledge from the two teachers through a high-temperature distillation process. In addition, we introduce a dynamic weight that regulates the learning process, preventing the student network from overly favoring one domain and from absorbing knowledge that the teachers have taught incorrectly. Through extensive experiments on four real-world datasets, KDCDR demonstrates significant improvements over state-of-the-art methods, confirming its effectiveness in addressing data sparsity and enhancing cross-domain recommendation performance. Our code and data are available at https://github.com/pandas-bondage/KDCDR.
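The dual-teacher distillation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the softmax temperature, per-teacher KL terms, and the dynamic weighting rule (weighting each teacher by its share of the current distillation gap) are assumptions, and names such as `dual_teacher_distillation_loss` are hypothetical.

```python
import math


def softened_probs(logits, temperature):
    """Softmax over logits at the given temperature (higher T -> softer distribution)."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions of equal length."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def dual_teacher_distillation_loss(student_logits, teacher_a_logits,
                                   teacher_b_logits, temperature=4.0):
    """Distill a cross-domain student from two inner-domain teachers.

    The dynamic weights here give each teacher a share of the loss
    proportional to its current KL gap, so neither domain's teacher
    dominates the student -- an illustrative stand-in for the paper's
    dynamic weighting scheme.
    """
    student = softened_probs(student_logits, temperature)
    per_teacher_kl = []
    for t_logits in (teacher_a_logits, teacher_b_logits):
        teacher = softened_probs(t_logits, temperature)
        per_teacher_kl.append(kl_divergence(teacher, student))
    total_gap = sum(per_teacher_kl) + 1e-12
    weights = [kl / total_gap for kl in per_teacher_kl]
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures,
    # as in standard knowledge distillation.
    return temperature ** 2 * sum(w * kl for w, kl in zip(weights, per_teacher_kl))
```

When the student already matches both teachers the loss vanishes; as either teacher's softened distribution diverges from the student's, that teacher's KL term grows and its dynamic weight rises with it.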
Journal description:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems based on knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computation techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.