Meta Recommendation With Robustness Improvement

IF 8.9 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · IEEE Transactions on Knowledge and Data Engineering · Vol. 37, No. 2, pp. 781-793 · Pub Date: 2024-11-29 · DOI: 10.1109/TKDE.2024.3509416
Zeyu Zhang;Chaozhuo Li;Xu Chen;Xing Xie;Philip S. Yu
Citations: 0

Abstract

Meta learning has been recognized as an effective remedy for the cold-start problem in the recommendation domain. Existing models aim to learn how to generalize from the user behaviors in the training set to the testing set. However, in cold-start settings, with only a small number of training samples, the testing distribution may easily deviate from the training one, which can invalidate the learned generalization patterns and lower the recommendation performance. To alleviate this problem, we propose a robust meta recommender framework that addresses the distribution shift problem. Specifically, we argue that distribution shift may exist at both the user and interaction levels, and to mitigate both simultaneously, we design a novel distributionally robust model that hierarchically reweights the training samples. Different sample weights correspond to different training distributions, and we minimize the largest loss induced by sample weights in a simplex, which essentially optimizes an upper bound of the testing loss. In addition, we analyze the convergence rate and generalization error bound of our framework to provide further theoretical insight. Empirically, we conduct extensive experiments based on different meta recommender models and real-world datasets to verify the generality and effectiveness of our framework.
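The core minimax idea in the abstract — minimizing the largest loss induced by sample weights drawn from a simplex — can be sketched with an exponentiated-gradient (mirror-ascent) step on the weights. This is an illustrative reconstruction of generic distributionally robust reweighting, not the authors' released code; the step size `eta` and the toy losses are assumptions for demonstration only.

```python
import numpy as np

def exponentiated_gradient_step(weights, losses, eta=0.5):
    """One mirror-ascent step on the probability simplex:
    samples with higher loss receive exponentially larger weight,
    approximating the inner max over training distributions."""
    w = weights * np.exp(eta * losses)
    return w / w.sum()          # re-project onto the simplex

# toy per-sample losses for 4 training interactions
losses = np.array([0.2, 1.5, 0.3, 0.9])
w = np.full(4, 0.25)            # start from the uniform distribution
for _ in range(50):
    w = exponentiated_gradient_step(w, losses)

# the reweighted loss upper-bounds the plain average loss,
# which is what the outer minimization then targets
robust_loss = float(w @ losses)
```

In the full framework this inner ascent would alternate with gradient descent on the recommender's parameters, and (per the abstract) the reweighting would be applied hierarchically at both the user and interaction levels rather than over a single flat set of samples.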
Source journal: IEEE Transactions on Knowledge and Data Engineering (Engineering: Electrical & Electronic)
CiteScore: 11.70 · Self-citation rate: 3.40% · Articles per year: 515 · Review time: 6 months
Journal description: The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools, and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.
Latest articles in this journal:
- 2024 Reviewers List
- Web-FTP: A Feature Transferring-Based Pre-Trained Model for Web Attack Detection
- Network-to-Network: Self-Supervised Network Representation Learning via Position Prediction
- AEGK: Aligned Entropic Graph Kernels Through Continuous-Time Quantum Walks
- Contextual Inference From Sparse Shopping Transactions Based on Motif Patterns