{"title":"基于图的推荐系统中增量学习的结构感知经验回放","authors":"Kian Ahrabian, Yishi Xu, Yingxue Zhang, Jiapeng Wu, Yuening Wang, M. Coates","doi":"10.1145/3459637.3482193","DOIUrl":null,"url":null,"abstract":"Large-scale recommender systems are integral parts of many services. With the recent rapid growth of accessible data, the need for efficient training methods has arisen. Given the high computational cost of training state-of-the-art graph neural network (GNN) based models, it is infeasible to train them from scratch with every new set of interactions. In this work, we present a novel framework for incrementally training GNN-based models. Our framework takes advantage of an experience reply technique built on top of a structurally aware reservoir sampling method tailored for this setting. This framework addresses catastrophic forgetting, allowing the model to preserve its understanding of users' long-term behavioral patterns while adapting to new trends. Our experiments demonstrate the superior performance of our framework on numerous datasets when combined with state-of-the-art GNN-based models.","PeriodicalId":405296,"journal":{"name":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Structure Aware Experience Replay for Incremental Learning in Graph-based Recommender Systems\",\"authors\":\"Kian Ahrabian, Yishi Xu, Yingxue Zhang, Jiapeng Wu, Yuening Wang, M. Coates\",\"doi\":\"10.1145/3459637.3482193\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large-scale recommender systems are integral parts of many services. With the recent rapid growth of accessible data, the need for efficient training methods has arisen. 
Given the high computational cost of training state-of-the-art graph neural network (GNN) based models, it is infeasible to train them from scratch with every new set of interactions. In this work, we present a novel framework for incrementally training GNN-based models. Our framework takes advantage of an experience reply technique built on top of a structurally aware reservoir sampling method tailored for this setting. This framework addresses catastrophic forgetting, allowing the model to preserve its understanding of users' long-term behavioral patterns while adapting to new trends. Our experiments demonstrate the superior performance of our framework on numerous datasets when combined with state-of-the-art GNN-based models.\",\"PeriodicalId\":405296,\"journal\":{\"name\":\"Proceedings of the 30th ACM International Conference on Information & Knowledge Management\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 30th ACM International Conference on Information & Knowledge Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3459637.3482193\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th ACM International Conference on Information & Knowledge 
Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3459637.3482193","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Structure Aware Experience Replay for Incremental Learning in Graph-based Recommender Systems
Large-scale recommender systems are integral parts of many services. With the recent rapid growth of accessible data, the need for efficient training methods has arisen. Given the high computational cost of training state-of-the-art graph neural network (GNN) based models, it is infeasible to train them from scratch with every new set of interactions. In this work, we present a novel framework for incrementally training GNN-based models. Our framework takes advantage of an experience replay technique built on top of a structurally aware reservoir sampling method tailored for this setting. This framework addresses catastrophic forgetting, allowing the model to preserve its understanding of users' long-term behavioral patterns while adapting to new trends. Our experiments demonstrate the superior performance of our framework on numerous datasets when combined with state-of-the-art GNN-based models.
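To make the core idea concrete, the following is a minimal sketch of a structure-aware replay buffer built on weighted reservoir sampling (the Efraimidis–Spirakis A-Res algorithm). The degree-based weighting function is an illustrative assumption, not the paper's exact scoring; the paper's structural weighting may differ.

```python
import heapq
import random


def weighted_reservoir_sample(stream, weights, k, seed=0):
    """Weighted reservoir sampling (Efraimidis-Spirakis A-Res).

    Keeps a fixed-size sample of `stream` in one pass, where items
    with larger weight are more likely to be retained. In a
    structure-aware replay buffer, the weight could be a structural
    statistic such as node degree (an assumption for illustration).
    """
    rng = random.Random(seed)
    heap = []  # min-heap of (key, item); smallest key is evicted first
    for item, w in zip(stream, weights):
        # Each item gets a random key u^(1/w); the k largest keys
        # form a weighted sample of the stream seen so far.
        key = rng.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]


# Hypothetical usage: retain 5 past user-item interactions for replay,
# weighting each interaction by the degree of the user node involved.
interactions = list(range(20))            # stand-in interaction IDs
degrees = [1.0 + (i % 4) for i in interactions]  # stand-in node degrees
replay_buffer = weighted_reservoir_sample(interactions, degrees, k=5)
```

At each incremental training step, the sampled `replay_buffer` would be mixed into the batch of new interactions so the model revisits structurally important past data while fitting recent trends.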