Surong Yan, Chenglong Shi, Haosen Wang, Lei Chen, Ling Jiang, Ruilin Guo, Kwei-Jay Lin
{"title":"教学与探索:一种多重信息引导的序列推荐的有效强化学习","authors":"Surong Yan, Chenglong Shi, Haosen Wang, Lei Chen, Ling Jiang, Ruilin Guo, Kwei-Jay Lin","doi":"10.1145/3630003","DOIUrl":null,"url":null,"abstract":"Casting sequential recommendation (SR) as a reinforcement learning (RL) problem is promising and some RL-based methods have been proposed for SR. However, these models are sub-optimal due to the following limitations: a) they fail to leverage the supervision signals in the RL training to capture users’ explicit preferences, leading to slow convergence; and b) they do not utilize auxiliary information (e.g., knowledge graph) to avoid blindness when exploring users’ potential interests. To address the above-mentioned limitations, we propose a multiplex information-guided RL model (MELOD), which employs a novel RL training framework with Teach and Explore components for SR. We adopt a Teach component to accurately capture users’ explicit preferences and speed up RL convergence. Meanwhile, we design a dynamic intent induction network (DIIN) as a policy function to generate diverse predictions. We utilize the DIIN for the Explore component to mine users’ potential interests by conducting a sequential and knowledge information joint-guided exploration. Moreover, a sequential and knowledge-aware reward function is designed to achieve stable RL training. These components significantly improve MELOD’s performance and convergence against existing RL algorithms to achieve effectiveness and efficiency. 
Experimental results on seven real-world datasets show that our model significantly outperforms state-of-the-art methods.","PeriodicalId":50936,"journal":{"name":"ACM Transactions on Information Systems","volume":"5 12","pages":"0"},"PeriodicalIF":5.4000,"publicationDate":"2023-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Teach and Explore: A Multiplex Information-guided Effective and Efficient Reinforcement Learning for Sequential Recommendation\",\"authors\":\"Surong Yan, Chenglong Shi, Haosen Wang, Lei Chen, Ling Jiang, Ruilin Guo, Kwei-Jay Lin\",\"doi\":\"10.1145/3630003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Casting sequential recommendation (SR) as a reinforcement learning (RL) problem is promising and some RL-based methods have been proposed for SR. However, these models are sub-optimal due to the following limitations: a) they fail to leverage the supervision signals in the RL training to capture users’ explicit preferences, leading to slow convergence; and b) they do not utilize auxiliary information (e.g., knowledge graph) to avoid blindness when exploring users’ potential interests. To address the above-mentioned limitations, we propose a multiplex information-guided RL model (MELOD), which employs a novel RL training framework with Teach and Explore components for SR. We adopt a Teach component to accurately capture users’ explicit preferences and speed up RL convergence. Meanwhile, we design a dynamic intent induction network (DIIN) as a policy function to generate diverse predictions. We utilize the DIIN for the Explore component to mine users’ potential interests by conducting a sequential and knowledge information joint-guided exploration. Moreover, a sequential and knowledge-aware reward function is designed to achieve stable RL training. 
These components significantly improve MELOD’s performance and convergence against existing RL algorithms to achieve effectiveness and efficiency. Experimental results on seven real-world datasets show that our model significantly outperforms state-of-the-art methods.\",\"PeriodicalId\":50936,\"journal\":{\"name\":\"ACM Transactions on Information Systems\",\"volume\":\"5 12\",\"pages\":\"0\"},\"PeriodicalIF\":5.4000,\"publicationDate\":\"2023-10-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Information Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3630003\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3630003","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Teach and Explore: A Multiplex Information-guided Effective and Efficient Reinforcement Learning for Sequential Recommendation
Casting sequential recommendation (SR) as a reinforcement learning (RL) problem is promising, and several RL-based methods have been proposed for SR. However, these models are sub-optimal due to the following limitations: (a) they fail to leverage the supervision signals available during RL training to capture users' explicit preferences, leading to slow convergence; and (b) they do not exploit auxiliary information (e.g., a knowledge graph) to avoid blind exploration of users' potential interests. To address these limitations, we propose a multiplex information-guided RL model (MELOD), which employs a novel RL training framework with Teach and Explore components for SR. The Teach component accurately captures users' explicit preferences and speeds up RL convergence. Meanwhile, we design a dynamic intent induction network (DIIN) as a policy function to generate diverse predictions. The Explore component uses the DIIN to mine users' potential interests through an exploration jointly guided by sequential and knowledge information. Moreover, a sequential and knowledge-aware reward function is designed to stabilize RL training. Together, these components improve MELOD's performance and convergence over existing RL algorithms, achieving both effectiveness and efficiency. Experimental results on seven real-world datasets show that our model significantly outperforms state-of-the-art methods.
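The Teach/Explore idea in the abstract can be illustrated with a minimal generic sketch: a supervised cross-entropy term on the user's observed next item (the "teach" signal) combined with a REINFORCE-style term that reinforces explored items in proportion to a reward. This is an illustrative assumption, not MELOD's actual implementation; all function names, the softmax policy, and the fixed weighting `alpha` are hypothetical.

```python
import math

def softmax(scores):
    # Convert raw item scores into a probability distribution (the policy).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def teach_loss(scores, target_item):
    # Supervised signal: cross-entropy on the user's actual next item,
    # loosely analogous to a Teach-style explicit-preference objective.
    probs = softmax(scores)
    return -math.log(probs[target_item])

def explore_loss(scores, sampled_item, reward):
    # REINFORCE-style signal: reinforce a sampled (explored) item in
    # proportion to its reward, loosely analogous to an Explore objective.
    probs = softmax(scores)
    return -reward * math.log(probs[sampled_item])

def combined_loss(scores, target_item, sampled_item, reward, alpha=0.5):
    # Weighted sum of the two signals; alpha trades off imitation of
    # observed behaviour against reward-guided exploration.
    return (alpha * teach_loss(scores, target_item)
            + (1 - alpha) * explore_loss(scores, sampled_item, reward))
```

In such a scheme the supervised term anchors the policy to observed interactions (speeding convergence), while the reward-weighted term lets the policy drift toward high-reward items outside the observed sequence.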
Journal Introduction:
The ACM Transactions on Information Systems (TOIS) publishes papers on information retrieval (such as search engines, recommender systems) that contain:
new principled information retrieval models or algorithms with sound empirical validation;
observational, experimental and/or theoretical studies yielding new insights into information retrieval or information seeking;
accounts of applications of existing information retrieval techniques that shed light on the strengths and weaknesses of the techniques;
formalization of new information retrieval or information seeking tasks and of methods for evaluating the performance on those tasks;
development of content (text, image, speech, video, etc.) analysis methods to support information retrieval and information seeking;
development of computational models of user information preferences and interaction behaviors;
creation and analysis of evaluation methodologies for information retrieval and information seeking; or
surveys of existing work that propose a significant synthesis.
The information retrieval scope of ACM Transactions on Information Systems (TOIS) appeals to industry practitioners for its wealth of creative ideas, and to academic researchers for its descriptions of their colleagues' work.