{"title":"Invariant representation learning to popularity distribution shift for recommendation","authors":"","doi":"10.1007/s11280-024-01242-x","DOIUrl":null,"url":null,"abstract":"<h3>Abstract</h3> <p>Recommender systems often suffer from severe performance drops due to popularity distribution shift (PDS), which arises from inconsistencies in item popularity between training and test data. Most existing methods aimed at mitigating PDS focus on reducing popularity bias, but they usually require inaccessible information or rely on implausible assumptions. To solve the above problem, in this work, we propose a novel framework called <strong>I</strong>nvariant <strong>R</strong>epresentation <strong>L</strong>earning (<strong>IRL</strong>) to PDS. Specifically, for simulating diverse popularity environments where popular items and active users become even more popular and active, or conversely, we apply perturbations to the user-item interaction matrix by adjusting the weights of popular items and active users in the matrix, without any prior assumptions or specialized information. In different simulated popularity environments, dissimilarities in the distribution of representations for items and users occur. We further utilize contrastive learning to minimize the dissimilarities among the representations of users and items under different simulated popularity environments, resulting in invariant representations that remain consistent across varying popularity distributions. 
Extensive experiments on three real-world datasets demonstrate that IRL outperforms state-of-the-art baselines in effectively alleviating PDS for recommendation.</p>","PeriodicalId":501180,"journal":{"name":"World Wide Web","volume":"5 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Wide Web","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11280-024-01242-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Recommender systems often suffer severe performance drops due to popularity distribution shift (PDS), which arises when item popularity is inconsistent between the training and test data. Most existing methods for mitigating PDS focus on reducing popularity bias, but they usually require inaccessible information or rely on implausible assumptions. To address this problem, we propose Invariant Representation Learning (IRL), a novel framework for handling PDS. Specifically, to simulate diverse popularity environments, in which popular items and active users become even more popular and active, or the reverse, we perturb the user-item interaction matrix by adjusting the weights of popular items and active users, without any prior assumptions or specialized information. Across these simulated popularity environments, the distributions of user and item representations diverge. We then use contrastive learning to minimize the dissimilarity between the representations of users and items under the different simulated environments, yielding invariant representations that remain consistent across varying popularity distributions. Extensive experiments on three real-world datasets demonstrate that IRL outperforms state-of-the-art baselines in effectively alleviating PDS for recommendation.
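The two core ideas of the abstract, reweighting the interaction matrix by item popularity and user activity to simulate environments, and contrastively aligning representations across those environments, can be illustrated with a minimal NumPy sketch. This is not the paper's actual formulation: the power-law weighting with exponent `alpha`, the use of perturbed matrix rows as stand-in "representations", and the InfoNCE-style loss are all illustrative assumptions.

```python
import numpy as np

# Toy binary user-item interaction matrix (4 users x 5 items);
# a real system would use observed implicit feedback.
R = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 0],
              [1, 1, 1, 1, 1],
              [0, 0, 1, 0, 1]], dtype=float)

def perturb_by_popularity(R, alpha):
    """Reweight interactions by item popularity and user activity.

    alpha > 0 amplifies popular items / active users (they become
    "even more popular"); alpha < 0 suppresses them. The power-law
    form is an illustrative choice, not the paper's exact formula.
    """
    item_pop = R.sum(axis=0)   # interactions per item
    user_act = R.sum(axis=1)   # interactions per user
    item_w = (1.0 + item_pop) ** alpha
    user_w = (1.0 + user_act) ** alpha
    return R * user_w[:, None] * item_w[None, :]

def info_nce(z1, z2, tau=0.2):
    """InfoNCE-style contrastive loss: pull together the two views
    of the same user, push apart views of different users."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                   # pairwise similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                      # diagonal = positive pairs

# Two simulated popularity environments: amplified vs. suppressed.
R_up = perturb_by_popularity(R, alpha=0.5)
R_down = perturb_by_popularity(R, alpha=-0.5)

# Stand-in user "representations": rows of the perturbed matrices.
# A real model would encode them with a learnable recommender backbone.
loss = info_nce(R_up, R_down)
print(f"contrastive alignment loss: {loss:.4f}")
```

Minimizing such a loss over the model's parameters pushes the user and item representations to agree regardless of how the popularity distribution is perturbed, which is the invariance the framework targets.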