{"title":"冷启动视频推荐的潜在因子表示","authors":"S. Roy, Sharath Chandra Guntuku","doi":"10.1145/2959100.2959172","DOIUrl":null,"url":null,"abstract":"Recommending items that have rarely/never been viewed by users is a bottleneck for collaborative filtering (CF) based recommendation algorithms. To alleviate this problem, item content representation (mostly in textual form) has been used as auxiliary information for learning latent factor representations. In this work we present a novel method for learning latent factor representation for videos based on modelling the emotional connection between user and item. First of all we present a comparative analysis of state-of-the art emotion modelling approaches that brings out a surprising finding regarding the efficacy of latent factor representations in modelling emotion in video content. Based on this finding we present a method visual-CLiMF for learning latent factor representations for cold start videos based on implicit feedback. Visual-CLiMF is based on the popular collaborative less-is-more approach but demonstrates how emotional aspects of items could be used as auxiliary information to improve MRR performance. Experiments on a new data set and the Amazon products data set demonstrate the effectiveness of visual-CLiMF which outperforms existing CF methods with or without content information.","PeriodicalId":315651,"journal":{"name":"Proceedings of the 10th ACM Conference on Recommender Systems","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"52","resultStr":"{\"title\":\"Latent Factor Representations for Cold-Start Video Recommendation\",\"authors\":\"S. Roy, Sharath Chandra Guntuku\",\"doi\":\"10.1145/2959100.2959172\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recommending items that have rarely/never been viewed by users is a bottleneck for collaborative filtering (CF) based recommendation algorithms. To alleviate this problem, item content representation (mostly in textual form) has been used as auxiliary information for learning latent factor representations. In this work we present a novel method for learning latent factor representation for videos based on modelling the emotional connection between user and item. First of all we present a comparative analysis of state-of-the art emotion modelling approaches that brings out a surprising finding regarding the efficacy of latent factor representations in modelling emotion in video content. Based on this finding we present a method visual-CLiMF for learning latent factor representations for cold start videos based on implicit feedback. Visual-CLiMF is based on the popular collaborative less-is-more approach but demonstrates how emotional aspects of items could be used as auxiliary information to improve MRR performance. 
Experiments on a new data set and the Amazon products data set demonstrate the effectiveness of visual-CLiMF which outperforms existing CF methods with or without content information.\",\"PeriodicalId\":315651,\"journal\":{\"name\":\"Proceedings of the 10th ACM Conference on Recommender Systems\",\"volume\":\"72 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-09-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"52\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 10th ACM Conference on Recommender Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2959100.2959172\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 10th ACM Conference on Recommender Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2959100.2959172","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Latent Factor Representations for Cold-Start Video Recommendation
Recommending items that have rarely or never been viewed by users is a bottleneck for collaborative filtering (CF) based recommendation algorithms. To alleviate this problem, item content representations (mostly in textual form) have been used as auxiliary information for learning latent factor representations. In this work we present a novel method for learning latent factor representations for videos based on modelling the emotional connection between user and item. First, we present a comparative analysis of state-of-the-art emotion modelling approaches, which brings out a surprising finding regarding the efficacy of latent factor representations in modelling emotion in video content. Based on this finding, we present visual-CLiMF, a method for learning latent factor representations for cold-start videos from implicit feedback. Visual-CLiMF builds on the popular collaborative less-is-more filtering (CLiMF) approach and demonstrates how emotional aspects of items can be used as auxiliary information to improve Mean Reciprocal Rank (MRR) performance. Experiments on a new data set and the Amazon products data set demonstrate the effectiveness of visual-CLiMF, which outperforms existing CF methods with or without content information.
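
The abstract names the CLiMF objective and MRR but gives no formulation. For orientation only, below is a minimal NumPy sketch of the standard CLiMF update (a smoothed lower bound on MRR over implicit feedback, following Shi et al., RecSys 2012), together with a hypothetical ridge-regression step that maps item content features (e.g., emotion descriptors of video frames) to latent factors for cold-start items. The function names, the linear content-to-factor map, and all hyperparameters are illustrative assumptions, not the paper's actual visual-CLiMF formulation.

# Sketch of a CLiMF-style update plus a hypothetical cold-start content mapping.
# This is NOT the visual-CLiMF method from the paper; it only illustrates the
# "less-is-more" smoothed-MRR objective the abstract refers to.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def climf_epoch(U, V, relevant, lr=0.01, reg=0.001):
    """One SGD pass over all users (gradient ascent on the smoothed MRR bound).

    U: (n_users, d) user factors
    V: (n_items, d) item factors
    relevant: dict mapping user index -> array of item indices with implicit feedback
    """
    for i, items in relevant.items():
        if len(items) == 0:
            continue
        Vi = V[items]                           # (m, d) factors of the user's relevant items
        f = Vi @ U[i]                           # predicted scores f_ij
        G = sigmoid(f[:, None] - f[None, :])    # G[k, j] = g(f_ik - f_ij), pairwise terms

        # gradient w.r.t. the user factor U_i
        dU = (sigmoid(-f)[:, None] * Vi).sum(axis=0)
        dU += (G[:, :, None] * (Vi[None, :, :] - Vi[:, None, :])).sum(axis=(0, 1))
        U[i] += lr * (dU - reg * U[i])

        # gradient w.r.t. each relevant item factor V_j (scalar coefficient per j)
        dV = sigmoid(-f) + (G - G.T).sum(axis=0)
        V[items] += lr * (dV[:, None] * U[i][None, :] - reg * Vi)
    return U, V

def cold_start_item_factors(X_new, X_warm, V_warm, reg=1.0):
    """Hypothetical cold-start step: ridge-regress warm-item factors from their
    content features, then apply the learned linear map to unseen items.

    X_new, X_warm: (n_items, p) content feature matrices; V_warm: (n_items, d) factors.
    """
    p = X_warm.shape[1]
    W = np.linalg.solve(X_warm.T @ X_warm + reg * np.eye(p), X_warm.T @ V_warm)
    return X_new @ W

Under these assumptions, a cold-start video never seen in training still receives a latent factor (via cold_start_item_factors) and can be ranked by the dot product with any user factor, which is the general mechanism the abstract describes for using content as auxiliary information.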