{"title":"Improving Recommender Systems with Human-in-the-Loop","authors":"Dmitry Ustalov, N. Fedorova, Nikita Pavlichenko","doi":"10.1145/3523227.3547373","DOIUrl":null,"url":null,"abstract":"Today, most recommender systems employ Machine Learning to recommend posts, products, and other items, usually produced by the users. Although the impressive progress in Deep Learning and Reinforcement Learning, we observe that recommendations made by such systems still do not correlate with actual human preferences. In our tutorial, we will bridge the gap between crowdsourcing and recommender systems communities by showing how one can incorporate human-in-the-loop into their recommender system to gather the real human feedback on the ranked recommendations. We will discuss the ranking data lifecycle and run through it step-by-step. A significant portion of tutorial time is devoted to a hands-on practice, when the attendees will, under our guidance, sample recommendations and build the ground truth dataset using crowdsourced data, and compute the offline evaluation scores.","PeriodicalId":443279,"journal":{"name":"Proceedings of the 16th ACM Conference on Recommender Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 16th ACM Conference on Recommender Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3523227.3547373","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Today, most recommender systems employ Machine Learning to recommend posts, products, and other items, usually produced by the users themselves. Despite the impressive progress in Deep Learning and Reinforcement Learning, we observe that recommendations made by such systems still do not correlate well with actual human preferences. In our tutorial, we bridge the gap between the crowdsourcing and recommender systems communities by showing how one can incorporate a human-in-the-loop component into a recommender system to gather real human feedback on ranked recommendations. We discuss the ranking data lifecycle and walk through it step by step. A significant portion of the tutorial is devoted to hands-on practice, in which attendees will, under our guidance, sample recommendations, build a ground-truth dataset from crowdsourced data, and compute offline evaluation scores.
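The hands-on part culminates in offline evaluation of ranked recommendations against a crowdsourced ground truth. The sketch below is a minimal illustration of those last two steps, assuming graded relevance judgments aggregated by simple majority vote and NDCG@k as the offline metric; the tutorial itself may use different aggregation methods and metrics, and all names here are hypothetical.

```python
# Illustrative sketch: aggregate crowdsourced relevance labels into a
# ground-truth dataset by majority vote, then score a ranked recommendation
# list with NDCG@k. Function and variable names are hypothetical.

import math
from collections import Counter, defaultdict


def aggregate_by_majority_vote(labels):
    """labels: iterable of (item_id, worker_id, relevance) judgments.
    Returns {item_id: relevance} chosen by majority vote per item."""
    votes = defaultdict(Counter)
    for item_id, _worker_id, relevance in labels:
        votes[item_id][relevance] += 1
    return {item_id: counter.most_common(1)[0][0]
            for item_id, counter in votes.items()}


def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the first k relevance values."""
    return sum(rel / math.log2(rank + 2)
               for rank, rel in enumerate(relevances[:k]))


def ndcg_at_k(ranked_items, ground_truth, k):
    """NDCG@k of a ranked list against the aggregated ground truth."""
    gains = [ground_truth.get(item, 0) for item in ranked_items]
    ideal = sorted(ground_truth.values(), reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0


# Toy usage: three workers judge three recommended items on a 0-2 scale.
crowd_labels = [
    ("item_a", "w1", 2), ("item_a", "w2", 2), ("item_a", "w3", 1),
    ("item_b", "w1", 0), ("item_b", "w2", 1), ("item_b", "w3", 0),
    ("item_c", "w1", 1), ("item_c", "w2", 1), ("item_c", "w3", 2),
]
ground_truth = aggregate_by_majority_vote(crowd_labels)
print(ndcg_at_k(["item_a", "item_c", "item_b"], ground_truth, k=3))
```

In practice, crowdsourced labels are noisy, so majority vote is only the simplest baseline; more robust aggregation models weight workers by their estimated accuracy before the offline scores are computed.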