{"title":"NEAR: A Partner to Explain Any Factorised Recommender System","authors":"Sixun Ouyang, A. Lawlor","doi":"10.1145/3314183.3323457","DOIUrl":null,"url":null,"abstract":"Many explainable recommender systems construct explanations of the recommendations these models produce, but it continues to be a di cult problem to explain to a user why an item was recommended by these high-dimensional latent factor models. In this work, We propose a technique that joint interpretations into recommendation training to make accurate predictions while at the same time learning to produce recommendations which have the most explanatory utility to the user. Our evaluation shows that we can jointly learn to make accurate and meaningful explanations with only a small sacri ce in recommendation accuracy. We also develop a new algorithm to measure explanation delity for the interpretation of top-n rankings. We prove that our approach can form the basis of a universal approach to explanation generation in recommender systems.","PeriodicalId":240482,"journal":{"name":"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3314183.3323457","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Many explainable recommender systems construct explanations of the recommendations these models produce, but it remains a difficult problem to explain to a user why an item was recommended by these high-dimensional latent factor models. In this work, we propose a technique that incorporates interpretations into recommendation training to make accurate predictions while at the same time learning to produce recommendations which have the most explanatory utility to the user. Our evaluation shows that we can jointly learn to make accurate and meaningful explanations with only a small sacrifice in recommendation accuracy. We also develop a new algorithm to measure explanation fidelity for the interpretation of top-n rankings. We prove that our approach can form the basis of a universal approach to explanation generation in recommender systems.
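The abstract does not specify how explanation fidelity is computed for top-n rankings, but one common way to operationalise the idea is to compare the top-n list produced by the full latent factor model against the top-n list implied by the explanation alone. The sketch below is a hypothetical illustration under that assumption, not the paper's actual algorithm; the function names and the overlap-based definition are my own.

```python
# Hypothetical sketch of a top-n explanation fidelity measure:
# fidelity is taken to be the fraction of the full model's top-n items
# that an explanation-derived scoring would also place in its top-n.
# This is an illustrative assumption, not the algorithm from the paper.

def top_n(scores, n):
    """Return the ids of the n highest-scoring items."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

def explanation_fidelity(model_scores, explanation_scores, n):
    """Overlap between the model's top-n and the explanation's top-n, in [0, 1]."""
    model_top = set(top_n(model_scores, n))
    expl_top = set(top_n(explanation_scores, n))
    return len(model_top & expl_top) / n

# Toy example: scores from a latent factor model vs. scores reconstructed
# from an explanation (e.g. a sparse surrogate over interpretable features).
model_scores = {"item_a": 0.9, "item_b": 0.7, "item_c": 0.4, "item_d": 0.2}
expl_scores = {"item_a": 0.8, "item_c": 0.6, "item_b": 0.3, "item_d": 0.1}

print(explanation_fidelity(model_scores, expl_scores, 2))  # -> 0.5
```

A high fidelity under this definition would mean the explanation faithfully reflects what drove the ranking; a low value would signal that the explanation, however plausible, does not track the model's actual behaviour.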