{"title":"Explanations and User Control in Recommender Systems","authors":"D. Jannach, Michael Jugovac, Ingrid Nunes","doi":"10.1145/3345002.3349293","DOIUrl":null,"url":null,"abstract":"1 BACKGROUND The personalized selection and presentation of content have become common in today’s online world, for example on media streaming sites, e-commerce shops, and social networks. This automated personalization is often accomplished by recommender systems, which continuously collect and interpret information about the individual user. To determine which information items should be presented, these systems typically rely on machine learning. Over the last decades, a large variety of machine learning techniques of increasing complexity have been applied for building recommender systems. The recommendation models that are learned by such modern algorithms are, however, usually seen as black boxes. Technically, they often consist of values for hundreds or thousands of variables, making it impossible to provide a humanunderstandable rationale why a certain item is recommended to a particular user. Providing users with an explanation or at least with an intuition why an item is recommended can, however, be crucial, both for the acceptance of an individual recommendation and for the establishment of user trust towards the system as a whole [3]. Furthermore, such system-provided explanations can not only contribute to the acceptance of the system, but also serve as entry points for interactive approaches that allow users to give feedback as a means to correct system assumptions and, thus, take control of the recommendation process.","PeriodicalId":153835,"journal":{"name":"Proceedings of the 23rd International Workshop on Personalization and Recommendation on the Web and Beyond","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 23rd International Workshop on Personalization and Recommendation on the Web and Beyond","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3345002.3349293","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 22
Abstract
1 BACKGROUND

The personalized selection and presentation of content have become common in today's online world, for example on media streaming sites, in e-commerce shops, and on social networks. This automated personalization is often accomplished by recommender systems, which continuously collect and interpret information about the individual user. To determine which information items should be presented, these systems typically rely on machine learning. Over the past decades, a wide variety of increasingly complex machine learning techniques have been applied to build recommender systems. The recommendation models learned by such modern algorithms are, however, usually seen as black boxes. Technically, they often consist of values for hundreds or thousands of variables, making it impossible to provide a human-understandable rationale for why a certain item is recommended to a particular user. Providing users with an explanation, or at least an intuition, of why an item is recommended can, however, be crucial, both for the acceptance of an individual recommendation and for establishing user trust in the system as a whole [3]. Furthermore, such system-provided explanations not only contribute to the acceptance of the system, but can also serve as entry points for interactive approaches that allow users to give feedback, correct system assumptions, and thus take control of the recommendation process.
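To make the black-box point concrete, the following minimal Python sketch (not from the paper; the synthetic data, model size, and the naive "because you liked item X" explanation heuristic are all illustrative assumptions) trains a plain matrix-factorization recommender and shows that the system's internal rationale for a recommendation is nothing more than a dot product over opaque latent values:

import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, n_factors = 50, 40, 8

# Synthetic 1-5 star ratings; 0 marks an unobserved entry.
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
R[rng.random((n_users, n_items)) < 0.8] = 0.0  # hide 80% of ratings

# Latent factor matrices -- the "hundreds or thousands of variables"
# the abstract refers to (here 50*8 + 40*8 = 720 parameters).
P = 0.1 * rng.standard_normal((n_users, n_factors))
Q = 0.1 * rng.standard_normal((n_items, n_factors))

lr, reg = 0.01, 0.05
obs_u, obs_i = np.nonzero(R)
for _ in range(30):  # SGD over the observed ratings only
    for u, i in zip(obs_u, obs_i):
        err = R[u, i] - P[u] @ Q[i]
        pu = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

# Recommend the top-scoring unrated item for one user. The "reason"
# is just a sum of products of latent values with no human meaning.
u = 3
scores = P[u] @ Q.T
scores[R[u] > 0] = -np.inf  # exclude items the user already rated
top = int(np.argmax(scores))
print(f"Recommend item {top} to user {u} (score {scores[top]:.2f})")
print("Opaque rationale (per-factor contributions):", np.round(P[u] * Q[top], 3))

# A naive post-hoc explanation of the kind the abstract motivates:
# point to the rated item most similar to the recommendation in
# latent space ("because you liked item X").
rated = np.nonzero(R[u])[0]
sims = (Q[rated] @ Q[top]) / (
    np.linalg.norm(Q[rated], axis=1) * np.linalg.norm(Q[top]) + 1e-9)
print(f"Possible explanation: because you rated item {rated[np.argmax(sims)]}")

The per-factor contributions printed here are exactly the kind of values a user cannot interpret, whereas the similarity-based explanation attaches an interpretable anchor that, in an interactive system, the user could confirm or correct as feedback to the recommender.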