Title: Comparative assessment of rating prediction techniques under response uncertainty
Author: Sergej Sizov
DOI: 10.1145/3106426.3106506
Published in: Proceedings of the 7th International Conference on Web Intelligence, Mining and Semantics, 2017-08-23
Citation count: 0
Abstract
An objective assessment of collaborative filtering techniques and recommender systems requires suitable predictive accuracy metrics. In real life, however, individuals make their decisions under considerable uncertainty. This raises the question of to what extent a comparison between observed and predicted user responses can be taken as conclusive evidence of systematic quality differences between systems. In this paper, we accordingly justify the underlying assumptions of quality assessment, introduce an uncertainty-aware evaluation strategy for recommender comparisons, and demonstrate its feasibility and consistency in experiments with real users.
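The paper's own evaluation strategy is not reproduced here; the following is only a minimal illustrative sketch of the general problem the abstract describes. It assumes a simple synthetic setup (hypothetical noise levels, not taken from the paper): observed ratings are true preferences plus user response noise, and two predictors of genuinely different quality are compared by RMSE against the noisy observations. The response noise dominates both RMSE scores, so the measured gap between the two systems is small relative to the noise floor.

```python
import math
import random

random.seed(42)

n = 1000
# True (latent) user preferences on a 1-5 scale
true_pref = [random.uniform(1, 5) for _ in range(n)]

# Observed responses: true preference plus response uncertainty
# (sigma = 0.8 is an assumed, illustrative noise level)
observed = [t + random.gauss(0, 0.8) for t in true_pref]

# Two hypothetical predictors; A tracks the true preference more
# closely than B (sigma 0.2 vs 0.4 around the true value)
pred_a = [t + random.gauss(0, 0.2) for t in true_pref]
pred_b = [t + random.gauss(0, 0.4) for t in true_pref]

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

rmse_a = rmse(pred_a, observed)
rmse_b = rmse(pred_b, observed)
# Both scores sit near the response-noise floor (~0.8), so the
# measured difference between A and B is small by comparison
print(f"RMSE A: {rmse_a:.3f}, RMSE B: {rmse_b:.3f}")
```

Even though predictor A is twice as accurate as B with respect to the latent preferences, both RMSE values are dominated by the response noise, which is exactly why comparisons against noisy observed responses warrant the kind of uncertainty-aware treatment the paper proposes.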