{"title":"Impact of Query Sample Selection Bias on Information Retrieval System Ranking","authors":"M. Melucci","doi":"10.1109/DSAA.2016.43","DOIUrl":null,"url":null,"abstract":"Information Retrieval (IR) effectiveness measures commonly assume that the experimental query sets consist of randomly drawn queries that represent the population of queries submitted to IR systems. In many practical situations, however, this assumption is violated, in a problem known as sample selection bias. It follows that the systems participating in evaluation campaigns are ranked by biased estimators of effectiveness. In this paper, we address the problem of query sample selection bias in machine learning terms and study experimentally how retrieval system rankings are affected by it. To this end, we apply a number of retrieval effectiveness measures and query probability estimation methods useful to correct sample selection bias. We report that the ranking of the most effective systems and that of the least effective systems is fairly affected by query sample selection bias, while the ranking of the average systems is much more affected. We also report that the measure of bias depends on the retrieval measure used to rank systems and eventually on the search task being evaluated.","PeriodicalId":193885,"journal":{"name":"2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSAA.2016.43","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
Information Retrieval (IR) effectiveness measures commonly assume that experimental query sets consist of randomly drawn queries representative of the population of queries submitted to IR systems. In many practical situations, however, this assumption is violated, a problem known as sample selection bias. As a consequence, the systems participating in evaluation campaigns are ranked by biased estimators of effectiveness. In this paper, we frame query sample selection bias in machine-learning terms and study experimentally how it affects retrieval system rankings. To this end, we apply a number of retrieval effectiveness measures together with query probability estimation methods that can correct for sample selection bias. We find that the rankings of the most effective and of the least effective systems are only mildly affected by query sample selection bias, whereas the ranking of the average systems is much more affected. We also find that the magnitude of the bias depends on the retrieval measure used to rank systems and, ultimately, on the search task being evaluated.
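To make the correction idea concrete, below is a minimal sketch of importance-weighted effectiveness estimation, a standard way to correct sample selection bias when per-query probabilities can be estimated. It is not the paper's exact method; the function name, the self-normalized weighting scheme, and the toy numbers are illustrative assumptions.

```python
def debiased_mean_effectiveness(scores, p_population, p_sample):
    """Estimate population-mean effectiveness from a biased query sample.

    scores       -- per-query effectiveness values (e.g. AP) for one system
    p_population -- estimated probability of each query under the target
                    query population
    p_sample     -- probability of each query under the biased process that
                    actually selected the experimental query set
    """
    # Importance weight: how under- or over-represented each query is
    # in the sample relative to the population.
    weights = [p_pop / p_smp for p_pop, p_smp in zip(p_population, p_sample)]
    # Self-normalized importance-sampling estimate of the population mean.
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)


if __name__ == "__main__":
    # Toy example (hypothetical values): three queries, the second one
    # over-sampled relative to its population probability.
    ap = [0.40, 0.70, 0.55]
    p_pop = [0.3, 0.3, 0.4]
    p_smp = [0.2, 0.6, 0.2]
    # The unweighted mean (0.55) over-credits the over-sampled query;
    # the weighted estimate down-weights it accordingly.
    print(debiased_mean_effectiveness(ap, p_pop, p_smp))
```

Re-ranking systems by such corrected estimates, rather than by the plain sample mean, is what allows one to measure how much the biased query sample distorted the original system ranking.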