{"title":"Localized Fairness in Recommender Systems","authors":"Nasim Sonboli, R. Burke","doi":"10.1145/3314183.3323845","DOIUrl":null,"url":null,"abstract":"Recent research in fairness in machine learning has identified situations in which biases in input data can cause harmful or unwanted effects. Researchers in the areas of personalization and recommendation have begun to study similar types of bias. What these lines of research share is a fixed representation of the protected groups relative to which bias must be monitored. However, in some real-world application contexts, such groups cannot be defined apriori, but must be derived from the data itself. Furthermore, as we show, it may be insufficient in such cases to examine global system properties to identify protected groups. Thus, we demonstrate that fairness may be local, and the identification of protected groups only possible through consideration of local conditions.","PeriodicalId":240482,"journal":{"name":"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3314183.3323845","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10
Abstract
Recent research in fairness in machine learning has identified situations in which biases in input data can cause harmful or unwanted effects. Researchers in the areas of personalization and recommendation have begun to study similar types of bias. What these lines of research share is a fixed representation of the protected groups relative to which bias must be monitored. However, in some real-world application contexts, such groups cannot be defined a priori, but must be derived from the data itself. Furthermore, as we show, examining global system properties may be insufficient in such cases to identify protected groups. Thus, we demonstrate that fairness may be local, and that the identification of protected groups is only possible through consideration of local conditions.
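To make the global-versus-local distinction concrete, the following is a minimal, hypothetical sketch (not the authors' method): it fabricates recommendation lists in which one user segment rarely sees items from a "protected" provider group, then contrasts the group's exposure measured over all users with its exposure measured per segment. All names and data (item_is_protected, user_segment, the recommend helper) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items = 200, 50
item_is_protected = rng.random(n_items) < 0.3      # items from a hypothetical protected provider group
user_segment = rng.integers(0, 4, size=n_users)    # e.g. clusters of users with similar tastes

def recommend(u, k=10):
    # Fabricated top-k lists: segment 3 is served from a biased item pool
    # that excludes the protected group, the other segments are not.
    if user_segment[u] == 3:
        pool = np.flatnonzero(~item_is_protected)
    else:
        pool = np.arange(n_items)
    return rng.choice(pool, size=k, replace=False)

rec_lists = np.array([recommend(u) for u in range(n_users)])
protected_hits = item_is_protected[rec_lists]       # (n_users, k) boolean

# Global exposure: fraction of all recommended slots given to protected items.
global_exposure = protected_hits.mean()

# Local exposure: the same fraction computed separately within each user segment.
local_exposure = {seg: protected_hits[user_segment == seg].mean() for seg in range(4)}

print(f"global exposure of protected items: {global_exposure:.2f}")
for seg, exp in local_exposure.items():
    print(f"segment {seg}: exposure {exp:.2f}")
```

In this toy setup the global exposure figure can look acceptable while one segment's exposure is near zero, which is the kind of locally concentrated unfairness that the abstract argues global system properties cannot reveal.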