{"title":"Privacy in distributed reputation management","authors":"K. Ylitalo, Yki Kortesniemi","doi":"10.1109/SECCMW.2005.1588297","DOIUrl":null,"url":null,"abstract":"In online communities, the users typically do not meet personally, and, thus, they have to estimate the trustwor- thiness of the other parties using other means. To assist these estimations, various reputation systems have been developed. But collecting the required reputation information, which, essentially, is information about the user's past, also creates privacy con- cerns. In this paper, we examine how the distribution of reputation management using P2P networks deals with the privacy concerns of processing reputation information. We analyze the distributed reputation management from three angles: how the requirements of fair use practices should be reflected on the system design, what classes of information is leaked and, finally, how to manage the risks related to the social and technical issues. I. INTRODUCTION In online communities, people are typically strangers to each other and do not meet face to face. Consequently, estimating the trustworthiness of the other parties is more difficult than in every day life. To assist users in their trust decisions and to reduce the related risks, various reputation systems are being developed. These systems collect reputation information about the the users' past behavior, and have a mechanism to provide trustworthiness estimates based on the information. Characteristically, many of the current online communities manage the reputation information in a centralized manner. One of the most analyzed examples is the eBay's feedback forum (12). In this type of a centralized solution, one benefit is that the trusted third party (in this case: eBay) can play an important role in trust evaluations. In contrast, fully distributed peer-to-peer (P2P) networks have no centralized trusted third parties and the actual interactions happen directly between the peers. The peers, e.g., provide storage capacity to the community and they have to be able to evaluate other peers' trustworthiness on their own. Although the reputation information is useful in trustwor- thiness estimation (33), (23), collecting this information also presents privacy problems. In reputation management, the privacy problems arise when large amounts of the information is easily available and the user can be identified. In particu- lar, the identifiable information enables undesired tracing of the user's past behavior and preferences. And these threats increase along with the current trend of boosting data storage and processing capacity, which allows the possible malicious peers more capacity for monitoring others. In this paper, we examine how the decentralization of reputation management in P2P networks deals with the privacy concerns of processing reputation information. Fundamentally, the reputation information itself has to be public at least within the target community to be of any use. This means that the users, who have agreed to being evaluated by the reputation system, have also willingly given up some of their privacy. However, this should not mean that they automatically give up all privacy. In our study, we examine how to enhance the user's privacy so that only the necessary information is revealed and, thus, unnecessary information is not disclosed to others. 
We argue that even though no extra information is collected for assisting user's trust decision, its uncontrolled availability is a privacy concern as the information is always linked to the user's identifier. To address these issues, we examine distributed reputation management from three complementary angles. Firstly, we examine how the fair information practices proposed, e.g. by the data protection legislation, should be reflected on the design of distributed reputation management. Secondly, we discuss the different types of privacy in the context of distributed reputation management, and finally, we consider how to manage the privacy risks. The results show that protecting the privacy of any one peer is more difficult than preventing the collective misusage of sev- eral peers' reputation information. The possible misbehavior can be made more difficult, but all privacy problems can not be tackled inclusively - a certain loss of privacy is the price the user has to pay for the benefit of gaining reputation.","PeriodicalId":382662,"journal":{"name":"Workshop of the 1st International Conference on Security and Privacy for Emerging Areas in Communication Networks, 2005.","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Workshop of the 1st International Conference on Security and Privacy for Emerging Areas in Communication Networks, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SECCMW.2005.1588297","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In online communities, the users typically do not meet in person and thus have to estimate the trustworthiness of the other parties by other means. To assist these estimations, various reputation systems have been developed. But collecting the required reputation information, which is essentially information about the user's past, also creates privacy concerns. In this paper, we examine how distributing reputation management over P2P networks deals with the privacy concerns of processing reputation information. We analyze distributed reputation management from three angles: how the requirements of fair information practices should be reflected in the system design, what classes of information are leaked, and, finally, how to manage the risks related to the social and technical issues.

I. INTRODUCTION

In online communities, people are typically strangers to each other and do not meet face to face. Consequently, estimating the trustworthiness of the other parties is more difficult than in everyday life. To assist users in their trust decisions and to reduce the related risks, various reputation systems are being developed. These systems collect reputation information about the users' past behavior and have a mechanism to provide trustworthiness estimates based on that information.

Characteristically, many current online communities manage reputation information in a centralized manner. One of the most analyzed examples is eBay's feedback forum (12). One benefit of this type of centralized solution is that the trusted third party (in this case, eBay) can play an important role in trust evaluations. In contrast, fully distributed peer-to-peer (P2P) networks have no centralized trusted third parties, and the actual interactions happen directly between the peers. The peers, e.g., provide storage capacity to the community, and they have to be able to evaluate other peers' trustworthiness on their own.

Although reputation information is useful in trustworthiness estimation (33), (23), collecting this information also presents privacy problems. In reputation management, privacy problems arise when large amounts of the information are easily available and the user can be identified. In particular, identifiable information enables undesired tracing of the user's past behavior and preferences. These threats grow with the current trend of increasing data storage and processing capacity, which gives potentially malicious peers more capacity for monitoring others.

In this paper, we examine how the decentralization of reputation management in P2P networks deals with the privacy concerns of processing reputation information. Fundamentally, the reputation information itself has to be public, at least within the target community, to be of any use. This means that users who have agreed to be evaluated by the reputation system have also willingly given up some of their privacy. However, this should not mean that they automatically give up all privacy. In our study, we examine how to enhance the user's privacy so that only the necessary information is revealed and, thus, unnecessary information is not disclosed to others. We argue that even if no information is collected beyond what is needed to assist the user's trust decisions, its uncontrolled availability is a privacy concern, as the information is always linked to the user's identifier.
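To make this concrete, the following minimal Python sketch (our own illustration, not a mechanism from the paper; all peer names and the rating scale are hypothetical) shows the kind of mechanism such systems rely on: a peer queries other peers for feedback about a target identity and aggregates the ratings into a trustworthiness estimate.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    rater: str    # identifier of the peer giving the feedback
    target: str   # identifier of the peer being rated
    score: float  # -1.0 (negative) .. +1.0 (positive); scale is our assumption

def query_peers(peers, target):
    """Collect all stored feedback about `target` from the queried peers.

    In a real P2P network this would be a network-wide query; here each
    peer is simulated as a local list of ratings it stores.
    """
    return [r for ratings in peers.values() for r in ratings if r.target == target]

def trust_estimate(ratings):
    """Naive trustworthiness estimate: the mean of all scores.

    Deployed systems weight ratings by the rater's own reputation,
    recency, and so on; the plain mean is enough to show the principle.
    """
    return sum(r.score for r in ratings) / len(ratings) if ratings else 0.0

# Hypothetical community: a peer wants to evaluate "bob" before interacting.
peers = {
    "carol": [Rating("carol", "bob", 1.0), Rating("carol", "dave", -1.0)],
    "erin":  [Rating("erin", "bob", 0.5)],
}
history = query_peers(peers, "bob")
print(f"bob: {len(history)} ratings, trust estimate {trust_estimate(history):+.2f}")
```

Note that even in this toy the query returns bob's identifiable rating history, not just an aggregate; any peer willing to store and replay such answers can trace his past behavior, which is exactly the threat discussed above.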
To address these issues, we examine distributed reputation management from three complementary angles. First, we examine how the fair information practices proposed, e.g., by data protection legislation, should be reflected in the design of distributed reputation management. Second, we discuss the different types of privacy in the context of distributed reputation management. Finally, we consider how to manage the privacy risks.

The results show that protecting the privacy of any one peer is more difficult than preventing the collective misuse of several peers' reputation information. Possible misbehavior can be made more difficult, but not all privacy problems can be solved conclusively: a certain loss of privacy is the price the user has to pay for the benefit of gaining reputation.
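As one hypothetical illustration of revealing only the necessary information (again our own sketch, not a mechanism from the paper), a responding peer could answer reputation queries with an aggregate instead of the full history:

```python
def minimal_disclosure(ratings, target):
    """Answer a reputation query with an aggregate only.

    Returns (mean score, rating count) for `target`, withholding the
    individual ratings and the raters' identities. The requester can
    still make a trust decision, but cannot reconstruct the target's
    interaction-by-interaction history from this one answer.
    """
    relevant = [score for (_rater, t, score) in ratings if t == target]
    if not relevant:
        return None  # nothing known about this target
    return (sum(relevant) / len(relevant), len(relevant))

# Hypothetical feedback store held by a responding peer: (rater, target, score).
stored = [("carol", "bob", 1.0), ("erin", "bob", 0.5), ("carol", "dave", -1.0)]
print(minimal_disclosure(stored, "bob"))  # -> (0.75, 2)
```

Even the aggregate remains linked to bob's identifier, though, which mirrors the closing point above: some loss of privacy is inherent in having a reputation at all.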