Privacy in distributed reputation management

K. Ylitalo, Yki Kortesniemi
DOI: 10.1109/SECCMW.2005.1588297
Workshop of the 1st International Conference on Security and Privacy for Emerging Areas in Communication Networks, 2005
Published: 2005-09-05
Citations: 3

Abstract

In online communities, the users typically do not meet in person and thus have to estimate the trustworthiness of the other parties by other means. To assist these estimations, various reputation systems have been developed. But collecting the required reputation information, which is essentially information about the user's past, also creates privacy concerns. In this paper, we examine how distributing reputation management over P2P networks deals with the privacy concerns of processing reputation information. We analyze distributed reputation management from three angles: how the requirements of fair information practices should be reflected in the system design, what classes of information are leaked, and, finally, how to manage the risks related to the social and technical issues.

I. INTRODUCTION

In online communities, people are typically strangers to each other and do not meet face to face. Consequently, estimating the trustworthiness of the other parties is more difficult than in everyday life. To assist users in their trust decisions and to reduce the related risks, various reputation systems are being developed. These systems collect reputation information about the users' past behavior and have a mechanism for providing trustworthiness estimates based on that information. Characteristically, many current online communities manage reputation information in a centralized manner. One of the most analyzed examples is eBay's feedback forum [12]. In this type of centralized solution, one benefit is that the trusted third party (in this case, eBay) can play an important role in trust evaluations. In contrast, fully distributed peer-to-peer (P2P) networks have no centralized trusted third parties, and the actual interactions happen directly between the peers. The peers, e.g., provide storage capacity to the community, and they have to be able to evaluate other peers' trustworthiness on their own.
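The trustworthiness estimate described above can be illustrated as a simple aggregation of past feedback. The sketch below is a toy model for illustration only, not the mechanism analyzed in this paper; the record layout, the age-based decay weighting, and the neutral prior of 0.5 are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    rater: str    # pseudonym of the peer giving the rating
    score: float  # rating in [0, 1]
    age: int      # how many transactions ago the rating was given

def trust_estimate(history: list[Feedback], decay: float = 0.9) -> float:
    """Weighted average of past ratings; newer feedback counts more.

    Toy aggregation rule (an assumption), not the paper's scheme.
    """
    if not history:
        return 0.5  # no information: fall back to a neutral prior
    weights = [decay ** fb.age for fb in history]
    total = sum(w * fb.score for w, fb in zip(weights, history))
    return total / sum(weights)

history = [Feedback("peerA", 1.0, 0), Feedback("peerB", 0.0, 5)]
print(round(trust_estimate(history), 3))  # → 0.629
```

In a centralized setting, one trusted party (e.g., eBay) holds `history` for every user; in the fully distributed setting discussed next, each peer must gather and aggregate such feedback on its own.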
Although reputation information is useful in trustworthiness estimation [33], [23], collecting this information also presents privacy problems. In reputation management, privacy problems arise when large amounts of the information are easily available and the user can be identified. In particular, identifiable information enables undesired tracing of the user's past behavior and preferences. These threats grow with the current trend of increasing data storage and processing capacity, which gives potential malicious peers more capacity for monitoring others.

In this paper, we examine how the decentralization of reputation management in P2P networks deals with the privacy concerns of processing reputation information. Fundamentally, the reputation information itself has to be public, at least within the target community, to be of any use. This means that the users, who have agreed to be evaluated by the reputation system, have also willingly given up some of their privacy. However, this should not mean that they automatically give up all privacy. In our study, we examine how to enhance the user's privacy so that only the necessary information is revealed and, thus, unnecessary information is not disclosed to others. We argue that even though no extra information is collected beyond what assists the user's trust decisions, its uncontrolled availability is a privacy concern, as the information is always linked to the user's identifier.

To address these issues, we examine distributed reputation management from three complementary angles. Firstly, we examine how the fair information practices proposed, e.g., by data protection legislation should be reflected in the design of distributed reputation management. Secondly, we discuss the different types of privacy in the context of distributed reputation management, and finally, we consider how to manage the privacy risks.
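The principle that only the necessary information is revealed can be sketched as follows: a peer answers reputation queries with an aggregate summary rather than its full, identifier-linked history. This is an illustrative sketch under assumed data layout and summary choice, not a design from the paper.

```python
# Toy illustration of disclosure minimization: the full history links the
# user's identifier to every past partner and item, so publishing it enables
# the tracing described above. The field names here are assumptions.
full_history = [
    {"partner": "peerA", "item": "file-123", "score": 1.0},
    {"partner": "peerB", "item": "file-456", "score": 0.0},
    {"partner": "peerC", "item": "file-789", "score": 1.0},
]

def reputation_summary(history):
    """Expose only what a trust decision needs: mean score and sample size.

    Partners and items stay undisclosed, so the querier cannot reconstruct
    the transaction trail from the answer alone.
    """
    scores = [record["score"] for record in history]
    return {"mean_score": sum(scores) / len(scores), "ratings": len(scores)}

print(reputation_summary(full_history))
```

Even such a summary leaks something (it is still linked to the user's identifier, and repeated queries reveal changes over time), which is why the paper argues that some loss of privacy is unavoidable.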
The results show that protecting the privacy of any one peer is more difficult than preventing the collective misuse of several peers' reputation information. The possible misbehavior can be made more difficult, but not all privacy problems can be eliminated: a certain loss of privacy is the price the user has to pay for the benefit of gaining reputation.