Hiccups on the road to privacy-preserving linear programming
Alice Bednarz, N. Bean, M. Roughan. doi:10.1145/1655188.1655207, pp. 117-120.

Linear programming is one of maths' greatest contributions to industry. There are many places where linear programming could be beneficially applied across more than one company, but there is a roadblock: companies have secrets. The data needed for joint optimization may need to be kept private, either because of concerns about leaking competitively sensitive data or because of privacy legislation. Recent research has tackled the problem of privacy-preserving linear programming. One appealing group of approaches uses a 'disguising' transformation to allow one party to perform the joint optimization without seeing the secret data of the other parties. These approaches are attractive for their simplicity, efficiency, and flexibility, but we show here that all of the existing transformations have a critical flaw.
{"title":"Hiccups on the road to privacy-preserving linear programming","authors":"Alice Bednarz, N. Bean, M. Roughan","doi":"10.1145/1655188.1655207","DOIUrl":"https://doi.org/10.1145/1655188.1655207","url":null,"abstract":"Linear programming is one of maths' greatest contributions to industry. There are many places where linear programming could be beneficially applied across more than one company, but there is a roadblock. Companies have secrets. The data needed for joint optimization may need to be kept private either through concerns about leaking competitively sensitive data, or due to privacy legislation.\u0000 Recent research has tackled the problem of privacy-preserving linear programming. One appealing group of approaches uses a 'disguising' transformation to allow one party to perform the joint optimization without seeing the secret data of the other parties. These approaches are very appealing from the point of view of simplicity, efficiency, and flexibility, but we show here that all of the existing transformations have a critical flaw.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"72 1","pages":"117-120"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88171784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Plinko: polling with a physical implementation of a noisy channel
Chris Alexander, Joel Reardon, I. Goldberg. doi:10.1145/1655188.1655205, pp. 109-112.

We give a practical polling protocol that is immune to tampering by either the pollster or the responder. It preserves responders' privacy in the manner of Warner's Randomized Response Technique, is easily understood without any knowledge of cryptography, and requires no computers or other electronics. The key is to use physical noisy channels commonly found in lottery and game-show settings, which deliver the desired properties without relying on mechanisms unfamiliar to the responder.
{"title":"Plinko: polling with a physical implementation of a noisy channel","authors":"Chris Alexander, Joel Reardon, I. Goldberg","doi":"10.1145/1655188.1655205","DOIUrl":"https://doi.org/10.1145/1655188.1655205","url":null,"abstract":"We give a practical polling protocol that is immune to tampering by either the pollster or the responder. It preserves responders' privacy in the manner of Warner's Randomized Response Technique, is easily understood without any knowledge of cryptography, and does not require the use of computers or other electronics. The key is to use physical noisy channels commonly found in lottery or game-show settings, which can deliver the desired properties without relying on a mechanism which is unfamiliar to the responder.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"15 1","pages":"109-112"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89610261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Faking contextual data for fun, profit, and privacy
Richard Chow, P. Golle. doi:10.1145/1655188.1655204, pp. 105-108.

The amount of contextual data collected, stored, mined, and shared is increasing exponentially. Street cameras, credit card transactions, chat and Twitter logs, e-mail, web site visits, phone logs and recordings, social networking sites: all are examples of data that persists in a manner not under individual control, leading some to declare the death of privacy. We argue here that the ability to generate convincing fake contextual data can be a basic tool in the fight to preserve privacy. One use for the technology is for an individual to conceal his actual data within a pile of indistinguishable false data. In this paper we consider two examples of contextual data: search engine query data and location data. We describe the current state of faking these types of data and our own efforts in this direction.
{"title":"Faking contextual data for fun, profit, and privacy","authors":"Richard Chow, P. Golle","doi":"10.1145/1655188.1655204","DOIUrl":"https://doi.org/10.1145/1655188.1655204","url":null,"abstract":"The amount of contextual data collected, stored, mined, and shared is increasing exponentially. Street cameras, credit card transactions, chat and Twitter logs, e-mail, web site visits, phone logs and recordings, social networking sites, all are examples of data that persists in a manner not under individual control, leading some to declare the death of privacy. We argue here that the ability to generate convincing fake contextual data can be a basic tool in the fight to preserve privacy. One use for the technology is for an individual to make his actual data indistinguishable amongst a pile of false data.\u0000 In this paper we consider two examples of contextual data, search engine query data and location data. We describe the current state of faking these types of data and our own efforts in this direction.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"28 1","pages":"105-108"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77987569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A distortion-based metric for location privacy
R. Shokri, Julien Freudiger, Murtuza Jadliwala, J. Hubaux. doi:10.1145/1655188.1655192, pp. 21-30.

We propose a novel framework for measuring and evaluating location-privacy-preserving mechanisms in mobile wireless networks. Within this framework, we first present a formal model of the system that efficiently represents the network users, the adversaries, the location-privacy-preserving mechanisms, and the users' resulting location privacy. The model is general enough to accurately express and analyze a variety of previously proposed location privacy metrics. Using it, we give formal representations of four metrics drawn from the most relevant categories of location privacy metrics, and we present a detailed comparative analysis of these metrics against a set of criteria for location privacy measurement. Finally, we propose a novel and effective metric, the distortion-based metric, which satisfies these criteria and captures mobile users' location privacy more precisely than existing metrics: it estimates location privacy as the expected distortion in an adversary's reconstruction of the users' trajectories.
{"title":"A distortion-based metric for location privacy","authors":"R. Shokri, Julien Freudiger, Murtuza Jadliwala, J. Hubaux","doi":"10.1145/1655188.1655192","DOIUrl":"https://doi.org/10.1145/1655188.1655192","url":null,"abstract":"We propose a novel framework for measuring and evaluating location privacy preserving mechanisms in mobile wireless networks. Within this framework, we first present a formal model of the system, which provides an efficient representation of the network users, the adversaries, the location privacy preserving mechanisms and the resulting location privacy of the users. This model is general enough to accurately express and analyze a variety of location privacy metrics that were proposed earlier. By using the proposed model, we provide formal representations of four metrics among the most relevant categories of location privacy metrics. We also present a detailed comparative analysis of these metrics based on a set of criteria for location privacy measurement. Finally, we propose a novel and effective metric for measuring location privacy, called the distortion-based metric, which satisfies these criteria for privacy measurement and is capable of capturing the mobile users' location privacy more precisely than the existing metrics. Our metric estimates location privacy as the expected distortion in the reconstructed users' trajectories by an adversary.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"24 1","pages":"21-30"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73146102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hashing it out in public: common failure modes of DHT-based anonymity schemes
Andrew Tran, Nicholas Hopper, Yongdae Kim. doi:10.1145/1655188.1655199, pp. 71-80.

We examine peer-to-peer anonymous communication systems that use Distributed Hash Table (DHT) algorithms for relay selection. We show that common design flaws in these schemes lead to highly effective attacks against the anonymity they provide. These attacks stem from attacks on DHT routing, and they are not mitigated by the well-known DHT security mechanisms, owing to a fundamental mismatch between the security requirements of DHT routing's put/get functionality and those of anonymous routing's relay selection. Our attacks essentially allow an adversary that controls only a small fraction of the relays to function as a global active adversary. We apply the attacks in detail to two schemes: for Salsa, we show that an attacker controlling 10% of the relays in a network of 10,000 can compromise more than 80% of all completed circuits; for Cashmere, an attacker controlling 20% of the relays in a network of 64,000 can compromise 42% of the circuits.
{"title":"Hashing it out in public: common failure modes of DHT-based anonymity schemes","authors":"Andrew Tran, Nicholas Hopper, Yongdae Kim","doi":"10.1145/1655188.1655199","DOIUrl":"https://doi.org/10.1145/1655188.1655199","url":null,"abstract":"We examine peer-to-peer anonymous communication systems that use Distributed Hash Table algorithms for relay selection. We show that common design flaws in these schemes lead to highly effective attacks against the anonymity provided by the schemes. These attacks stem from attacks on DHT routing, and are not mitigated by the well-known DHT security mechanisms due to a fundamental mismatch between the security requirements of DHT routing's put/get functionality and anonymous routing's relay selection functionality. Our attacks essentially allow an adversary that controls only a small fraction of the relays to function as a global active adversary. We apply these attacks in more detail to two schemes: Salsa and Cashmere. In the case of Salsa, we show that an attacker that controls 10% of the relays in a network of size 10,000 can compromise more than 80% of all completed circuits; and in the case of Cashmere, we show that an attacker that controls 20% of the relays in a network of size 64000 can compromise 42% of the circuits.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"17 1","pages":"71-80"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81336824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effects of introspection on creating privacy policy
Stephanie Trudeau, S. Sinclair, Sean W. Smith. doi:10.1145/1655188.1655190, pp. 1-10.

Prior work in psychology shows that introspection inhibits intuition: asking people to analyze the judgments they make can cause them to be quantitatively worse at making those judgments. In this paper, we explore whether this seemingly paradoxical phenomenon also occurs when humans craft privacy policies for a Facebook-like social network. Our study presents empirical evidence suggesting that the act of introspecting on one's personal security policy actually makes one worse at making policy decisions; if the aim is to reduce privacy spills, the data indicate that educating users before letting them set their privacy policies may actually increase the exposure of private information.
{"title":"The effects of introspection on creating privacy policy","authors":"Stephanie Trudeau, S. Sinclair, Sean W. Smith","doi":"10.1145/1655188.1655190","DOIUrl":"https://doi.org/10.1145/1655188.1655190","url":null,"abstract":"Prior work in psychology shows that introspection inhibits intuition: asking human users to analyze judgements they make can cause them to be quantitatively worse at making those judgments. In this paper, we explore whether this seemingly contradictory phenomenon also occurs when humans craft privacy policies for a Facebook-like social network. Our study presents empirical evidence that suggests the act of introspecting upon one's personal security policy actually makes one worse at making policy decisions; if one aims to reduce privacy spills, the data indicate that educating users before letting them set their privacy policies may actually increase the exposure of private information.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"122 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74190942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
XPay: practical anonymous payments for Tor routing and other networked services
Yao Chen, R. Sion, Bogdan Carbunar. doi:10.1145/1655188.1655195, pp. 41-50.

We design and analyze the first practical anonymous payment mechanisms for network services. We start by reporting on our experience implementing a routing micropayment solution for Tor. We then propose micropayment protocols for networked services with increasingly complex requirements, such as P2P or cloud-hosted services. The solutions are efficient, with bandwidth and latency overheads of under 4% and 0.9 ms respectively (in ORPay for Tor); they provide full anonymity for both payers and payees, and support thousands of transactions per second.
{"title":"XPay: practical anonymous payments for tor routing and other networked services","authors":"Yao Chen, R. Sion, Bogdan Carbunar","doi":"10.1145/1655188.1655195","DOIUrl":"https://doi.org/10.1145/1655188.1655195","url":null,"abstract":"We design and analyze the first practical anonymous payment mechanisms for network services. We start by reporting on our experience with the implementation of a routing micropayment solution for Tor. We then propose micropayment protocols of increasingly complex requirements for networked services, such as P2P or cloud-hosted services.\u0000 The solutions are efficient, with bandwidth and latency overheads of under 4% and 0.9 ms respectively (in ORPay for Tor), provide full anonymity (both for payers and payees), and support thousands of transactions per second.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"23 1","pages":"41-50"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75259745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A verifiable, centralized, coercion-free reputation system
F. Kerschbaum. doi:10.1145/1655188.1655197, pp. 61-70.

Reputation systems are popular tools for evaluating the trustworthiness of an unknown party before a transaction, but the reputation score can greatly impact the rated subject, which might therefore be inclined to suppress negative ratings. In order to elicit coercion-resistant, honest feedback, this paper proposes a reputation system that provides complete privacy of the ratings: neither the ratee nor the reputation system learns the value of a rating. We take both a cryptographic and a non-cryptographic approach to the problem. Privacy of ratings may invite bad-mouthing attacks, in which an attacker intentionally leaves bad feedback; we limit this attack with a token system that allows feedback only after a transaction, and we provide a cryptographic proof of the privacy of our system. We also consider the Virtual Organization formation problem and develop and evaluate a novel reputation aggregation algorithm for it.
{"title":"A verifiable, centralized, coercion-free reputation system","authors":"F. Kerschbaum","doi":"10.1145/1655188.1655197","DOIUrl":"https://doi.org/10.1145/1655188.1655197","url":null,"abstract":"Reputation systems are popular tools to evaluate the trustworthiness of an unknown party before a transaction, but the reputation score can greatly impact the rated subject, such that it might be inclined to suppress negative ratings. In order to elicit coercion-resistant, honest feedback, this paper proposes a reputation system that provides complete privacy of the ratings, i.e. neither the ratee nor the reputation system will learn the value of the rating. We take both, a cryptographic as well as a non-cryptographic approach, to the problem. Privacy of ratings may foster bad mouthing attacks where an attacker leaves intentionally bad feedback. We limit the possibility for this attack by providing a token system such that one can only leave feedback after a transaction, and provide a cryptographic proof of the privacy of our system. We consider the Virtual Organization formation problem and develop and evaluate a novel reputation aggregation algorithm for it.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"26 1","pages":"61-70"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76467294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Longest common subsequence as private search
Mark A. Gondree, Payman Mohassel. doi:10.1145/1655188.1655200, pp. 81-90.

At STOC 2006 and CRYPTO 2007, Beimel et al. introduced a set of privacy requirements for algorithms that solve search problems. In this paper, we consider the longest common subsequence (LCS) problem as a private search problem, where the task is to find a string of (or an embedding corresponding to) an LCS. We show that deterministic selection strategies do not meet the privacy guarantees considered for private search problems and, in fact, may "leak" an amount of information proportional to the entire input. We then put forth and investigate several privacy structures for the LCS problem and design new and efficient output-sampling and equivalence-protecting algorithms that provably meet the corresponding privacy notions. Along the way, we also provide output-sampling and equivalence-protecting algorithms for finite regular languages, which may be of independent interest.
{"title":"Longest common subsequence as private search","authors":"Mark A. Gondree, Payman Mohassel","doi":"10.1145/1655188.1655200","DOIUrl":"https://doi.org/10.1145/1655188.1655200","url":null,"abstract":"At STOC 2006 and CRYPTO 2007, Beimel et. al. introduced a set of privacy requirements for algorithms that solve search problems. In this paper, we consider the longest common subsequence (LCS) problem as a private search problem, where the task is to find a string of (or embedding corresponding to) an LCS. We show that deterministic selection strategies do not meet the privacy guarantees considered for private search problems and, in fact, may \"leak\" an amount of information proportional to the entire input.\u0000 We then put forth and investigate several privacy structures for the LCS problem and design new and efficient output sampling and equivalence protecting algorithms that provably meet the corresponding privacy notions. Along the way, we also provide output sampling and equivalence protecting algorithms for finite regular languages, which may be of independent interest.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"122 1","pages":"81-90"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80984461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enforcing purpose of use via workflows
Mohammad Jafari, R. Safavi-Naini, N. Sheppard. doi:10.1145/1655188.1655206, pp. 113-116.

One of the main privacy concerns of users when submitting their data to an organization is whether their data will be used only for the specified purposes. Although privacy policies can specify the purpose, enforcing such policies remains a challenge. In this paper we propose an approach to enforcing purpose in access control systems that uses workflows. The intuition behind this approach is that the purpose of an access can be inferred from, and hence associated with, the workflow in which the access takes place. We thus propose to encode purposes as properties of the workflows used by organizations and show how this can be implemented. The approach is more general than other known approaches to purpose-based enforcement, and can be used to implement them. We argue the advantages of the new approach in terms of accuracy and expressiveness.
{"title":"Enforcing purpose of use via workflows","authors":"Mohammad Jafari, R. Safavi-Naini, N. Sheppard","doi":"10.1145/1655188.1655206","DOIUrl":"https://doi.org/10.1145/1655188.1655206","url":null,"abstract":"One of the main privacy concerns of users when submitting their data to an organization is that their data will be used only for the specified purposes. Although privacy policies can specify the purpose, enforcing such policies remains a challenge. In this paper we propose an approach to enforcing purpose in access control systems that uses workflows. The intuition behind this approach is that purpose of access can be inferred, and hence associated with, the workflow in which the access takes place. We thus propose to encode purposes as properties of workflows used by organizations and show how this can be implemented. The approach is more general than other known approaches to purpose-based enforcement, and can be used to implement them. We argue the advantages of the new approach in terms of accuracy and expressiveness.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"1 1","pages":"113-116"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82086144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}