Anonymous Internet access has been researched extensively, and many proposals exist for enhancing the privacy of users. However, there are vast numbers of legacy authentication systems that do not take the privacy of users into consideration. Many networks use, for example, MAC-address- or IP-address-based authentication, despite their limited security properties. These authentication systems hinder the use of, for example, pseudorandom MAC addresses for privacy protection. In this paper, we propose a privacy management system for the layers below the transport layer in the IP stack. Our implementation allows users to choose their privacy parameters depending on their current situation. The implementation uses the Host Identity Protocol to provide authenticated and secure seamless handovers for mobile nodes. The approach is also applicable to an IP stack without the Host Identity Protocol.
{"title":"Privacy management for secure mobility","authors":"J. Lindqvist, Laura Takkinen","doi":"10.1145/1179601.1179612","DOIUrl":"https://doi.org/10.1145/1179601.1179612","url":null,"abstract":"Anonymous Internet access has been researched extensively and many proposals exist for enhancing the privacy of users. However, there are vast amounts of legacy authentication systems that do not take the privacy of the users into consideration. Many networks use, for example, MAC address or IP address based authentication, despite of their limited security properties. These authentication systems hinder the possibility to use e.g. pseurandom MAC addresses for privacy protection. In this paper, we propose a privacy management system for layers below the transport layer in the IP stack. Our implementation allows the users to decide their privacy parameters depending on their current situation. The implementation uses the Host Identity Protocol to provide authenticated and secure seamless handovers for mobile nodes. The approach is also applicable to an IP stack without the Host Identity Protocol.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"10 1","pages":"63-66"},"PeriodicalIF":0.0,"publicationDate":"2006-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90633032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sometimes it is necessary to remove author names and other personally identifiable information (PII) from documents before publication. We have implemented a novel defensive tool for detecting such data automatically. By using the detection tool, we have learned where PII may be stored in documents and how it gets there. A key observation is that, contrary to common belief, user and machine identifiers and other metadata are embedded in documents not only by a single piece of software, such as a word processor, but by various tools used at different stages of the document authoring process.
{"title":"Scanning electronic documents for personally identifiable information","authors":"T. Aura, T. A. Kuhn, M. Roe","doi":"10.1145/1179601.1179608","DOIUrl":"https://doi.org/10.1145/1179601.1179608","url":null,"abstract":"Sometimes, it is necessary to remove author names and other personally identifiable information (PII) from documents before publication. We have implemented a novel defensive tool for detecting such data automatically. By using the detection tool, we have learned about where PII may be stored in documents and how it is put there. A key observation is that, contrary to common belief, user and machine identifiers and other metadata are not embedded in documents only by a single piece of software, such as a word processor, but by various tools used at different stages of the document authoring process.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"14 3 1","pages":"41-50"},"PeriodicalIF":0.0,"publicationDate":"2006-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78182444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present Scratch & Vote (S&V), a cryptographic voting system designed to minimize cost and complexity: (1) ballots are paper-based and can be printed using today's technology, (2) ballots are universally verifiable without election-official intervention, and (3) tallying requires only one trustee decryption per race, thanks to homomorphic aggregation. Scratch & Vote combines the multi-candidate election techniques of Baudron et al. with the ballot-casting simplicity of Chaum and Ryan's paper-based techniques. In addition, S&V allows each voter to participate directly in the audit process on election day, prior to casting their own ballot.
{"title":"Scratch & vote: self-contained paper-based cryptographic voting","authors":"B. Adida, R. Rivest","doi":"10.1145/1179601.1179607","DOIUrl":"https://doi.org/10.1145/1179601.1179607","url":null,"abstract":"We present Scratch & Vote; (S&V), a cryptographic voting system designed to minimize cost and complexity: (1) ballots are paper-based and can be printed using today's technology, (2) ballots are universally verifiable without electionofficial intervention, and (3) tallying requires only one trustee decryption per race, thanks to homomorphic aggregation. Scratch & Vote combines the multi-candidate election techniques of Baudron et al. with the ballot-casting simplicity of Chaum and Ryan's paper-based techniques. In addition, S&V allows each voter to participate directly in the audit process on election day, prior; to casting their own ballot.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"18 1","pages":"29-40"},"PeriodicalIF":0.0,"publicationDate":"2006-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82002544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many applications of mix networks, such as anonymous Web browsing, require relationship anonymity: it should be hard for the attacker to determine who is communicating with whom. Conventional methods for measuring anonymity, however, focus on sender anonymity instead. Sender anonymity guarantees that it is difficult for the attacker to determine the origin of any given message exiting the mix network, but this may not be sufficient to ensure relationship anonymity. Even if the attacker cannot identify the origin of messages arriving at some destination, relationship anonymity will fail if he can determine with high probability that at least one of the messages originated from a particular sender, without necessarily being able to recognize this message among others. We give a formal definition and a calculation methodology for relationship anonymity. Our techniques are similar to those used for sender anonymity, but, unlike sender anonymity, relationship anonymity is sensitive to the distribution of message destinations. In particular, Zipfian distributions with skew values characteristic of Web browsing provide especially poor relationship anonymity. Our methodology takes route selection algorithms into account and incorporates information-theoretic metrics such as entropy and min-entropy. We illustrate our methodology by calculating relationship anonymity in several simulated mix networks.
{"title":"Measuring relationship anonymity in mix networks","authors":"Vitaly Shmatikov, Ming-Hsiu Wang","doi":"10.1145/1179601.1179611","DOIUrl":"https://doi.org/10.1145/1179601.1179611","url":null,"abstract":"Many applications of mix networks such as anonymousWeb browsing require relationship anonymity: it should be hard for the attacker to determine who is communicating with whom. Conventional methods for measuring anonymity, however, focus on sender anonymity instead. Sender anonymity guarantees that it is difficult for the attacker to determine the origin of any given message exiting the mix network, but this may not be sufficient to ensure relationship anonymity. Even if the attacker cannot identify the origin of messages arriving to some destination, relationship anonymity will fail if he can determine with high probability that at least one of the messages originated from a particular sender, without necessarily being able to recognize this message among others. We give a formal definition and a calculation methodology for relationship anonymity. Our techniques are similar to those used for sender anonymity, but, unlike sender anonymity, relationship anonymity is sensitive to the distribution of message destinations. In particular, Zipfian distributions with skew values characteristic of Web browsing provide especially poor relationship anonymity. Our methodology takes route selection algorithms into account, and incorporates information-theoretic metrics such as entropy and min-entropy. We illustrate our methodology by calculating relationship anonymity in several simulated mix networks.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"29 1","pages":"59-62"},"PeriodicalIF":0.0,"publicationDate":"2006-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85211489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Connections in distributed systems, such as social networks, online communities or peer-to-peer networks, form complex graphs. These graphs are of interest to scientists in fields as varied as marketing, epidemiology and psychology. However, knowledge of the graph is typically distributed among a large number of subjects, each of whom knows only a small piece of the graph. Efforts to assemble these pieces often fail because of privacy concerns: subjects refuse to share their local knowledge of the graph. To assuage these privacy concerns, we propose reconstructing the whole graph privately, i.e., in a way that hides the correspondence between the nodes and edges in the graph and the real-life entities and relationships that they represent. We first model the privacy threats posed by the private reconstruction of a distributed graph. Our model takes into account the possibility that malicious nodes may report incorrect information about the graph in order to facilitate later attempts to de-anonymize the reconstructed graph. We then propose protocols to privately assemble the pieces of a graph in ways that mitigate these threats. These protocols severely restrict the ability of adversaries to compromise the privacy of honest subjects.
{"title":"Private social network analysis: how to assemble pieces of a graph privately","authors":"Keith B. Frikken, P. Golle","doi":"10.1145/1179601.1179619","DOIUrl":"https://doi.org/10.1145/1179601.1179619","url":null,"abstract":"Connections in distributed systems, such as social networks, online communities or peer-to-peer networks, form complex graphs. These graphs are of interest to scientists in fields as varied as marketing, epidemiology and psychology. However, knowledge of the graph is typically distributed among a large number of subjects, each of whom knows only a small piece of the graph. Efforts to assemble these pieces often fail because of privacy concerns: subjects refuse to share their local knowledge of the graph. To assuage these privacy concerns, we propose reconstructing the whole graph privately, i.e., in a way that hides the correspondence between the nodes and edges in the graph and the real-life entities and relationships that they represent. We first model the privacy threats posed by the private reconstruction of a distributed graph. Our model takes into account the possibility that malicious nodes may report incorrect information about the graph in order to facilitate later attempts to de-anonymize the reconstructed graph. We then propose protocols to privately assemble the pieces of a graph in ways that mitigate these threats. These protocols severely restrict the ability of adversaries to compromise the privacy of honest subjects.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"5 1","pages":"89-98"},"PeriodicalIF":0.0,"publicationDate":"2006-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88205095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated Identity Management (FIM) allows for securely provisioning certified user identities and attributes to relying parties. It establishes higher security and data quality than user-asserted attributes and provides stronger user privacy protection than technologies based upon user-side attribute certificates. Therefore, industry pursues the deployment of FIM solutions as one cornerstone of the WS-Security framework. Current research proposes even more powerful methods for security and privacy protection in identity management with so-called anonymous credential systems. Based on new, yet well-researched, signature schemes and cryptographic zero-knowledge proofs, these systems have the potential to improve the capabilities of FIM through superior privacy protection, user control, and multiple use of single credentials. Unfortunately, anonymous credential systems, whose semantics are based upon zero-knowledge proofs, are incompatible with the XML Signature Standard, which is the basis for WS-Security and most FIM frameworks. We put forth a general construction for integrating anonymous credential systems with the XML Signature Standard and FIM protocols. We apply this method to the WS-Security protocol framework and thus obtain a very flexible WS-Federation Active Requestor Profile with strong user control and superior privacy protection.
{"title":"Enhancing privacy of federated identity management protocols: anonymous credentials in WS-security","authors":"J. Camenisch, Thomas Gross, Dieter Sommer","doi":"10.1145/1179601.1179613","DOIUrl":"https://doi.org/10.1145/1179601.1179613","url":null,"abstract":"Federated Identity Management (FIM) allows for securely provisioning certified user identities and attributes to relying parties. It establishes higher security and data quality compared to user-asserted attributes and provides for stronger user privacy protection than technologies based upon user-side attribute certificates. Therefore, industry pursues the deployment of FIM solutions as one cornerstone of the WS-Security framework. Current research proposes even more powerful methods for security and privacy protection in identity management with so called anonymous credential systems. Being based on new, yet well-researched, signature schemes and cryptographic zero-knowledge proofs, these systems have the potential to improve the capabilities of FIM by superior privacy protection, user control, and multiple use of single credentials. Unfortunately, anonymous credential systems and their semantics being based upon zero-knowledge proofs are incompatible with the XML Signature Standard which is the basis for the WS-Security and most FIM frameworks. We put forth a general construction for integrating anonymous credential systems with the XML Signature Standard and FIM protocols. We apply this method to the WS-Security protocol framework and thus obtain a very flexible WS-Federation Active Requestor Profile with strong user control and superior privacy protection.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"14 1","pages":"67-72"},"PeriodicalIF":0.0,"publicationDate":"2006-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75281561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Protecting privacy means ensuring that access to users' personal data complies with their preferences. However, information can be manipulated to derive new objects that may disclose part of the original information. Therefore, control of information flow is necessary for guaranteeing privacy protection, since users should know and control not only who accesses their personal data, but also who accesses information derived from their data. Current approaches to access control, however, do not support managing the propagation of information or representing user preferences.

This paper proposes to extend the Flexible Authorization Framework (FAF) to automatically verify whether a subject is entitled to process personal data, and to derive the authorizations associated with the outcome of data processing. To control information flow, users may specify the range of authorizations that can be associated with objects derived from their data. The framework guarantees that every "valid" derived object does not disclose more information than users want and preserves the permissions that users want to maintain. To make the discussion more concrete, we illustrate the proposal with a bank case study.
{"title":"Maintaining privacy on derived objects","authors":"Nicola Zannone, S. Jajodia, F. Massacci, D. Wijesekera","doi":"10.1145/1102199.1102202","DOIUrl":"https://doi.org/10.1145/1102199.1102202","url":null,"abstract":"Protecting privacy means to ensure users that access to their personal data complies with their preferences. However, information can be manipulated in order to derive new objects that may disclose part of the original information. Therefore, control of information flow is necessary for guaranteeing privacy protection since users should know and control not only who access their personal data, but also who access information derived from their data. Actually, current approaches for access control do not provide support for managing propagation of information and for representing user preferences.This paper proposes to extend the Flexible Authorization Framework (FAF) in order to automatically verify whether a subject is entitled to process personal data and derive the authorizations associated with the outcome of data processing. In order to control information flow, users may specify the range of authorizations that can be associated with objects derived from their data. The framework guarantees that every \"valid\" derived object does not disclose more information than users want and preserves the permissions that users want to maintain. To make the discussion more concrete, we illustrate the proposal with a bank case study.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"21 1","pages":"10-19"},"PeriodicalIF":0.0,"publicationDate":"2005-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81648347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper examines a generalization of a two-stage game common on eBay: an ascending-price auction followed by price discrimination (the second-chance offer). High bids in the auction lead to high price offers during price discrimination, and hence a financial disadvantage in the second stage. The disadvantage depends on (a) the amount of information revealed to the seller in the first stage, and hence the extent of privacy protection provided, and (b) whether the bidder is non-strategic (ignores the possibility of price discrimination) or rational. A privacy cost of one mechanism over another is defined and studied.

For the non-strategic bidder, the second-chance offer provides a zero payoff. Adding privacy protection (anonymity and bid secrecy) decreases revenue and increases expected payoff, with higher bidders benefiting more. Privacy protection can, however, decrease an individual bidder's payoff by shielding potential buyers from the seller and thus causing an opportunity loss.

If the bidder is rational, price discrimination results in lower revenue than consecutive auctions and is a bad strategy for the seller. Additionally, rational behavior provides more of an advantage to the bidder than does anonymity protection.
{"title":"The privacy cost of the second-chance offer","authors":"Sumit Joshi, Yu-An Sun, P. Vora","doi":"10.1145/1102199.1102218","DOIUrl":"https://doi.org/10.1145/1102199.1102218","url":null,"abstract":"This paper examines a generalization of a two-stage game common on eBay: an ascending-price auction followed by price discrimination (the second chance offer). High bids in the auction lead to high price offers during price discrimination, and a financial disadvantage in the second stage. The disadvantage depends on (a) the amount of information revealed to the seller in the first stage, and hence the extent of privacy protection provided and (b) whether the bidder is non-strategic (ignores the possibility of price discrimination) or rational. A privacy cost of one mechanism over another is defined and studied.For the non-strategic bidder, the second chance offer provides a zero payoff. Addition of privacy protection (anonymity and bid secrecy) decreases revenue and increases expected payoff, with higher bidders benefiting more. Privacy protection can, however, decrease an individual bidder's payoff by shielding potential buyers from the seller and thus causing an opportunity loss.If the bidder is rational, price discrimination results in a lower revenue than consecutive auctions, and is a bad strategy for the seller. Additionally, rational behavior provides more advantage to the bidder than does anonymity protection.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"30 1","pages":"97-106"},"PeriodicalIF":0.0,"publicationDate":"2005-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87483616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One fundamental aspect of user privacy is respecting the privacy preferences that users have. A clear prerequisite to doing this is accurately gauging what users' privacy preferences are. Current approaches either offer limited privacy options or have so many choices that users are likely to be overwhelmed. We present a framework for modeling user privacy preferences in terms of a hierarchy of questions which can be asked. We describe two means of dynamically choosing which questions should be asked to efficiently determine a user's privacy preferences.
{"title":"Determining user privacy preferences by asking the right questions: an automated approach","authors":"Keith Irwin, Ting Yu","doi":"10.1145/1102199.1102209","DOIUrl":"https://doi.org/10.1145/1102199.1102209","url":null,"abstract":"One fundamental aspect of user privacy is to respect the privacy preferences that users have. A clear prerequisite to doing this is accurately gauging what user's privacy preferences are. Current approaches either offer limited privacy options or have so many choices that users are likely to be overwhelmed. We present a framework for modeling user privacy preferences in terms of a hierarchy of questions which can be asked. We describe two means of dynamically choosing which questions should be asked to efficiently determine what a user's privacy preferences are.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"6 1","pages":"47-50"},"PeriodicalIF":0.0,"publicationDate":"2005-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75021607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radio Frequency Identification (RFID) technology raises significant privacy issues because it enables tracking of items and people possibly without their knowledge or consent. One of the biggest challenges for RFID technology is to provide privacy protection without raising tag production and management cost. We introduce a new architecture that uses trusted computing primitives to solve this problem. Our design splits the RFID reader into three software modules: a Reader Core with basic functionality, a Policy Engine that controls the use of RFID-derived data, and a Consumer Agent that performs privacy audits on the RFID reader and exports audit results to third party auditors. Readers use remote attestation to prove they are running a specific Reader Core, Policy Engine, and Consumer Agent. As a result, remote attestation allows concerned individuals to verify that RFID readers comply with privacy regulations, while also allowing the reader owner to verify that the reader has not been compromised.

Furthermore, industry standards bodies have suggested several mechanisms to protect privacy in which authorized readers use a shared secret to authenticate themselves to the tag. These standards have not fully addressed issues of key management. First, how is the shared secret securely provided to the legitimate reader? Second, how do we guarantee that the reader will comply with a specific privacy policy? We show how, with remote attestation, the key-issuing authority can demand such a proof before releasing shared secrets to the reader. We also show how sealed storage can protect secrets even if the reader is compromised. Finally, we sketch how our design could be implemented today using existing RFID reader hardware.
{"title":"Privacy for RFID through trusted computing","authors":"D. Molnar, A. Soppera, D. Wagner","doi":"10.1145/1102199.1102206","DOIUrl":"https://doi.org/10.1145/1102199.1102206","url":null,"abstract":"Radio Frequency Identification (RFID) technology raises significant privacy issues because it enables tracking of items and people possibly without their knowledge or consent. One of the biggest challenges for RFID technology is to provide privacy protection without raising tag production and management cost. We introduce a new architecture that uses trusted computing primitives to solve this problem. Our design splits the RFID reader into three software modules: a Reader Core with basic functionality, a Policy Engine that controls the use of RFID-derived data, and a Consumer Agent that performs privacy audits on the RFID reader and exports audit results to third party auditors. Readers use remote attestation to prove they are running a specific Reader Core, Policy Engine, and Consumer Agent. As a result, remote attestation allows concerned individuals to verify that RFID readers comply with privacy regulations, while also allowing the reader owner to verify that the reader has not been compromised.Furthermore, industry standards bodies have suggested several mechanisms to protect privacy in which authorized readers use a shared secret to authenticate themselves to the tag. These standards have not fully addressed issues of key management. First, how is the shared secret securely provided to the legitimate reader? Second, how do we guarantee that the reader will comply with a specific privacy policy? We show how, with remote attestation, the key-issuing authority can demand such a proof before releasing shared secrets to the reader. We also show how sealed storage can protect secrets even if the reader is compromised. Finally, we sketch how our design could be implemented today using existing RFID reader hardware.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"117 1","pages":"31-34"},"PeriodicalIF":0.0,"publicationDate":"2005-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79865238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}