Storage of personal information by service providers risks privacy loss from data breaches. Our prior work on minimal disclosure credentials presented a mechanism to control the dissemination of personal information. In that work, personal data was broken into individual claims, which can be released in arbitrary subsets while still being cryptographically verifiable. In applying that work, we encountered the problem of connections between claims, which manifest as disclosure dependencies. In this work, we present an efficient way to provide minimal disclosure, but with cryptographic enforcement of dependencies between claims, as specified by the claims certifier. This provides a mechanism for redactable signatures on data with disclosure dependencies. We show that an implementation of our scheme can verify thousands of dependent claims in tens of milliseconds. We also describe ongoing work in which the approach is being used within a larger system for dispensing personal health records.
{"title":"Redactable signatures on data with dependencies and their application to personal health records","authors":"David Bauer, D. Blough, A. Mohan","doi":"10.1145/1655188.1655201","DOIUrl":"https://doi.org/10.1145/1655188.1655201","url":null,"abstract":"Storage of personal information by service providers risks privacy loss from data breaches. Our prior work on minimal disclosure credentials presented a mechanism to control the dissemination of personal information. In that work, personal data was broken into individual claims, which can be released in arbitrary subsets while still being cryptographically verifiable. In applying that work, we encountered the problem of connections between claims, which manifest as disclosure dependencies. In this work, we provide an efficient way to provide minimal disclosure, but with cryptographic enforcement of dependencies between claims, as specified by the claims certifier. This provides a mechanism for redactable signatures on data with disclosure dependencies. We show that an implementation of our scheme can verify thousands of dependent claims in tens of milliseconds. We also describe ongoing work in which the approach is being used within a larger system for dispensing personal health records.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"56 1","pages":"91-100"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82662691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eleni Gessiou, Alexandros Labrinidis, S. Ioannidis
We highlight the privacy issues that have arisen from the introduction of the Greek Social Security Number (AMKA), in connection with the availability of personally identifiable information on Greek web sites. In particular, we identify privacy problems with the current AMKA setup and present data from a web study we conducted in May 2009, exposing these problems. AMKA is anticipated to become as ubiquitous in Greece as the Social Security Number is in the US, which makes these problems all the more pressing.
{"title":"A Greek (privacy) tragedy: the introduction of social security numbers in Greece","authors":"Eleni Gessiou, Alexandros Labrinidis, S. Ioannidis","doi":"10.1145/1655188.1655203","DOIUrl":"https://doi.org/10.1145/1655188.1655203","url":null,"abstract":"We highlight the privacy issues that have arisen from the introduction of the Greek Social Security Number (AMKA), in connection with the availability of personally identifiable information on Greek web sites. In particular, we identify privacy problems with the current AMKA setup and present data from a web study we conducted in May 2009, exposing these problems. Given the anticipated ubiquity of AMKA in Greece in the future, along the lines of the Social Security Number in the US.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"26 1","pages":"101-104"},"PeriodicalIF":0.0,"publicationDate":"2009-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90944034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, many schemes, including k-anonymity [8], l-diversity [6], and t-closeness [5], have been introduced for preserving individual privacy when publishing database tables. Furthermore, k-anonymity and l-diversity have been shown to have weaknesses. In this paper, we show that t-closeness also has limitations; more specifically, we argue that: i) choosing the correct value for t is difficult, ii) t-closeness does not allow some values of sensitive attributes to be more sensitive than other values, and iii) to prevent certain types of privacy leaks, t must be set to such a small value that it produces low-quality published data. We propose a new privacy metric, (αi, βi)-closeness, that mitigates these problems. We also show how to calculate an optimal release table (in the full domain model) that satisfies (αi, βi)-closeness, and we present experimental results showing that the data quality provided by (αi, βi)-closeness is higher than that of t-closeness, k-anonymity, and l-diversity while achieving the same privacy goals.
{"title":"Yet another privacy metric for publishing micro-data","authors":"Keith B. Frikken, Yihua Zhang","doi":"10.1145/1456403.1456423","DOIUrl":"https://doi.org/10.1145/1456403.1456423","url":null,"abstract":"Recently many schemes, including <i>k</i>-anonymity [8], <i>l</i>-diversity [6] and <i>t</i>-closeness [5] have been introduced for preserving individual privacy when publishing database tables. Furthermore <i>k</i>-anonymity and <i>l</i>-diversity have been shown to have weaknesses. In this paper, we show that <i>t</i>-closeness also has limitations, more specifically we argue that: i) choosing the correct value for <i>t</i> is difficult, ii) <i>t</i>-closeness does not allow some values of sensitive attributes to be more sensitive than other values, and iii) to prevent certain types of privacy leaks <i>t</i> must be set to such a small value that it produces low-quality published data. In this paper we propose a new privacy metric,(α<sub><i>i</i></sub>, β<sub><i>i</i></sub>)-closeness, that mitigates these problems. We also show how to calculate an optimal release table (in the full domain model) that satisfies (α<sub><i>i</i></sub>, β<sub><i>i</i></sub>)-closeness and we present experimental results that show that the data quality provided by 9α<sub><i>i</i></sub>, β;<sub><i>i</i></sub>),-closeness is higher than <i>t</i>-closeness, <i>k</i>-anonymity, and <i>l</i>-diversity while achieving the same privacy goals.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"94 1","pages":"117-122"},"PeriodicalIF":0.0,"publicationDate":"2008-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83890824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unlinkability describes the inability of an observer to decide whether certain items of interest are related or not. Privacy-aware protocol designers need a consistent and meaningful unlinkability measure to assess protocols in the face of different attacks. In this paper we show that entropy measures are not sufficient for measuring unlinkability. We propose an alternative measure that estimates the error made by an attacker. We show by example that our expected distance provides a consistent measure that offers a better estimation of message unlinkability.
{"title":"Measuring unlinkability revisited","authors":"Lars Fischer, S. Katzenbeisser, C. Eckert","doi":"10.1145/1456403.1456421","DOIUrl":"https://doi.org/10.1145/1456403.1456421","url":null,"abstract":"Unlinkability describes the inability of an observer to decide whether certain items of interest are related or not. Privacy aware protocol designers need a consistent and meaningful unlinkability measure to asses protocols in face of different attacks. In this paper we show that entropy measures are not sufficient for measuring unlinkability. We propose an alternative measure that estimates the error made by an attacker. We show by example that our expected distance provides a consistent measure that offers a better estimation of message-unlinkability.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"6 1","pages":"105-110"},"PeriodicalIF":0.0,"publicationDate":"2008-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90106127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benedikt Gierlichs, C. Troncoso, Claudia Díaz, B. Preneel, I. Verbauwhede
Recently, Edman et al. proposed the system's anonymity level [10], a combinatorial approach to measuring the amount of additional information needed to reveal the communication pattern in a mix-based anonymous communication system as a whole. The metric is based on the number of possible bijective mappings between the inputs and the outputs of the mix. In this work we show that Edman et al.'s approach fails to capture the anonymity loss caused by subjects sending or receiving more than one message. We generalize the system's anonymity level from scenarios where user relations can be modeled as yes/no relations to cases where subjects send and receive an arbitrary number of messages. Further, we describe an algorithm to compute the redefined metric.
{"title":"Revisiting a combinatorial approach toward measuring anonymity","authors":"Benedikt Gierlichs, C. Troncoso, Claudia Díaz, B. Preneel, I. Verbauwhede","doi":"10.1145/1456403.1456422","DOIUrl":"https://doi.org/10.1145/1456403.1456422","url":null,"abstract":"Recently, Edman et al. proposed the system's anonymity level [10], a combinatorial approach to measure the amount of additional information needed to reveal the communication pattern in a mix-based anonymous communication system as a whole. The metric is based on the number of possible bijective mappings between the inputs and the outputs of the mix. In this work we show that Edman et al.'s approach fails to capture the anonymity loss caused by subjects sending or receiving more than one message. We generalize the system's anonymity level in scenarios where user relations can be modeled as yes/no relations to cases where subjects send and receive an arbitrary number of messages. Further, we describe an algorithm to compute the redefined metric.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"2017 1","pages":"111-116"},"PeriodicalIF":0.0,"publicationDate":"2008-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86742870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As devices move within a cellular network, they register their new location with cell base stations to allow for the correct forwarding of data. We show it is possible to identify a mobile user from these records and a pre-existing location profile based on previous movement. Two different identification processes are studied, and their performance is evaluated on real cell location traces. The better of the two allows for the identification of around 80% of users. We also study the misidentified users and characterise them using hierarchical clustering techniques. Our findings highlight the difficulty of anonymizing location data, and firmly establish that such data are personally identifiable.
{"title":"Identification via location-profiling in GSM networks","authors":"Yoni De Mulder, G. Danezis, L. Batina, B. Preneel","doi":"10.1145/1456403.1456409","DOIUrl":"https://doi.org/10.1145/1456403.1456409","url":null,"abstract":"As devices move within a cellular network, they register their new location with cell base stations to allow for the correct forwarding of data. We show it is possible to identify a mobile user from these records and a pre-existing location profile, based on previous movement. Two different identification processes are studied, and their performances are evaluated on real cell location traces. The best of those allows for the identification of around 80% of users. We also study the misidentified users and characterise them using hierarchical clustering techniques. Our findings highlight the difficulty of anonymizing location data, and firmly establish they are personally identifiable.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"118 1","pages":"23-32"},"PeriodicalIF":0.0,"publicationDate":"2008-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82572722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social networking websites are enormously popular, but they present a number of privacy risks to their users, one of the foremost being that social network service providers are able to observe and accumulate the information that users transmit through the network. We aim to mitigate this risk by presenting a new architecture for protecting information published on the social networking website Facebook through encryption. Our architecture makes a trade-off between security and usability in the interests of minimally affecting users' workflow and maintaining universal accessibility. While active attacks by Facebook could compromise users' privacy, our architecture dramatically raises the cost of such potential compromises and, importantly, places them within a framework for legal privacy protection because they would violate a user's reasonable expectation of privacy. We have built a prototype Facebook application implementing our architecture, addressing some of the limitations of the Facebook platform through proxy cryptography.
{"title":"FlyByNight: mitigating the privacy risks of social networking","authors":"Matthew M. Lucas, N. Borisov","doi":"10.1145/1456403.1456405","DOIUrl":"https://doi.org/10.1145/1456403.1456405","url":null,"abstract":"Social networking websites are enormously popular, but they present a number of privacy risks to their users, one of the foremost of which being that social network service providers are able to observe and accumulate the information that users transmit through the network. We aim to mitigate this risk by presenting a new architecture for protecting information published through the social networking website, Facebook, through encryption. Our architecture makes a trade-off between security and usability in the interests of minimally affecting users' workflow and maintaining universal accessibility. While active attacks by Facebook could compromise users' privacy, our architecture dramatically raises the cost of such potential compromises and, importantly, places them within a framework for legal privacy protection because they would violate a user's reasonable expectation of privacy. We have built a prototype Facebook application implementing our architecture, addressing some of the limitations of the Facebook platform through proxy cryptography.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"14 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2008-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78746986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Reeder, Patrick Gage Kelley, Aleecia M. McDonald, L. Cranor
Displaying website privacy policies to consumers in ways they understand is an important part of gaining consumers' trust and informed consent, yet most website privacy policies today are presented in confusing, legalistic natural language. Moreover, because website privacy policy presentations vary from website to website, policies are difficult to compare and it is difficult for consumers to determine which websites offer the best privacy protections. The Platform for Privacy Preferences (P3P) addresses part of the problem with natural language policies by providing a formal, machine-readable language for expressing privacy policies in a manner that is standardized across websites. To address the remaining problems, an automated tool must be developed to read P3P policies and display them to users in a comprehensible way. To this end, we have developed a P3P policy presentation tool based on the Expandable Grid, a visualization technique for displaying policies in an interactive matrix. In prior work, the Expandable Grid has been shown to work well for displaying file permissions policies, so it appears to hold promise for presenting online privacy policies as well. To evaluate our Expandable Grid interface, we conducted two user studies: an online study with 520 participants and a laboratory study with 12 participants. The studies compared participants' comprehension of privacy policies presented with the Grid interface with their comprehension of the same policies presented in natural language. To our surprise, comprehension of policies was, for the most part, no better with the Grid interface than with natural language. We describe why the Grid interface did not perform well in our study and discuss implications for when and how the Expandable Grid concept can be usefully applied.
{"title":"A user study of the expandable grid applied to P3P privacy policy visualization","authors":"R. Reeder, Patrick Gage Kelley, Aleecia M. McDonald, L. Cranor","doi":"10.1145/1572532.1572582","DOIUrl":"https://doi.org/10.1145/1572532.1572582","url":null,"abstract":"Displaying website privacy policies to consumers in ways they understand is an important part of gaining consumers' trust and informed consent, yet most website privacy policies today are presented in confusing, legalistic natural language. Moreover, because website privacy policy presentations vary from website to website, policies are difficult to compare and it is difficult for consumers to determine which websites offer the best privacy protections. The Platform for Privacy Preferences P3P) addresses part of the problem with natural language policies by providing a formal, machine-readable language for expressing privacy policies in a manner that is standardized across websites. To address remaining problems, an automated tool must be developed to read P3P policies and display them to users in a comprehensible way. To this end, we have developed a P3P policy presentation tool based on the Expandable Grid, a visualization technique for displaying policies in an interactive matrix. In prior work, the Expandable Grid has been shown to work well for displaying file permissions policies, so it appears to hold promise for presenting online privacy policies as well. To evaluate our Expandable Grid interface, we conducted two user studies, an online study with 520 participants and a laboratory study with 12 participants. The studies compared participants' comprehension of privacy policies presented with the Grid interface with their comprehension of the same policies presented in natural language. To our surprise, comprehension of policies was, for the most part, no better with the Grid interface than with natural language. We describe why the Grid interface did not perform well in our study and discuss implications for when and how the Expandable Grid concept can be usefully applied.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"31 1","pages":"45-54"},"PeriodicalIF":0.0,"publicationDate":"2008-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78850266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reed S. Abbott, Timothy W. van der Horst, K. Seamons
This paper presents the design and implementation of Closed Pseudonymous Groups (CPG), a pseudonymous communication system for a closed user community (e.g., a class of students, team of employees, residents of a neighborhood). In CPG, each legitimate user is known by a pseudonym that, while unlinkable to a true identity, enables service providers to link users' behavior and blacklist any abuser of the system. This system is useful for providing honest feedback without fear of reprisals (e.g., instructor/course ratings, employee comments, community feedback for local politics). CPG is designed to be easy to understand, to implement (using existing techniques), and to use. This paper also presents the results of an initial user study that led to an important design change.
{"title":"CPG: closed pseudonymous groups","authors":"Reed S. Abbott, Timothy W. van der Horst, K. Seamons","doi":"10.1145/1456403.1456414","DOIUrl":"https://doi.org/10.1145/1456403.1456414","url":null,"abstract":"This paper presents the design and implementation of Closed Pseudonymous Groups (CPG), a pseudonymous communication system for a closed user community (e.g., a class of students, team of employees, residents of a neighborhood). In CPG, each legitimate user is known by a pseudonym that, while unlinkable to a true identity, enables service providers to link users' behavior and blacklist any abuser of the system. This system is useful for providing honest feedback without fear of reprisals (e.g., instructor/course ratings, employee comments, community feedback for local politics). CPG is designed to be easy to understand, to implement (using existing techniques), and to use. This paper also presents the results of an initial user study that resulted in an important design change.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"62 1","pages":"55-64"},"PeriodicalIF":0.0,"publicationDate":"2008-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84292694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DNA analysis is increasingly used in forensics, where it is being pushed as the holy grail of identification. But we are approaching a dramatic "phase change" as we move from genetics to genomics: when sequencing the entire genome of a person becomes sufficiently cheap as to become a routine operation, as is likely to happen in the coming decades, then each DNA examination will expose a wealth of very sensitive personal information about the examined individual, as well as her relatives. In this interdisciplinary discussion paper we highlight the complexity of DNA-related privacy issues as we move into the genomic (as opposed to genetic) era: the "driftnet" approach of comparing scene-of-crime samples against the DNA of the whole population rather than just against that of chosen suspects; the potential for errors in forensic DNA analysis and the consequences on security and privacy; the civil liberties implications of the interaction between medical and forensic applications of genomics. For example, your kin can provide valuable information in a database matching procedure against you even if you don't; and being able to read the whole of a sampled genome, rather than just 13 specific markers from it, provides information about the medical and physical characteristics of the individual. Our aim is to offer a simple but thought-provoking and technically accurate summary of the many issues involved, hoping to stimulate an informed public debate on the statutes by which DNA collection, storage and processing should be regulated.
{"title":"Forensic genomics: kin privacy, driftnets and other open questions","authors":"F. Stajano, L. Bianchi, P. Lio’, D. Korff","doi":"10.1145/1456403.1456407","DOIUrl":"https://doi.org/10.1145/1456403.1456407","url":null,"abstract":"DNA analysis is increasingly used in forensics, where it is being pushed as the holy grail of identification. But we are approaching a dramatic \"phase change\" as we move from genetics to genomics: when sequencing the entire genome of a person becomes sufficiently cheap as to become a routine operation, as is likely to happen in the coming decades, then each DNA examination will expose a wealth of very sensitive personal information about the examined individual, as well as her relatives. In this interdisciplinary discussion paper we highlight the complexity of DNA-related privacy issues as we move into the genomic (as opposed to genetic) era: the \"driftnet\" approach of comparing scene-of-crime samples against the DNA of the whole population rather than just against that of chosen suspects; the potential for errors in forensic DNA analysis and the consequences on security and privacy; the civil liberties implications of the interaction between medical and forensic applications of genomics. For example, your kin can provide valuable information in a database matching procedure against you even if you don't; and being able to read the whole of a sampled genome, rather than just 13 specific markers from it, provides information about the medical and physical characteristics of the individual.\u0000 Our aim is to offer a simple but thought-provoking and technically accurate summary of the many issues involved, hoping to stimulate an informed public debate on the statutes by which DNA collection, storage and processing should be regulated.","PeriodicalId":74537,"journal":{"name":"Proceedings of the ACM Workshop on Privacy in the Electronic Society. ACM Workshop on Privacy in the Electronic Society","volume":"90 1","pages":"15-22"},"PeriodicalIF":0.0,"publicationDate":"2008-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80475156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}