{"title":"匿名:风险与现实","authors":"A. Basu, Toru Nakamura, Seira Hidano, S. Kiyomoto","doi":"10.1109/Trustcom.2015.473","DOIUrl":null,"url":null,"abstract":"Many a time, datasets containing private and sensitive information are useful for third-party data mining. To prevent identification of personal information, data owners release such data using privacy-preserving data publishing techniques. One well-known technique - k-anonymity - proposes that the records be grouped based on quasi-identifiers such that quasi-identifiers in a group have exactly the same values as any other in the same group. This process reduces the worst-case probability of re-identification of the records based on the quasi identifiers to 1/k. The problem of optimal k-anonymisation is NP-hard. Depending on the k-anonymisation method used and the number of quasi identifiers known to the attacker, the probability of re-identification could be lower than the worst-case guarantee. We quantify risk as the probability of re-identification and propose a mechanism to compute the empirical risk with respect to the cost of acquiring the knowledge about quasi-identifiers, using an real-world dataset released with some k-anonymity guarantee. In addition, we show that k-anonymity can be harmful because the knowledge of additional attributes other than quasi-identifiers can raise the probability of re-identification.","PeriodicalId":277092,"journal":{"name":"2015 IEEE Trustcom/BigDataSE/ISPA","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"k-anonymity: Risks and the Reality\",\"authors\":\"A. Basu, Toru Nakamura, Seira Hidano, S. Kiyomoto\",\"doi\":\"10.1109/Trustcom.2015.473\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many a time, datasets containing private and sensitive information are useful for third-party data mining. To prevent identification of personal information, data owners release such data using privacy-preserving data publishing techniques. One well-known technique - k-anonymity - proposes that the records be grouped based on quasi-identifiers such that quasi-identifiers in a group have exactly the same values as any other in the same group. This process reduces the worst-case probability of re-identification of the records based on the quasi identifiers to 1/k. The problem of optimal k-anonymisation is NP-hard. Depending on the k-anonymisation method used and the number of quasi identifiers known to the attacker, the probability of re-identification could be lower than the worst-case guarantee. We quantify risk as the probability of re-identification and propose a mechanism to compute the empirical risk with respect to the cost of acquiring the knowledge about quasi-identifiers, using an real-world dataset released with some k-anonymity guarantee. 
In addition, we show that k-anonymity can be harmful because the knowledge of additional attributes other than quasi-identifiers can raise the probability of re-identification.\",\"PeriodicalId\":277092,\"journal\":{\"name\":\"2015 IEEE Trustcom/BigDataSE/ISPA\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE Trustcom/BigDataSE/ISPA\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/Trustcom.2015.473\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Trustcom/BigDataSE/ISPA","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/Trustcom.2015.473","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Datasets containing private and sensitive information are often useful for third-party data mining. To prevent the identification of personal information, data owners release such data using privacy-preserving data publishing techniques. One well-known technique, k-anonymity, groups records by their quasi-identifiers so that every record in a group shares exactly the same quasi-identifier values. This reduces the worst-case probability of re-identifying a record from its quasi-identifiers to 1/k. The problem of optimal k-anonymisation is NP-hard. Depending on the k-anonymisation method used and the number of quasi-identifiers known to the attacker, the actual probability of re-identification can be lower than this worst-case guarantee. We quantify risk as the probability of re-identification and propose a mechanism to compute the empirical risk with respect to the cost of acquiring knowledge about the quasi-identifiers, using a real-world dataset released with a k-anonymity guarantee. In addition, we show that k-anonymity can be harmful, because knowledge of attributes other than the quasi-identifiers can raise the probability of re-identification.
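To illustrate the idea behind the worst-case 1/k bound, the following minimal sketch (not the paper's mechanism; column names and data are hypothetical) groups a released table by the quasi-identifiers an attacker is assumed to know and treats 1/(group size) as each record's re-identification probability, reporting the worst-case and average risk.

```python
# Minimal sketch of empirical re-identification risk under an assumed set of
# quasi-identifiers. Not the authors' implementation; for illustration only.
from collections import Counter

def reidentification_risk(records, quasi_identifiers):
    """records: list of dicts; quasi_identifiers: attribute names the attacker knows.
    Returns (worst_case_risk, average_risk) over all records."""
    # Size of each equivalence class: records sharing the same quasi-identifier values.
    group_sizes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    # A record in a class of size n is re-identified with probability 1/n.
    per_record = [1.0 / group_sizes[tuple(r[q] for q in quasi_identifiers)]
                  for r in records]
    return max(per_record), sum(per_record) / len(per_record)

if __name__ == "__main__":
    # Toy released table; 'age' and 'zip' act as (generalised) quasi-identifiers.
    data = [
        {"age": "30-39", "zip": "123**", "diagnosis": "flu"},
        {"age": "30-39", "zip": "123**", "diagnosis": "cold"},
        {"age": "40-49", "zip": "456**", "diagnosis": "flu"},
    ]
    worst, avg = reidentification_risk(data, ["age", "zip"])
    print(worst, avg)  # the singleton 40-49 group gives a worst-case risk of 1.0
```

If the table were properly k-anonymised for k = 2, every group would contain at least two records and the worst-case value returned here would be at most 0.5; knowledge of further attributes beyond the assumed quasi-identifiers can only shrink the effective groups and so raise these probabilities.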