(p, N)-identifiability: Anonymity under Practical Adversaries

Tomoaki Mimoto, S. Kiyomoto, Katsuya Tanaka, A. Miyaji
{"title":"(p, N)-可识别性:实际对手下的匿名性","authors":"Tomoaki Mimoto, S. Kiyomoto, Katsuya Tanaka, A. Miyaji","doi":"10.1109/Trustcom/BigDataSE/ICESS.2017.343","DOIUrl":null,"url":null,"abstract":"Personal data has great potential for building an efficient and sustainable society; thus several privacy preserving techniques have been proposed to solve the essential issue of maintaining privacy in the use of personal data. Anonymization techniques are promising techniques applicable to huge-size personal data in order to reduce its re-identification risk. However, there is a trade-off between the utility of anonymized datasets and the risk of re-identification of individuals from the anonymized dataset, and so far no perfect solution has been provided. In previous studies, ideal adversaries in possession of all records of an original dataset have been considered in risk analyses, because an anonymized dataset is assumed to be publicly accessible, and once the record of a target is re-identified, privacy breaches are serious and may be uncontrollable. However, anonymized datasets are assumed to be distributed between organizations via secure channels in typical business situations. In this paper, we consider the actual risk to anonymized datasets and propose an analysis method that yields more stringent risk estimation in real settings with real adversaries. Furthermore, we present some experimental results using medical records. Our method is practical and useful for anonymized datasets generated by common anonymization methods such as generalization, noise addition and sampling, and can lead to generate more useful anonymized datasets.","PeriodicalId":170253,"journal":{"name":"2017 IEEE Trustcom/BigDataSE/ICESS","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"(p, N)-identifiability: Anonymity under Practical Adversaries\",\"authors\":\"Tomoaki Mimoto, S. Kiyomoto, Katsuya Tanaka, A. Miyaji\",\"doi\":\"10.1109/Trustcom/BigDataSE/ICESS.2017.343\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Personal data has great potential for building an efficient and sustainable society; thus several privacy preserving techniques have been proposed to solve the essential issue of maintaining privacy in the use of personal data. Anonymization techniques are promising techniques applicable to huge-size personal data in order to reduce its re-identification risk. However, there is a trade-off between the utility of anonymized datasets and the risk of re-identification of individuals from the anonymized dataset, and so far no perfect solution has been provided. In previous studies, ideal adversaries in possession of all records of an original dataset have been considered in risk analyses, because an anonymized dataset is assumed to be publicly accessible, and once the record of a target is re-identified, privacy breaches are serious and may be uncontrollable. However, anonymized datasets are assumed to be distributed between organizations via secure channels in typical business situations. In this paper, we consider the actual risk to anonymized datasets and propose an analysis method that yields more stringent risk estimation in real settings with real adversaries. Furthermore, we present some experimental results using medical records. 
Our method is practical and useful for anonymized datasets generated by common anonymization methods such as generalization, noise addition and sampling, and can lead to generate more useful anonymized datasets.\",\"PeriodicalId\":170253,\"journal\":{\"name\":\"2017 IEEE Trustcom/BigDataSE/ICESS\",\"volume\":\"51 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE Trustcom/BigDataSE/ICESS\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.343\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE Trustcom/BigDataSE/ICESS","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.343","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Personal data has great potential for building an efficient and sustainable society; thus, several privacy-preserving techniques have been proposed to address the essential issue of maintaining privacy when personal data is used. Anonymization techniques are promising for large-scale personal data because they reduce its re-identification risk. However, there is a trade-off between the utility of an anonymized dataset and the risk of re-identifying individuals from it, and so far no perfect solution has been provided. Previous risk analyses have considered ideal adversaries in possession of all records of the original dataset, because an anonymized dataset is assumed to be publicly accessible, and once a target's record is re-identified, the privacy breach is serious and may be uncontrollable. In typical business situations, however, anonymized datasets are assumed to be distributed between organizations via secure channels. In this paper, we consider the actual risk to anonymized datasets and propose an analysis method that yields more stringent risk estimation in real settings with real adversaries. Furthermore, we present experimental results using medical records. Our method is practical and useful for anonymized datasets generated by common anonymization methods such as generalization, noise addition, and sampling, and can lead to more useful anonymized datasets.
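To make the operations named in the abstract concrete, below is a minimal, hypothetical Python sketch of generalization, noise addition, and sampling on a toy record set, together with a crude re-identification risk proxy for an adversary who knows only N of the original records. All names, parameters, and the risk measure here are illustrative assumptions; they are not the paper's (p, N)-identifiability definition or algorithm.

```python
# Illustrative sketch only: toy anonymization operations (generalization,
# noise addition, sampling) and a naive re-identification risk proxy for an
# adversary who knows only N original records. Not the paper's method.
import random
from collections import Counter

records = [
    {"age": 34, "zip": "10115", "diagnosis": "flu"},
    {"age": 36, "zip": "10117", "diagnosis": "asthma"},
    {"age": 52, "zip": "10245", "diagnosis": "diabetes"},
    {"age": 55, "zip": "10247", "diagnosis": "flu"},
]

def generalize(rec):
    """Coarsen quasi-identifiers: age to a decade band, ZIP to a 3-digit prefix."""
    return {"age": f"{(rec['age'] // 10) * 10}s",
            "zip": rec["zip"][:3] + "**",
            "diagnosis": rec["diagnosis"]}

def add_noise(rec, scale=2):
    """Perturb the numeric attribute with small uniform integer noise."""
    noisy = dict(rec)
    noisy["age"] = rec["age"] + random.randint(-scale, scale)
    return noisy

def sample(dataset, rate=0.5):
    """Keep each record independently with probability `rate`."""
    return [r for r in dataset if random.random() < rate]

def naive_risk(anonymized, known_records, quasi_ids=("age", "zip")):
    """Fraction of the adversary's known records whose generalized
    quasi-identifier combination is unique in the anonymized data
    (a crude proxy for re-identification risk)."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in anonymized)
    hits = 0
    for rec in known_records:
        key = tuple(generalize(rec)[q] for q in quasi_ids)
        if counts.get(key) == 1:
            hits += 1
    return hits / len(known_records) if known_records else 0.0

# Build an anonymized release: sample, perturb, then generalize each record.
anonymized = [generalize(add_noise(r)) for r in sample(records, rate=0.75)]
adversary_knows = records[:2]  # a "practical" adversary holding N = 2 records
print(anonymized)
print("naive re-identification risk:", naive_risk(anonymized, adversary_knows))
```

The point of the sketch is the contrast the abstract draws: an ideal adversary would hold all of `records`, whereas a practical adversary holds only a subset, so the estimated risk against the same anonymized release is generally lower.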