Valuable information and knowledge can be learned from users’ location traces to support various location-based applications such as intelligent traffic control, incident response, and COVID-19 contact tracing. However, due to privacy concerns, no authority can simply collect users’ private location traces for mining or publishing. To address such concerns, local differential privacy (LDP) protects individual privacy by allowing each user to report a perturbed version of their data. Unfortunately, when applied to location traces, LDP cannot preserve their semantics because it treats all locations (i.e., various points of interest) as equally sensitive. This results in low utility of LDP mechanisms for collecting location traces. In this paper, we address the challenge of collecting and sharing location traces with valuable semantics while providing sufficient privacy protection for participating users. We first propose semantic-constrained local differential privacy (SLDP), a new privacy model that provides a provable mathematical privacy guarantee while preserving desirable semantics. Then, we design a location trace perturbation mechanism (LTPM) that users can use to perturb their traces in a way that satisfies SLDP. Finally, we propose a private location trace synthesis (PLTS) framework in which users apply LTPM to perturb their traces before sending them to the collector, who aggregates the users’ perturbed data to generate location traces with valuable semantics. Extensive experiments on three real-world datasets demonstrate that our PLTS outperforms existing state-of-the-art methods by at least 21% in a range of real-world applications, such as spatial visiting queries and frequent pattern mining, under the same privacy leakage.
Title: Generating Location Traces With Semantic-Constrained Local Differential Privacy
Authors: Xinyue Sun; Qingqing Ye; Haibo Hu; Jiawei Duan; Qiao Xue; Tianyu Wo; Weizhe Zhang; Jie Xu
DOI: 10.1109/TIFS.2024.3480712
Journal: IEEE Transactions on Information Forensics and Security, vol. 19, pp. 9850-9865
Publication Date: 2024-10-14
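The abstract above contrasts SLDP with plain LDP, which "treats all locations as equally sensitive." As an illustration of that baseline (not the paper's LTPM mechanism), a minimal sketch of k-ary randomized response — the standard LDP primitive for categorical data — shows why: every location in the domain, hospital or park, is swapped in with equal probability. The location names are hypothetical examples.

```python
import math
import random

def k_rr(value, domain, epsilon):
    """k-ary randomized response: report the true value with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random *other* value.
    This satisfies epsilon-LDP but is semantics-blind: a hospital visit is
    as likely to be replaced by a park as by another hospital."""
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return value
    return random.choice([v for v in domain if v != value])

# Hypothetical location domain for illustration.
locations = ["hospital", "cafe", "park", "office"]
report = k_rr("hospital", locations, epsilon=1.0)
```

Because the perturbation distribution ignores semantic categories, aggregate statistics over perturbed traces lose the very structure (e.g., visit patterns per point-of-interest type) that downstream applications need — the low-utility problem SLDP is designed to fix.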
Publication Date: 2024-10-11
DOI: 10.1109/TIFS.2024.3478828
Authors: Ningping Mou; Binqing Guo; Lingchen Zhao; Cong Wang; Yue Zhao; Qian Wang
Recent advancements in adversarial attack research have seen a transition from white-box to black-box and even no-box threat models, greatly enhancing the practicality of these attacks. However, existing no-box attacks focus on instance-specific perturbations, leaving the more powerful universal adversarial perturbations (UAPs) unexplored. This study addresses a crucial question: can UAPs be generated under a no-box threat model? Our findings provide an affirmative answer with a texture-based method. Artificially crafted textures can act as UAPs, termed Texture-Adv. With a modest density and a fixed budget for perturbations, it can achieve an attack success rate of 80% under the constraint of $l_{\infty}$