Attackers Are Not the Same! Unveiling the Impact of Feature Distribution on Label Inference Attacks
Pub Date : 2024-11-14 DOI: 10.1109/tifs.2024.3498464
Yige Liu, Che Wang, Yiwei Lou, Yongzhi Cao, Hanpin Wang
Succinct Hash-based Arbitrary-Range Proofs
Pub Date : 2024-11-13 DOI: 10.1109/tifs.2024.3497806
Weihan Li, Zongyang Zhang, Yanpei Guo, Sherman S. M. Chow, Zhiguo Wan
Pub Date : 2024-11-13 DOI: 10.1109/TIFS.2024.3488517
Chengyu Jia, Jinyin Chen, Shouling Ji, Yao Cheng, Haibin Zheng, Qi Xuan
Backdoor attacks pose a severe threat to deep neural networks (DNNs). Online training platforms and third-party model-training providers are especially vulnerable to backdoor attacks because of uncontrollable data sources, untrusted developers, and unmonitored training processes. Researchers have proposed detecting backdoors in well-trained models and then removing them with mitigation techniques such as retraining and pruning. However, these approaches remain limited in two respects: (i) timeliness: because they rely on well-trained models, they cannot detect backdoors at the beginning of training; (ii) mitigation effect: the later a backdoor is discovered, the more deeply it is embedded, the less effective mitigation becomes, and the higher the cost. To address these challenges, we rethink how backdoors evolve and cope with them during the online training process, that is, we detect backdoors sooner rather than later. We propose BackdoorTracer, a novel framework that detects backdoors during the training phase. BackdoorTracer converts the model into an equivalent graph based on the neural paths activated during training, then detects backdoors through multiple graph metrics. It can be combined with any existing backdoor mitigation approach that requires access to training, stopping the impact of a backdoor as early as possible. It differs from previous work in several key aspects: (i) lightweight: BackdoorTracer runs independently of the training process and thus has little negative impact on training efficiency and testing accuracy; (ii) generalizable: it works across different data modalities, models, and backdoor attacks. BackdoorTracer outperforms state-of-the-art (SOTA) detection approaches in experiments on 5 modalities, 10 models, and 9 backdoor attack scenarios. Compared with 5 existing backdoor detection methods, our method detects backdoors earlier ($\sim 1.5$
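The abstract describes the mechanism only at a high level: map activated neural paths to a graph, then watch graph metrics over training. As a rough conceptual sketch of that idea, and not the paper's actual implementation, the snippet below builds a directed graph of neurons whose activations exceed a threshold and computes a few simple metrics. The function names, the activation threshold, and the chosen metrics are all illustrative assumptions.

```python
# Hypothetical sketch: activation-path graph + graph metrics.
# Not the authors' code; threshold and metric set are assumptions.
import torch
import torch.nn as nn
import networkx as nx

def activation_path_graph(model: nn.Sequential, x: torch.Tensor,
                          threshold: float = 0.0) -> nx.DiGraph:
    """Connect neurons activated above `threshold` in consecutive linear layers."""
    graph = nx.DiGraph()
    prev_active = [("in", i) for i in range(x.numel())]
    graph.add_nodes_from(prev_active)
    h = x
    for li, layer in enumerate(model):
        h = layer(h)
        if not isinstance(layer, nn.Linear):
            continue  # this sketch only tracks linear-layer outputs
        active = [(li, j) for j in range(h.shape[-1])
                  if h[..., j].max() > threshold]
        graph.add_nodes_from(active)
        # link every previously active neuron to every newly active one
        graph.add_edges_from((u, v) for u in prev_active for v in active)
        prev_active = active
    return graph

def graph_signature(g: nx.DiGraph) -> dict:
    """Illustrative metrics; the paper's actual metric set is not given here."""
    n = g.number_of_nodes()
    return {
        "nodes": n,
        "density": nx.density(g) if n > 1 else 0.0,
        "avg_out_degree": sum(d for _, d in g.out_degree()) / max(n, 1),
    }

# Usage: compare signatures of clean vs. suspicious batches over training steps;
# a sudden, persistent shift would be the kind of signal a detector could flag.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(1, 8)
print(graph_signature(activation_path_graph(model, x)))
```

In such a scheme the monitoring runs alongside training rather than inside it, which is consistent with the abstract's claim that detection adds little overhead to training efficiency and testing accuracy.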