Federated Learning (FL) is a distributed machine learning paradigm that allows multiple clients to collaboratively train a global model while preserving privacy by avoiding the exchange of raw data. However, its distributed nature makes it vulnerable to backdoor attacks, which threaten the integrity and security of the model. Existing attacks often rely on fixed triggers or local-model optimization and fail to adapt to the dynamic updates of the global model. We propose IDABA (Imperceptible Dynamic Anticipated Backdoor Attack), a novel dynamic backdoor attack for FL that addresses these limitations by ensuring visual imperceptibility and persistence. IDABA generates visually imperceptible poisoned samples and employs a model-contrastive (MOON) loss to keep the malicious local update similar to the global model. It also predicts future global model states to optimize trigger effectiveness. Experiments on CIFAR10, MNIST, GTSRB, and TinyImageNet show that IDABA achieves higher Attack Success Rates (ASR) while maintaining model accuracy, and it remains effective against defense mechanisms such as Krum and Multi-Krum. Grad-CAM analysis and image quality metrics confirm the visual stealthiness of IDABA's backdoor samples.
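To make the MOON component of the attack concrete, the following is a minimal sketch (not the authors' exact implementation) of a model-contrastive loss term combined with a hypothetical attacker objective; the function names, the weighting coefficients `lam` and `mu`, and the temperature `tau` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def moon_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Model-contrastive (MOON) loss: pull the local representation toward the
    global model's representation and push it away from the previous local one."""
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / tau
    neg = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
    logits = torch.stack([pos, neg], dim=1)  # index 0 is the positive pair
    labels = torch.zeros(z_local.size(0), dtype=torch.long, device=z_local.device)
    return F.cross_entropy(logits, labels)

def attacker_objective(loss_clean, loss_poison, loss_moon, lam=1.0, mu=1.0):
    # Hypothetical weighting: classification loss on clean and poisoned samples,
    # plus the MOON term that keeps the malicious update close to the global model.
    return loss_clean + lam * loss_poison + mu * loss_moon
```

In this sketch the MOON term penalizes local representations that drift far from the current global model, which is how the poisoned update can stay statistically close to benign updates and evade similarity-based defenses.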
