Federated Learning (FL) has emerged as a promising technology that has attracted significant attention in the Internet of Things (IoT) domain. However, the non-independent and identically distributed (Non-IID) nature of IoT data, together with the vulnerability of gradient transmission in traditional federated learning frameworks, limits its broader applicability. Heterogeneous differential privacy offers tailored privacy protection for individual clients, making it particularly well suited to the diverse functional requirements of IoT devices. This study proposes a clustered federated learning method with heterogeneous differential privacy (FedCDP) to balance model utility and privacy preservation on Non-IID data. Specifically, we employ a two-stage clustering technique to improve clustering accuracy under noise perturbation, and implement a client verification procedure to mitigate the detrimental effects of erroneous clustering and malicious data injection. To address noise accumulation in cluster models, we introduce an intra-cluster privacy-budget weighting mechanism and use model shuffling to prevent the server from linking a local model to its cluster identity. We conducted experimental evaluations under multiple data distribution scenarios; the results show that our method improves robustness to noise and significantly outperforms the baseline methods. In addition, we performed ablation experiments to analyze the contribution of each module. These findings underscore the usability and robustness of the proposed method.
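The intra-cluster privacy-budget weighting mechanism mentioned above can be illustrated with a minimal sketch. The abstract does not give FedCDP's exact weighting rule, so the sketch below assumes an illustrative scheme: within a cluster, each client's update is weighted in proportion to its privacy budget ε, since clients with a larger ε add less noise and their updates are therefore more reliable. The function name `weighted_cluster_aggregate` and the proportional-to-ε weights are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def weighted_cluster_aggregate(updates, epsilons):
    """Aggregate client updates within one cluster, weighting each client
    by its differential-privacy budget epsilon.

    Illustrative assumption: weights proportional to epsilon, so low-noise
    (large-epsilon) clients dominate, curbing noise accumulation in the
    cluster model.
    """
    eps = np.asarray(epsilons, dtype=float)
    weights = eps / eps.sum()                      # normalize budgets to weights
    stacked = np.stack(updates)                    # (n_clients, n_params)
    return np.tensordot(weights, stacked, axes=1)  # epsilon-weighted average

# Usage: two clients in a cluster with budgets 1.0 and 3.0
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
cluster_model = weighted_cluster_aggregate(updates, [1.0, 3.0])
```

A plain (unweighted) average would treat the noisy ε = 1.0 client equally with the ε = 3.0 client; the weighted variant pulls the cluster model toward the less-perturbed update.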
