Despite the rapid advancement of edge intelligence, conventional federated learning (FL) frameworks still struggle to achieve competitive performance under highly heterogeneous and class-imbalanced data distributions. To address these limitations, this paper presents FedDAK, a distribution-aware and adaptive knowledge distillation framework for personalized federated learning. FedDAK enhances both stability and personalization through three key designs: dynamic distillation weighting, adaptive rare-class enhancement, and distribution-aware global aggregation. Unlike existing distillation-based FL systems that rely on static or heuristic weighting, FedDAK introduces a KL-divergence-guided dynamic distillation coefficient, enabling each client to automatically regulate the strength of the global knowledge constraint according to its divergence from the global data distribution. Furthermore, FedDAK integrates class-level scarcity modeling, assigning greater importance to underrepresented categories to alleviate bias under severe class imbalance. At the global level, FedDAK employs distribution-aware aggregation, reducing the negative influence of highly divergent clients and improving global stability and generalization. Extensive experiments on benchmark datasets demonstrate that FedDAK achieves significantly better personalized performance and global convergence than existing FL baselines in the standard federated setting, without requiring the sharing of raw data. The code is available at https://github.com/youmurong50-cmd/fedDAK.
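The abstract describes three mechanisms (a KL-guided distillation coefficient, rare-class weighting, and divergence-aware server aggregation) without giving their exact formulations. The following is a minimal sketch of one plausible reading: it assumes the distillation coefficient decays exponentially with the KL divergence between a client's label distribution and the global one, that class scarcity weights are inverse-frequency based, and that the server down-weights divergent clients during aggregation. All function names, the exponential mapping, and the temperature/exponent parameters are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def label_distribution(labels, num_classes, eps=1e-8):
    """Empirical (smoothed) class distribution of a client's local labels."""
    counts = np.bincount(labels, minlength=num_classes).astype(float) + eps
    return counts / counts.sum()

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def dynamic_distillation_weight(client_dist, global_dist, base=1.0, tau=1.0):
    """Assumed mapping: clients far from the global distribution (large KL)
    rely less on the global teacher, so the coefficient decays with divergence."""
    return base * np.exp(-kl_divergence(client_dist, global_dist) / tau)

def rare_class_weights(global_dist, gamma=0.5):
    """Assumed scarcity modeling: underrepresented classes receive larger
    per-class loss weights, normalized to mean 1."""
    w = (1.0 / global_dist) ** gamma
    return w / w.mean()

def distribution_aware_aggregation(client_updates, client_dists, global_dist,
                                   sizes, tau=1.0):
    """Assumed server rule: weight each client's update by dataset size,
    discounted by its divergence from the global label distribution."""
    scores = np.array([
        n * np.exp(-kl_divergence(d, global_dist) / tau)
        for d, n in zip(client_dists, sizes)
    ])
    scores /= scores.sum()
    return sum(s * u for s, u in zip(scores, client_updates))
```

In this reading, the same KL statistic drives both the client-side distillation strength and the server-side aggregation weights, which matches the abstract's claim that divergent clients are constrained less locally and trusted less globally; the specific decay form would need to be confirmed against the released code.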