With the widespread deployment of machine learning models in privacy-sensitive domains such as healthcare and finance, the risk of training data leakage has attracted increasing attention. As a fundamental approach for evaluating model privacy leakage, membership inference attacks (MIAs) have been extensively studied in distributed learning scenarios such as Federated Learning (FL). However, in black-box settings, attackers face severe challenges, including the unavailability of real non-member samples and the inaccessibility of the target model architecture, which limit the generalization and accuracy of existing methods. To address these limitations, this paper proposes a DCGAN-enhanced black-box MIA framework built on three key innovations: (1) a discriminator-guided pseudo-sample filtering mechanism that ensures the authenticity and diversity of non-member data; (2) a multi-shadow-model softmax concatenation strategy, which fuses the softmax probability outputs of multiple shadow models to construct discriminative high-dimensional attack representations; and (3) a SMOTE-based balancing module that mitigates class imbalance and further improves the generalization of the attack model. The proposed framework significantly enhances the discriminative capability and robustness of black-box MIAs without accessing the internal parameters or training procedures of the target model. Extensive experiments demonstrate that our method consistently outperforms state-of-the-art baselines across multiple federated learning protocols (FedAvg, FedMD, and FedProx) and benchmark datasets (CIFAR-10, CIFAR-100, Fashion-MNIST, and SVHN), achieving an accuracy of 0.9897, an AUC of 0.9899, and a TPR@FPR=1% of 0.9967. These results verify the robustness, generalizability, and wide applicability of the proposed framework, providing a systematic and scalable solution for privacy evaluation in federated learning environments.
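To make the attack pipeline concrete, the sketch below illustrates two of the components named above: discriminator-guided filtering of DCGAN-generated pseudo non-member samples, and the construction of concatenated multi-shadow-model softmax features balanced with SMOTE before training the attack classifier. This is a minimal illustration under assumptions not stated in the abstract: the shadow models are assumed to expose a `predict_proba` interface, the discriminator threshold, the MLP attack model, and all hyperparameters are placeholders rather than the paper's exact configuration.

```python
# Minimal sketch of the attack-feature pipeline (illustrative assumptions only).
import numpy as np
from imblearn.over_sampling import SMOTE          # SMOTE-based balancing module
from sklearn.neural_network import MLPClassifier  # stand-in attack classifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score


def filter_pseudo_samples(candidates, discriminator_scores, threshold=0.5):
    """Keep DCGAN-generated candidates whose discriminator realism score exceeds
    a threshold, approximating real non-member data (threshold is an assumed value)."""
    return candidates[discriminator_scores >= threshold]


def build_attack_features(shadow_models, samples):
    """Concatenate the softmax outputs of all shadow models for each sample,
    yielding a high-dimensional attack representation of shape (N, C * num_shadows)."""
    per_model_probs = [m.predict_proba(samples) for m in shadow_models]  # each (N, C)
    return np.concatenate(per_model_probs, axis=1)


def train_attack_model(shadow_models, member_x, nonmember_x, seed=0):
    """Train the binary membership classifier on SMOTE-balanced attack features."""
    x = np.vstack([build_attack_features(shadow_models, member_x),
                   build_attack_features(shadow_models, nonmember_x)])
    y = np.concatenate([np.ones(len(member_x)), np.zeros(len(nonmember_x))])

    # Mitigate member/non-member imbalance before fitting the attack model.
    x_bal, y_bal = SMOTE(random_state=seed).fit_resample(x, y)

    x_tr, x_te, y_tr, y_te = train_test_split(x_bal, y_bal, test_size=0.2,
                                              stratify=y_bal, random_state=seed)
    attack = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                           random_state=seed).fit(x_tr, y_tr)
    auc = roc_auc_score(y_te, attack.predict_proba(x_te)[:, 1])
    return attack, auc
```

In this sketch the non-member side of the training set would be populated with discriminator-filtered pseudo-samples, so the attack model can be trained without access to real non-member data or to the target model's internals, consistent with the black-box setting described above.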