The proliferation of deep neural networks (DNNs) drives the need for collaborative data processing across distributed nodes in next-generation systems. This collaborative mode threatens the privacy of distributed data, motivating more reliable privacy-preserving machine learning (PPML) solutions. Functional encryption (FE) offers a new paradigm for PPML due to its unique advantages. Unfortunately, the privacy requirements of existing FE-based schemes impose a priori constraints on the permissible neural architectures, exposing a fundamental tension with model expressiveness. To close this gap, we design PSCD, a privacy-preserving DNN framework based on FE that mitigates structural constraints on the model by integrating three independent modules. Specifically, we first design a secure aggregation module (SAM) with FE to ensure the confidentiality of locally uploaded data. We then introduce the FM sketch to build a query control module (QCM) that limits the number of times the cloud server can query the ciphertext vectors. Finally, we develop a privacy-preserving training mechanism (PPTM) that incorporates Dropout to flexibly adjust the network structure while simultaneously enhancing model robustness. A formal security analysis proves that PSCD resists semi-honest attacks and collusion attacks. Experiments on real-world datasets demonstrate that PSCD achieves at least a 48.5% improvement in operational efficiency and a 38.9% reduction in communication overhead compared to benchmark PPML schemes, while maintaining model accuracy comparable to that of a plaintext DNN.
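To make the query-control idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of how an FM (Flajolet-Martin) sketch can approximately count the distinct queries issued by a server, so that a threshold on the estimate bounds how often ciphertext vectors may be queried. The class name `QuerySketch`, the use of SHA-256 as the hash, and the budget check are assumptions made purely for illustration.

```python
import hashlib


class QuerySketch:
    """Illustrative FM sketch: approximate distinct count of query identifiers."""

    PHI = 0.77351  # standard FM correction factor

    def __init__(self, num_bits: int = 32):
        self.num_bits = num_bits
        self.max_rho = 0  # largest observed number of trailing zero bits

    def _rho(self, item: str) -> int:
        # Hash the identifier and count trailing zeros of the digest.
        h = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
        h &= (1 << self.num_bits) - 1
        if h == 0:
            return self.num_bits
        rho = 0
        while h & 1 == 0:
            h >>= 1
            rho += 1
        return rho

    def record_query(self, query_id: str) -> None:
        # Update the sketch with one observed query identifier.
        self.max_rho = max(self.max_rho, self._rho(query_id))

    def estimate(self) -> float:
        # FM estimator: 2^max_rho / PHI approximates the distinct count.
        return (2 ** self.max_rho) / self.PHI

    def exceeds(self, limit: int) -> bool:
        # QCM-style check: deny further decryption requests past the budget.
        return self.estimate() > limit


# Example: record the server's queries; once the estimate passes the
# allowed budget, further functional-decryption requests would be refused.
sketch = QuerySketch()
for i in range(100):
    sketch.record_query(f"query-{i}")
print(sketch.estimate(), sketch.exceeds(50))
```

The sketch stores only a single small counter regardless of how many queries are made, which is why it is attractive for lightweight query accounting; the trade-off is that the count is approximate rather than exact.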