Federated learning (FL) has attracted considerable attention for its ability to train models by sharing local models rather than raw training data. Nevertheless, the shared models have been shown to still leak sensitive information about the training data, and a malicious aggregation server may return a manipulated global model. Although existing schemes have explored the verification problem in FL, most rely on bilinear pairing operations and homomorphic hash computations whose cost grows with the model dimension, incurring substantial computational overhead. Moreover, some schemes require multiple parties to jointly manage one or more sets of secret keys for privacy preservation and verification, leaving them vulnerable to collusion between certain clients and the servers. We therefore propose a privacy-preserving federated learning mechanism under a dual-server architecture. On the client side, it protects the privacy of local models through coded matrix computation; on the server side, the two servers collaboratively aggregate the local models. To verify the correctness of the aggregated model, we design a Model Verification Code (MVC) mechanism. By combining the MVC mechanism with the coded matrix computation, clients are not required to hold identical sets of secret keys during privacy preservation and verification, and the security requirements are satisfied even under a malicious server. Because the mechanism avoids complex cryptographic primitives, its computational overhead remains low. Extensive experiments on real datasets demonstrate that the proposed scheme is lightweight while preserving the validity and usability of the model.
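To make the dual-server workflow described above concrete, the following is a minimal Python sketch of the overall flow: each client masks its local model into two shares (here plain additive sharing stands in for the paper's coded matrix computation), each server aggregates the shares it receives, and a simple linear tag stands in for the Model Verification Code used to check the reconstructed global model. All names (split_model, client_mvc, probe) are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_model(w):
    """Split a local model vector into two shares, one per server,
    so that neither server alone learns the model (additive sharing
    as a stand-in for the coded matrix masking)."""
    mask = rng.standard_normal(w.shape)
    return w - mask, mask  # share_A + share_B == w

def client_mvc(w, probe):
    """Toy verification tag: inner product of the model with a random
    probe vector known to clients but not to the servers (a stand-in
    for the paper's MVC)."""
    return float(w @ probe)

# Setup: three clients, each holding a 4-dimensional "model".
dim, n_clients = 4, 3
models = [rng.standard_normal(dim) for _ in range(n_clients)]
probe = rng.standard_normal(dim)

# Clients: mask local models and compute verification tags.
shares_A, shares_B, tags = [], [], []
for w in models:
    a, b = split_model(w)
    shares_A.append(a)                 # uploaded to server A
    shares_B.append(b)                 # uploaded to server B
    tags.append(client_mvc(w, probe))  # verification tag

# Servers: each aggregates only its own shares.
agg_A = np.sum(shares_A, axis=0)
agg_B = np.sum(shares_B, axis=0)

# Clients: reconstruct the global model and verify it against the tags.
global_model = agg_A + agg_B
expected_tag = sum(tags)               # tags are linear, so they add up
assert np.isclose(global_model @ probe, expected_tag), "aggregation tampered"
print("verified global model:", global_model)
```

The check at the end relies only on the linearity of the tag, so it succeeds if and only if the returned model equals the true sum of the (unmasked) local models along the probe direction; the actual MVC mechanism in the paper is designed to make such tampering detectable under a malicious server without clients sharing identical secret keys.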