{"title":"联盟学习中的零知识证明可信模型聚合","authors":"Renwen Ma;Kai Hwang;Mo Li;Yiming Miao","doi":"10.1109/TPDS.2024.3455762","DOIUrl":null,"url":null,"abstract":"This paper proposes a new global model aggregation method based on using zero-knowledge federated learning (ZKFL). The purpose is to secure horizontal or P2P federated machine learning systems with shorter aggregation times, higher model accuracy, and lower system costs. We use a model parameter-sharing Chord overlay network among all client hosts. The overlay guarantees a trusted sharing of zero-knowledge proofs for aggregation integrity, even under malicious Byzantine attacks. We tested over popular datasets, Fashion-MNIST and CIFAR10, to prove the new system protection concept. Our benchmark experiments validate the claimed advantages of the ZKFL scheme in all objective functions. Our aggregation method can be applied to secure both rank-based and similarity-based aggregation schemes. For a large system with over 200 clients, our system takes only 3 seconds to yield high-precision global machine models under the ALIE attacks with the Fashion-MNIST dataset. We have achieved up to 85% model accuracy, compared to only 3%\n<inline-formula><tex-math>$\\sim$</tex-math></inline-formula>\n45% accuracy observed with federated schemes without protection. Moreover, our method demands a low memory overhead for handling zero-knowledge proofs as the system scales greatly to a larger number of client nodes.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":null,"pages":null},"PeriodicalIF":5.6000,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Trusted Model Aggregation With Zero-Knowledge Proofs in Federated Learning\",\"authors\":\"Renwen Ma;Kai Hwang;Mo Li;Yiming Miao\",\"doi\":\"10.1109/TPDS.2024.3455762\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes a new global model aggregation method based on using zero-knowledge federated learning (ZKFL). The purpose is to secure horizontal or P2P federated machine learning systems with shorter aggregation times, higher model accuracy, and lower system costs. We use a model parameter-sharing Chord overlay network among all client hosts. The overlay guarantees a trusted sharing of zero-knowledge proofs for aggregation integrity, even under malicious Byzantine attacks. We tested over popular datasets, Fashion-MNIST and CIFAR10, to prove the new system protection concept. Our benchmark experiments validate the claimed advantages of the ZKFL scheme in all objective functions. Our aggregation method can be applied to secure both rank-based and similarity-based aggregation schemes. For a large system with over 200 clients, our system takes only 3 seconds to yield high-precision global machine models under the ALIE attacks with the Fashion-MNIST dataset. We have achieved up to 85% model accuracy, compared to only 3%\\n<inline-formula><tex-math>$\\\\sim$</tex-math></inline-formula>\\n45% accuracy observed with federated schemes without protection. 
Moreover, our method demands a low memory overhead for handling zero-knowledge proofs as the system scales greatly to a larger number of client nodes.\",\"PeriodicalId\":13257,\"journal\":{\"name\":\"IEEE Transactions on Parallel and Distributed Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.6000,\"publicationDate\":\"2024-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Parallel and Distributed Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10669208/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Parallel and Distributed Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10669208/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Trusted Model Aggregation With Zero-Knowledge Proofs in Federated Learning
This paper proposes a new global model aggregation method based on zero-knowledge federated learning (ZKFL). The goal is to secure horizontal or peer-to-peer (P2P) federated machine learning systems while achieving shorter aggregation times, higher model accuracy, and lower system costs. We build a Chord overlay network among all client hosts for sharing model parameters. The overlay guarantees trusted sharing of zero-knowledge proofs for aggregation integrity, even under malicious Byzantine attacks. We tested the new protection concept on the popular Fashion-MNIST and CIFAR10 datasets. Our benchmark experiments validate the claimed advantages of the ZKFL scheme across all objective functions. Our aggregation method can be applied to secure both rank-based and similarity-based aggregation schemes. For a large system with over 200 clients, our system takes only 3 seconds to yield a high-precision global model under ALIE attacks on the Fashion-MNIST dataset. We achieve up to 85% model accuracy, compared to only 3% to 45% accuracy observed with unprotected federated schemes. Moreover, our method incurs low memory overhead for handling zero-knowledge proofs as the system scales to a much larger number of client nodes.
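The abstract does not spell out the aggregation rule that ZKFL protects, so the following is only an illustrative sketch of a generic similarity-based, Byzantine-robust selection (in the spirit of Krum) that such a scheme could be applied to; all function names and parameters here are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a generic similarity-based Byzantine-robust
# selection rule (Krum-style). Not the paper's ZKFL protocol.
import numpy as np

def krum_select(updates: np.ndarray, n_byzantine: int) -> int:
    """Return the index of the client update closest to its nearest neighbors.

    updates: (n_clients, n_params) array of flattened model updates.
    n_byzantine: assumed upper bound on the number of malicious clients.
    """
    n = len(updates)
    # Pairwise squared Euclidean distances between client updates.
    dists = np.sum((updates[:, None, :] - updates[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        # Sum the distances to the n - n_byzantine - 2 closest other updates.
        closest = np.sort(np.delete(dists[i], i))[: n - n_byzantine - 2]
        scores.append(closest.sum())
    return int(np.argmin(scores))

# Example: 6 honest clients plus 2 clients submitting poisoned updates.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(6, 4))
poisoned = rng.normal(5.0, 0.1, size=(2, 4))
all_updates = np.vstack([honest, poisoned])
print("selected client:", krum_select(all_updates, n_byzantine=2))
```

A rank-based variant would replace the distance scoring with a coordinate-wise median or trimmed mean over the client updates; in either case, the zero-knowledge proofs described in the abstract would attest that the aggregation was carried out on the submitted updates without revealing them.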
About the Journal:
IEEE Transactions on Parallel and Distributed Systems (TPDS) is published monthly. It publishes a range of papers, comments on previously published papers, and survey articles that deal with the parallel and distributed systems research areas of current importance to our readers. Particular areas of interest include, but are not limited to:
a) Parallel and distributed algorithms, focusing on topics such as: models of computation; numerical, combinatorial, and data-intensive parallel algorithms; scalability of algorithms and data structures for parallel and distributed systems; communication and synchronization protocols; network algorithms; scheduling; and load balancing.
b) Applications of parallel and distributed computing, including computational and data-enabled science and engineering, big data applications, parallel crowd sourcing, large-scale social network analysis, management of big data, cloud and grid computing, scientific and biomedical applications, mobile computing, and cyber-physical systems.
c) Parallel and distributed architectures, including architectures for instruction-level and thread-level parallelism; design, analysis, implementation, fault resilience and performance measurements of multiple-processor systems; multicore processors, heterogeneous many-core systems; petascale and exascale systems designs; novel big data architectures; special purpose architectures, including graphics processors, signal processors, network processors, media accelerators, and other special purpose processors and accelerators; impact of technology on architecture; network and interconnect architectures; parallel I/O and storage systems; architecture of the memory hierarchy; power-efficient and green computing architectures; dependable architectures; and performance modeling and evaluation.
d) Parallel and distributed software, including parallel and multicore programming languages and compilers, runtime systems, operating systems, Internet computing and web services, resource management including green computing, middleware for grids, clouds, and data centers, libraries, performance modeling and evaluation, parallel programming paradigms, and programming environments and tools.