{"title":"OpenVFL: A Vertical Federated Learning Framework With Stronger Privacy-Preserving","authors":"Yunbo Yang;Xiang Chen;Yuhao Pan;Jiachen Shen;Zhenfu Cao;Xiaolei Dong;Xiaoguo Li;Jianfei Sun;Guomin Yang;Robert Deng","doi":"10.1109/TIFS.2024.3477924","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) allows multiple parties, each holding a dataset, to jointly train a model without leaking any information about their own datasets. In this paper, we focus on vertical FL (VFL). In VFL, each party holds a dataset with the same sample space and different feature spaces. All parties should first agree on the training dataset in the ID alignment phase. However, existing works may leak some information about the training dataset and cause privacy leakage. To address this issue, this paper proposes OpenVFL, a vertical federated learning framework with stronger privacy-preserving. We first propose NCLPSI, a new variant of labeled PSI, in which both parties can invoke this protocol to get the encrypted training dataset without leaking any additional information. After that, both parties train the model over the encrypted training dataset. We also formally analyze the security of OpenVFL. In addition, the experimental results show that OpenVFL achieves the best trade-offs between accuracy, performance, and privacy among the most state-of-the-art works.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"19 ","pages":"9670-9681"},"PeriodicalIF":6.3000,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10713409/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
引用次数: 0
Abstract
Federated learning (FL) allows multiple parties, each holding a dataset, to jointly train a model without leaking any information about their own datasets. In this paper, we focus on vertical FL (VFL). In VFL, each party holds a dataset with the same sample space but a different feature space. All parties must first agree on the training dataset in the ID alignment phase. However, existing works may leak information about the training dataset during this phase, causing privacy leakage. To address this issue, this paper proposes OpenVFL, a vertical federated learning framework with stronger privacy preservation. We first propose NCLPSI, a new variant of labeled PSI, which both parties can invoke to obtain the encrypted training dataset without leaking any additional information. Both parties then train the model over the encrypted training dataset. We also formally analyze the security of OpenVFL. In addition, the experimental results show that OpenVFL achieves the best trade-off among accuracy, performance, and privacy compared with state-of-the-art works.
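To make the VFL setting concrete, below is a minimal, illustrative Python sketch of the ID alignment step described above. This is not the paper's NCLPSI protocol: it performs a plaintext set intersection purely to show what "agreeing on the training dataset" means across parties that share a sample space but hold different feature spaces. The names (`naive_id_alignment`, the toy user IDs) are hypothetical; revealing the intersection in the clear is exactly the kind of leakage OpenVFL's encrypted approach is designed to avoid.

```python
# Toy sketch of vertical-FL ID alignment (NOT the paper's NCLPSI protocol).
# Each party maps sample IDs to its own feature vector; alignment finds the
# common sample IDs and joins the feature columns for joint training.

from typing import Dict, List, Tuple


def naive_id_alignment(
    party_a: Dict[str, List[float]],  # sample_id -> features held by party A
    party_b: Dict[str, List[float]],  # sample_id -> features held by party B
) -> Tuple[List[str], List[List[float]]]:
    """Return the shared sample IDs and the concatenated (A || B) feature rows."""
    # Plaintext intersection: both parties learn the intersection directly,
    # which is the leakage a privacy-preserving labeled-PSI variant avoids.
    shared_ids = sorted(set(party_a) & set(party_b))
    joined_rows = [party_a[i] + party_b[i] for i in shared_ids]
    return shared_ids, joined_rows


if __name__ == "__main__":
    # Hypothetical data: same sample space (user IDs), different feature spaces.
    a = {"u1": [0.2, 0.9], "u2": [0.5, 0.1], "u3": [0.7, 0.3]}
    b = {"u2": [1.0], "u3": [0.0], "u4": [1.0]}
    ids, rows = naive_id_alignment(a, b)
    print(ids)   # ['u2', 'u3']
    print(rows)  # [[0.5, 0.1, 1.0], [0.7, 0.3, 0.0]]
```

In OpenVFL's design, as summarized in the abstract, this alignment instead yields an encrypted training dataset via NCLPSI, so neither party learns additional information about the other's samples.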
Journal Introduction
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.