OpenVFL: A Vertical Federated Learning Framework With Stronger Privacy-Preserving

IF 6.3 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | IEEE Transactions on Information Forensics and Security | Vol. 19, pp. 9670-9681 | Pub Date: 2024-10-10 | DOI: 10.1109/TIFS.2024.3477924
Yunbo Yang;Xiang Chen;Yuhao Pan;Jiachen Shen;Zhenfu Cao;Xiaolei Dong;Xiaoguo Li;Jianfei Sun;Guomin Yang;Robert Deng
{"title":"OpenVFL:具有更强隐私保护能力的垂直联合学习框架","authors":"Yunbo Yang;Xiang Chen;Yuhao Pan;Jiachen Shen;Zhenfu Cao;Xiaolei Dong;Xiaoguo Li;Jianfei Sun;Guomin Yang;Robert Deng","doi":"10.1109/TIFS.2024.3477924","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) allows multiple parties, each holding a dataset, to jointly train a model without leaking any information about their own datasets. In this paper, we focus on vertical FL (VFL). In VFL, each party holds a dataset with the same sample space and different feature spaces. All parties should first agree on the training dataset in the ID alignment phase. However, existing works may leak some information about the training dataset and cause privacy leakage. To address this issue, this paper proposes OpenVFL, a vertical federated learning framework with stronger privacy-preserving. We first propose NCLPSI, a new variant of labeled PSI, in which both parties can invoke this protocol to get the encrypted training dataset without leaking any additional information. After that, both parties train the model over the encrypted training dataset. We also formally analyze the security of OpenVFL. In addition, the experimental results show that OpenVFL achieves the best trade-offs between accuracy, performance, and privacy among the most state-of-the-art works.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"19 ","pages":"9670-9681"},"PeriodicalIF":6.3000,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"OpenVFL: A Vertical Federated Learning Framework With Stronger Privacy-Preserving\",\"authors\":\"Yunbo Yang;Xiang Chen;Yuhao Pan;Jiachen Shen;Zhenfu Cao;Xiaolei Dong;Xiaoguo Li;Jianfei Sun;Guomin Yang;Robert Deng\",\"doi\":\"10.1109/TIFS.2024.3477924\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) allows multiple parties, each holding a dataset, to jointly train a model without leaking any information about their own datasets. In this paper, we focus on vertical FL (VFL). In VFL, each party holds a dataset with the same sample space and different feature spaces. All parties should first agree on the training dataset in the ID alignment phase. However, existing works may leak some information about the training dataset and cause privacy leakage. To address this issue, this paper proposes OpenVFL, a vertical federated learning framework with stronger privacy-preserving. We first propose NCLPSI, a new variant of labeled PSI, in which both parties can invoke this protocol to get the encrypted training dataset without leaking any additional information. After that, both parties train the model over the encrypted training dataset. We also formally analyze the security of OpenVFL. 
In addition, the experimental results show that OpenVFL achieves the best trade-offs between accuracy, performance, and privacy among the most state-of-the-art works.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"19 \",\"pages\":\"9670-9681\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2024-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10713409/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10713409/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Federated learning (FL) allows multiple parties, each holding a dataset, to jointly train a model without leaking any information about their own datasets. In this paper, we focus on vertical FL (VFL). In VFL, each party holds a dataset with the same sample space and different feature spaces. All parties should first agree on the training dataset in the ID alignment phase. However, existing works may leak some information about the training dataset and cause privacy leakage. To address this issue, this paper proposes OpenVFL, a vertical federated learning framework with stronger privacy preservation. We first propose NCLPSI, a new variant of labeled PSI, which both parties can invoke to obtain the encrypted training dataset without leaking any additional information. After that, both parties train the model over the encrypted training dataset. We also formally analyze the security of OpenVFL. In addition, the experimental results show that OpenVFL achieves the best trade-offs between accuracy, performance, and privacy among state-of-the-art works.
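To make the VFL setting concrete, the minimal Python sketch below illustrates the kind of ID alignment the abstract refers to: two parties hold overlapping sample IDs but disjoint feature sets, and they align on the common samples before joint training. The toy party data, the `hash_id` helper, and the shared salt are hypothetical illustrations, not part of OpenVFL; unlike this naive salted-hash intersection, the paper's NCLPSI variant returns the aligned records in encrypted form so that neither party learns the intersection or any additional information.

```python
# Illustrative sketch only: naive hashed-ID alignment between two VFL parties.
# This is NOT the NCLPSI protocol from the paper; a real labeled PSI keeps the
# aligned records encrypted instead of revealing the intersection in plaintext.
import hashlib


def hash_id(sample_id: str, salt: str = "shared-public-salt") -> str:
    """Hash a sample ID with a salt both parties agreed on (assumed setup)."""
    return hashlib.sha256((salt + sample_id).encode()).hexdigest()


# Party A and Party B each hold the same sample space (user IDs)
# but different feature spaces (different feature vectors per user).
party_a = {"u1": [0.2, 1.5], "u2": [0.9, 0.1], "u3": [0.4, 0.7]}
party_b = {"u2": [3.1], "u3": [2.2], "u4": [5.0]}

# Each party exposes only hashed IDs to the other side.
hashed_a = {hash_id(i): i for i in party_a}
hashed_b = {hash_id(i): i for i in party_b}

# ID alignment: the common sample space used as the joint training dataset.
common_hashes = hashed_a.keys() & hashed_b.keys()
aligned_rows = [
    (party_a[hashed_a[h]], party_b[hashed_b[h]]) for h in sorted(common_hashes)
]
print(f"{len(aligned_rows)} aligned samples")  # feature vectors stay with their owners
```

In this toy version both parties learn exactly which IDs overlap, which is the kind of training-dataset leakage the abstract points out in existing work; NCLPSI is designed so that each party only obtains an encrypted training dataset and the model is then trained over that encrypted data.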
Source journal
IEEE Transactions on Information Forensics and Security (Engineering & Technology - Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles published: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest articles in this journal
Attackers Are Not the Same! Unveiling the Impact of Feature Distribution on Label Inference Attacks
Backdoor Online Tracing With Evolving Graphs
LHADRO: A Robust Control Framework for Autonomous Vehicles Under Cyber-Physical Attacks
Towards Mobile Palmprint Recognition via Multi-view Hierarchical Graph Learning
Succinct Hash-based Arbitrary-Range Proofs