VAEFL: Integrating Variational Autoencoders for Privacy Preservation and Performance Retention in Federated Learning

Zhixin Li, Yicun Liu, Jiale Li, Guangnan Ye, Hongfeng Chai, Zhihui Lu, Jie Wu
{"title":"VAEFL: Integrating Variational Autoencoders for Privacy Preservation and Performance Retention in Federated Learning","authors":"Zhixin Li, Yicun Liu, Jiale Li, Guangnan Ye, Hongfeng Chai, Zhihui Lu, Jie Wu","doi":"10.1051/sands/2024005","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) heralds a paradigm shift in the training of artificial intelligence (AI) models by fostering collaborative model training while safeguarding client data privacy. In sectors where data sensitivity and AI model security are of paramount importance, such as fintech and biomedicine, maintaining the utility of models without compromising privacy is crucial with the growing application of artificial intelligence technologies. Therefore, the adoption of FL is attracting significant attention. However, traditional Federated Learning methods are vulnerable to Deep Leakage from Gradients (DLG) attacks, and typical defensive strategies often result in excessive computational costs or substantial decreases in model accuracy. To navigate these challenges, this research introduces VAEFL, an innovative FL framework that incorporates Variational Autoencoders (VAEs) to bolster privacy protection without undermining the predictive prowess of the models. VAEFL strategically partitions the model into a private encoder and a public decoder. The private encoder, remaining local, transmutes sensitive data into a latent space fortified for privacy, while the public decoder and classifier, through collaborative training across clients, learn to derive precise predictions from the encoded data. This bifurcation ensures that sensitive data attributes are not disclosed, circumventing gradient leakage attacks and simultaneously allowing the global model to benefit from the diverse knowledge of client datasets. Comprehensive experiments demonstrate that VAEFL not only surpasses standard FL benchmarks in privacy preservation but also maintains competitive performance in predictive tasks. VAEFL thus establishes a novel equilibrium between data privacy and model utility, offering a secure and efficient federated learning approach for the sensitive application of FL in the financial domain.","PeriodicalId":513337,"journal":{"name":"Security and Safety","volume":"10 4","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Security and Safety","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1051/sands/2024005","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Federated Learning (FL) heralds a paradigm shift in the training of artificial intelligence (AI) models by fostering collaborative model training while safeguarding client data privacy. In sectors where data sensitivity and AI model security are of paramount importance, such as fintech and biomedicine, maintaining model utility without compromising privacy is crucial as AI technologies see ever wider application, so the adoption of FL is attracting significant attention. However, traditional Federated Learning methods are vulnerable to Deep Leakage from Gradients (DLG) attacks, and typical defensive strategies often incur excessive computational cost or substantial loss of model accuracy. To navigate these challenges, this research introduces VAEFL, an innovative FL framework that incorporates Variational Autoencoders (VAEs) to bolster privacy protection without undermining the predictive power of the models. VAEFL strategically partitions the model into a private encoder and a public decoder. The private encoder, which remains local, maps sensitive data into a privacy-preserving latent space, while the public decoder and classifier, trained collaboratively across clients, learn to derive accurate predictions from the encoded data. This bifurcation ensures that sensitive data attributes are not disclosed, circumventing gradient leakage attacks while still allowing the global model to benefit from the diverse knowledge in client datasets. Comprehensive experiments demonstrate that VAEFL not only surpasses standard FL baselines in privacy preservation but also maintains competitive performance on predictive tasks. VAEFL thus establishes a novel equilibrium between data privacy and model utility, offering a secure and efficient federated learning approach for sensitive applications of FL in the financial domain.
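The abstract's core design is the split between a VAE encoder that never leaves the client and a decoder/classifier head that is the only part aggregated across clients. The following is a minimal PyTorch sketch of that split under stated assumptions: the class names (PrivateEncoder, PublicHead), layer sizes, and the FedAvg-style averaging are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of the VAEFL-style model split described in the abstract.
# Class names, layer sizes, and the FedAvg-style aggregation are assumptions
# for illustration, not the paper's exact architecture.
import copy
import torch
import torch.nn as nn

class PrivateEncoder(nn.Module):
    """VAE encoder that stays on the client; its parameters are never shared."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class PublicHead(nn.Module):
    """Decoder + classifier trained collaboratively; only this part is aggregated."""
    def __init__(self, latent_dim=32, in_dim=784, num_classes=10):
        super().__init__()
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.classifier = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                        nn.Linear(128, num_classes))

    def forward(self, z):
        return self.decoder(z), self.classifier(z)

def fedavg(public_states):
    """Average the public heads' parameters; private encoders are never sent."""
    avg = copy.deepcopy(public_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in public_states]).mean(dim=0)
    return avg

if __name__ == "__main__":
    clients = [(PrivateEncoder(), PublicHead()) for _ in range(3)]
    x = torch.rand(8, 784)                      # dummy local batch
    for enc, head in clients:
        z, mu, logvar = enc(x)                  # raw data is only seen locally
        recon, logits = head(z)                 # shared head works on latent codes
    # Server round: aggregate only the public decoder/classifier weights.
    global_head = fedavg([head.state_dict() for _, head in clients])
    for _, head in clients:
        head.load_state_dict(global_head)
```

In this sketch the server only ever sees PublicHead parameters, which operate on latent codes rather than raw inputs, mirroring the abstract's claim that gradient leakage on the shared portion does not expose sensitive data attributes.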