Privacy Preserving Machine Learning With Federated Personalized Learning in Artificially Generated Environment

Md. Tanzib Hosain, Mushfiqur Rahman Abir, Md. Yeasin Rahat, M. F. Mridha, Saddam Hossain Mukta
{"title":"Privacy Preserving Machine Learning With Federated Personalized Learning in Artificially Generated Environment","authors":"Md. Tanzib Hosain;Mushfiqur Rahman Abir;Md. Yeasin Rahat;M. F. Mridha;Saddam Hossain Mukta","doi":"10.1109/OJCS.2024.3466859","DOIUrl":null,"url":null,"abstract":"The widespread adoption of Privacy Preserving Machine Learning (PPML) with Federated Personalized Learning (FPL) has been driven by significant advances in intelligent systems research. This progress has raised concerns about data privacy in the artificially generated environment, leading to growing awareness of the need for privacy-preserving solutions. There has been a seismic shift in interest towards Federated Personalized Learning (FPL), which is the leading paradigm for training Machine Learning (ML) models on decentralized data silos while maintaining data privacy. This research article presents a comprehensive analysis of a cutting-edge approach to personalize ML models while preserving privacy, achieved through the innovative framework of Privacy Preserving Machine Learning with Federated Personalized Learning (PPMLFPL). Regarding the increasing concerns about data privacy in virtual environments, this study evaluated the effectiveness of PPMLFPL in addressing the critical balance between personalized model refinement and maintaining the confidentiality of individual user data. According to our results based on various effectiveness metrics, the use of the Adaptive Personalized Cross-Silo Federated Learning with Homomorphic Encryption (APPLE+HE) algorithm for privacy-preserving machine learning tasks in federated personalized learning settings within the artificially generated environment is strongly recommended, obtaining an accuracy of 99.34%.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"5 ","pages":"694-704"},"PeriodicalIF":0.0000,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10691662","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10691662/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The widespread adoption of Privacy Preserving Machine Learning (PPML) with Federated Personalized Learning (FPL) has been driven by significant advances in intelligent systems research. This progress has heightened concerns about data privacy in the artificially generated environment, raising awareness of the need for privacy-preserving solutions. Interest has shifted decisively towards FPL, now the leading paradigm for training Machine Learning (ML) models on decentralized data silos while maintaining data privacy. This article presents a comprehensive analysis of a cutting-edge approach to personalizing ML models while preserving privacy, achieved through the framework of Privacy Preserving Machine Learning with Federated Personalized Learning (PPMLFPL). Given the increasing concerns about data privacy in virtual environments, this study evaluates the effectiveness of PPMLFPL in striking the critical balance between personalized model refinement and the confidentiality of individual user data. Based on results across several effectiveness metrics, the Adaptive Personalized Cross-Silo Federated Learning with Homomorphic Encryption (APPLE+HE) algorithm is strongly recommended for privacy-preserving machine learning tasks in federated personalized learning settings within the artificially generated environment, achieving an accuracy of 99.34%.
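
To make the core idea behind APPLE+HE concrete, the following is a minimal sketch (not the paper's implementation) of how homomorphic encryption can protect client updates during aggregation in a cross-silo federated round: each silo encrypts its locally computed update, the server sums the ciphertexts without decrypting them, and only the aggregated result is decrypted. The additively homomorphic Paillier scheme via the `phe` library is used here purely as an assumption for illustration; the abstract does not specify the scheme, library, or the APPLE personalization details, and names such as local_update and NUM_CLIENTS are hypothetical.

```python
# Sketch: HE-protected aggregation of client updates in one cross-silo FL round.
# Assumes the additively homomorphic Paillier scheme from the `phe` package
# (pip install phe numpy); this is illustrative, not the paper's APPLE+HE code.
import numpy as np
from phe import paillier

NUM_CLIENTS = 3   # hypothetical number of silos
MODEL_DIM = 4     # hypothetical model (update) dimension

# One key pair shared by the clients; the aggregation server never holds the private key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def local_update(rng):
    """Stand-in for a client's locally computed (personalized) model update."""
    return rng.normal(size=MODEL_DIM)

def encrypt_update(update):
    """Client side: encrypt each coordinate before sending it to the server."""
    return [public_key.encrypt(float(x)) for x in update]

def aggregate_encrypted(encrypted_updates):
    """Server side: sum ciphertexts coordinate-wise without ever decrypting them."""
    agg = encrypted_updates[0]
    for enc in encrypted_updates[1:]:
        agg = [a + b for a, b in zip(agg, enc)]
    return agg

rng = np.random.default_rng(0)
plain_updates = [local_update(rng) for _ in range(NUM_CLIENTS)]
encrypted = [encrypt_update(u) for u in plain_updates]

# The server aggregates under encryption; only the averaged result is decrypted.
encrypted_sum = aggregate_encrypted(encrypted)
average = [private_key.decrypt(c) / NUM_CLIENTS for c in encrypted_sum]

print("plain average:    ", np.mean(plain_updates, axis=0))
print("decrypted average:", np.round(average, 6))
```

Because Paillier is only additively homomorphic, averaging is done by summing ciphertexts and dividing after decryption; the server learns nothing about any individual silo's update, which is the privacy property the PPMLFPL setting relies on.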