Federated Analytics With Data Augmentation in Domain Generalization Toward Future Networks

Xunzheng Zhang;Juan Marcelo Parra-Ullauri;Shadi Moazzeni;Xenofon Vasilakos;Reza Nejabati;Dimitra Simeonidou
{"title":"面向未来网络的领域泛化中的联合分析与数据增强","authors":"Xunzheng Zhang;Juan Marcelo Parra-Ullauri;Shadi Moazzeni;Xenofon Vasilakos;Reza Nejabati;Dimitra Simeonidou","doi":"10.1109/TMLCN.2024.3393892","DOIUrl":null,"url":null,"abstract":"Federated Domain Generalization (FDG) aims to train a global model that generalizes well to new clients in a privacy-conscious manner, even when domain shifts are encountered. The increasing concerns of knowledge generalization and data privacy also challenge the traditional gather-and-analyze paradigm in networks. Recent investigations mainly focus on aggregation optimization and domain-invariant representations. However, without directly considering the data augmentation and leveraging the knowledge among existing domains, the domain-only data cannot guarantee the generalization ability of the FDG model when testing on the unseen domain. To overcome the problem, this paper proposes a distributed data augmentation method which combines Generative Adversarial Networks (GANs) and Federated Analytics (FA) to enhance the generalization ability of the trained FDG model, called FA-FDG. First, FA-FDG integrates GAN data generators from each Federated Learning (FL) client. Second, an evaluation index called generalization ability of domain (GAD) is proposed in the FA server. Then, the targeted data augmentation is implemented in each FL client with the GAD index and the integrated data generators. Extensive experiments on several data sets have shown the effectiveness of FA-FDG. Specifically, the accuracy of the FDG model improves up to 5.12% in classification problems, and the R-squared index of the FDG model advances up to 0.22 in the regression problem.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"560-579"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10508396","citationCount":"0","resultStr":"{\"title\":\"Federated Analytics With Data Augmentation in Domain Generalization Toward Future Networks\",\"authors\":\"Xunzheng Zhang;Juan Marcelo Parra-Ullauri;Shadi Moazzeni;Xenofon Vasilakos;Reza Nejabati;Dimitra Simeonidou\",\"doi\":\"10.1109/TMLCN.2024.3393892\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated Domain Generalization (FDG) aims to train a global model that generalizes well to new clients in a privacy-conscious manner, even when domain shifts are encountered. The increasing concerns of knowledge generalization and data privacy also challenge the traditional gather-and-analyze paradigm in networks. Recent investigations mainly focus on aggregation optimization and domain-invariant representations. However, without directly considering the data augmentation and leveraging the knowledge among existing domains, the domain-only data cannot guarantee the generalization ability of the FDG model when testing on the unseen domain. To overcome the problem, this paper proposes a distributed data augmentation method which combines Generative Adversarial Networks (GANs) and Federated Analytics (FA) to enhance the generalization ability of the trained FDG model, called FA-FDG. First, FA-FDG integrates GAN data generators from each Federated Learning (FL) client. Second, an evaluation index called generalization ability of domain (GAD) is proposed in the FA server. 
Then, the targeted data augmentation is implemented in each FL client with the GAD index and the integrated data generators. Extensive experiments on several data sets have shown the effectiveness of FA-FDG. Specifically, the accuracy of the FDG model improves up to 5.12% in classification problems, and the R-squared index of the FDG model advances up to 0.22 in the regression problem.\",\"PeriodicalId\":100641,\"journal\":{\"name\":\"IEEE Transactions on Machine Learning in Communications and Networking\",\"volume\":\"2 \",\"pages\":\"560-579\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10508396\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Machine Learning in Communications and Networking\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10508396/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Machine Learning in Communications and Networking","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10508396/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Federated Domain Generalization (FDG) aims to train a global model that generalizes well to new clients in a privacy-conscious manner, even when domain shifts are encountered. Growing concerns about knowledge generalization and data privacy also challenge the traditional gather-and-analyze paradigm in networks. Recent investigations mainly focus on aggregation optimization and domain-invariant representations. However, without directly considering data augmentation and leveraging knowledge across existing domains, domain-only data cannot guarantee the generalization ability of the FDG model when it is tested on an unseen domain. To overcome this problem, this paper proposes FA-FDG, a distributed data augmentation method that combines Generative Adversarial Networks (GANs) and Federated Analytics (FA) to enhance the generalization ability of the trained FDG model. First, FA-FDG integrates GAN data generators from each Federated Learning (FL) client. Second, an evaluation index called generalization ability of domain (GAD) is computed in the FA server. Then, targeted data augmentation is implemented in each FL client using the GAD index and the integrated data generators. Extensive experiments on several data sets show the effectiveness of FA-FDG. Specifically, the accuracy of the FDG model improves by up to 5.12% on classification problems, and the R-squared index of the FDG model improves by up to 0.22 on the regression problem.
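The abstract describes a three-step workflow: integrate per-client GAN generators, score each domain with a GAD index on the FA server, then let each FL client draw targeted synthetic samples from the other domains' generators. The sketch below is only an illustration of how such a pipeline could be wired together; the paper's actual GAD formula and GAN architectures are not given on this page, so the `Generator` class, the `gad_proxy` scoring function, and all dimensions and hyperparameters are hypothetical placeholders, not the authors' implementation.

```python
# Minimal, illustrative sketch of the FA-FDG workflow described in the abstract.
# The GAD formula and GAN architectures are NOT given in this page, so the
# generator, the GAD proxy, and all hyperparameters below are hypothetical
# placeholders introduced for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Generator(nn.Module):
    """Toy per-client data generator (stands in for each client's trained GAN)."""
    def __init__(self, noise_dim=8, data_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
        self.noise_dim = noise_dim

    def forward(self, n_samples):
        z = torch.randn(n_samples, self.noise_dim)
        return self.net(z)

def gad_proxy(global_model, synthetic_x, synthetic_y):
    """Placeholder for the paper's GAD index: here, the loss of the current
    global model on a domain's synthetic data (higher loss = the domain is
    harder to generalize to, so it should be sampled more)."""
    with torch.no_grad():
        logits = global_model(synthetic_x)
        return nn.functional.cross_entropy(logits, synthetic_y).item()

# --- Setup: three FL clients, each contributing a (dummy) trained generator ---
n_clients, data_dim, n_classes = 3, 16, 4
generators = [Generator(data_dim=data_dim) for _ in range(n_clients)]  # step 1: integrate generators
global_model = nn.Linear(data_dim, n_classes)                          # the FDG model being trained

# --- Step 2 (FA server): score each domain with the GAD proxy ---
gad_scores = []
for g in generators:
    x_syn = g(64)
    y_syn = torch.randint(0, n_classes, (64,))   # labels are illustrative only
    gad_scores.append(gad_proxy(global_model, x_syn, y_syn))
weights = torch.softmax(torch.tensor(gad_scores), dim=0)  # per-domain sampling weights

# --- Step 3 (each FL client): targeted augmentation from other domains' generators ---
def augment_locally(client_id, local_x, budget=128):
    """Draw synthetic samples from the other clients' generators,
    in proportion to their GAD-based weights."""
    extra = []
    for j, g in enumerate(generators):
        if j == client_id:
            continue
        n_j = int(budget * weights[j].item())
        if n_j > 0:
            extra.append(g(n_j).detach())
    return torch.cat([local_x] + extra, dim=0)

local_x = torch.randn(32, data_dim)              # client 0's (dummy) local data
augmented = augment_locally(0, local_x)
print(f"GAD weights: {weights.tolist()}")
print(f"client 0 data: {local_x.shape[0]} local -> {augmented.shape[0]} after augmentation")
```

The softmax weighting here simply makes the "targeted" augmentation concrete: domains on which the current global model performs poorly receive a larger share of the synthetic-sample budget. In the full method, the GAD index and generator training would follow the paper's own definitions.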