Federated Feature Augmentation and Alignment

Tianfei Zhou;Ye Yuan;Binglu Wang;Ender Konukoglu
DOI: 10.1109/TPAMI.2024.3457751
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 12, pp. 11119-11135
Published: 2024-09-16
https://ieeexplore.ieee.org/document/10680999/

Abstract

Federated learning is a distributed paradigm that allows multiple parties to collaboratively train deep learning models without direct exchange of raw data. Nevertheless, the inherent non-independent and identically distributed (non-i.i.d.) nature of data distribution among clients results in significant degradation of the acquired model. The primary goal of this study is to develop a robust federated learning algorithm to address feature shift in clients’ samples, potentially arising from a range of factors such as acquisition discrepancies in medical imaging. To reach this goal, we first propose federated feature augmentation (FedFA$^{l}$), a novel feature augmentation technique tailored for federated learning. FedFA$^{l}$ is based on a crucial insight that each client's data distribution can be characterized by first-/second-order statistics (a.k.a. mean and standard deviation) of latent features; and it is feasible to manipulate these local statistics globally, i.e., based on information in the entire federation, to let clients have a better sense of the global distribution across clients. Grounded on this insight, we propose to augment each local feature statistic based on a normal distribution, wherein the mean corresponds to the original statistic, and the variance defines the augmentation scope. Central to FedFA$^{l}$ is the determination of a meaningful Gaussian variance, which is accomplished by taking into account not only biased data of each individual client, but also underlying feature statistics represented by all participating clients. Beyond consideration of low-order statistics in FedFA$^{l}$, we propose a federated feature alignment component (FedFA$^{h}$) that exploits higher-order feature statistics to gain a more detailed understanding of local feature distribution and enables explicit alignment of augmented features in different clients to promote more consistent feature learning.
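The statistic-level augmentation described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name, the per-channel variance parameters `var_mu`/`var_sigma` (which in FedFA$^{l}$ would be estimated from statistics across the whole federation), and the tensor layout are all assumptions made for illustration.

```python
import numpy as np

def augment_feature_statistics(x, var_mu, var_sigma, rng):
    """Hypothetical sketch: perturb per-channel feature statistics.

    x         : latent feature batch, shape (N, C, H, W).
    var_mu    : per-channel Gaussian variance for the mean, shape (C,).
    var_sigma : per-channel Gaussian variance for the std, shape (C,).
    In the paper these variances (the "augmentation scope") are derived
    from federation-wide information; here they are simply given.
    """
    mu = x.mean(axis=(2, 3), keepdims=True)             # (N, C, 1, 1)
    sigma = x.std(axis=(2, 3), keepdims=True) + 1e-6    # (N, C, 1, 1)

    # Sample augmented statistics from Gaussians centred at the originals,
    # with the supplied variances defining the augmentation scope.
    eps_mu = rng.standard_normal(mu.shape) * np.sqrt(var_mu)[None, :, None, None]
    eps_sigma = rng.standard_normal(sigma.shape) * np.sqrt(var_sigma)[None, :, None, None]
    mu_aug = mu + eps_mu
    sigma_aug = sigma + eps_sigma

    # Normalise with the original statistics, re-style with augmented ones.
    return sigma_aug * (x - mu) / sigma + mu_aug

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4, 4))
y = augment_feature_statistics(x, np.full(3, 0.1), np.full(3, 0.1), rng)
print(y.shape)
```

With zero variances the transform reduces to the identity, which is a convenient sanity check: the augmentation only departs from the original features to the extent that the Gaussian scope allows.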
Combining FedFA$^{l}$ and FedFA$^{h}$ yields our full approach FedFA$+$. FedFA$+$ is non-parametric, incurs negligible additional communication costs, and can be seamlessly incorporated into popular CNN and Transformer architectures. We offer rigorous theoretical analysis, as well as extensive empirical justifications, to demonstrate the effectiveness of the algorithm.
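The higher-order alignment idea in FedFA$^{h}$ can likewise be sketched in NumPy: compute per-channel higher-order central moments of a client's features and penalise their deviation from a federation-wide average. The choice of moments (skewness- and kurtosis-like orders 3 and 4), the normalisation, and the squared-error penalty are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def central_moments(x, orders=(3, 4)):
    """Per-channel standardised central moments of features (N, C, H, W).

    Returns an array of shape (len(orders), C); orders 3 and 4 capture
    skewness- and kurtosis-like structure beyond mean and std.
    """
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    sigma = x.std(axis=(0, 2, 3), keepdims=True) + 1e-6
    z = (x - mu) / sigma
    return np.stack([(z ** k).mean(axis=(0, 2, 3)) for k in orders])

def alignment_penalty(local_moments, global_moments):
    """Squared-error penalty pulling a client's moments toward the
    federation average; added to the local training loss in this sketch."""
    return float(((local_moments - global_moments) ** 2).mean())

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 3, 5, 5))
m = central_moments(feats)
print(m.shape)
```

In a federated round, each client would report its moment estimates alongside model updates, the server would average them, and the averaged moments would serve as the alignment target in the next round; since only a few scalars per channel are exchanged, the communication overhead is negligible, consistent with the abstract's claim.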