FedMEKT: Distillation-based embedding knowledge transfer for multimodal federated learning.

IF 6.0 · CAS Region 1 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Neural Networks · Pub Date: 2025-03-01 · Epub Date: 2024-12-09 · DOI: 10.1016/j.neunet.2024.107017
Huy Q Le, Minh N H Nguyen, Chu Myaet Thwal, Yu Qiao, Chaoning Zhang, Choong Seon Hong
Cited by: 0

Abstract


Federated learning (FL) enables a decentralized machine learning paradigm for multiple clients to collaboratively train a generalized global model without sharing their private data. Most existing works have focused on designing FL systems for unimodal data, limiting their potential to exploit valuable multimodal data for future personalized applications. Moreover, the majority of FL approaches still rely on labeled data at the client side, which is often constrained by the inability of users to self-annotate their data in real-world applications. In light of these limitations, we propose a novel multimodal FL framework, namely FedMEKT, based on a semi-supervised learning approach to leverage representations from different modalities. To address the challenges of modality discrepancy and labeled data constraints in existing FL systems, our proposed FedMEKT framework comprises local multimodal autoencoder learning, generalized multimodal autoencoder construction, and generalized classifier learning. Bringing this concept into the proposed framework, we develop a distillation-based multimodal embedding knowledge transfer mechanism which allows the server and clients to exchange joint multimodal embedding knowledge extracted from a multimodal proxy dataset. Specifically, our FedMEKT iteratively updates the generalized global encoders with joint multimodal embedding knowledge from participating clients through upstream and downstream multimodal embedding knowledge transfer for local learning. Through extensive experiments on four multimodal datasets, we demonstrate that FedMEKT not only achieves superior global encoder performance in linear evaluation but also guarantees user privacy for personal data and model parameters while demanding less communication cost than other baselines.
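The abstract's core mechanism -- clients share embeddings of a common proxy dataset rather than raw data, the server aggregates them into joint embedding knowledge, and clients distill that knowledge back into their local encoders -- can be illustrated with a heavily simplified sketch. Everything below is hypothetical: linear maps stand in for the paper's multimodal autoencoders, plain averaging stands in for the server-side aggregation, and all names (`encode`, `distill_step`, etc.) are invented for illustration, not taken from the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared proxy dataset (toy stand-in: 64 samples, 8 features of one modality).
X_proxy = rng.normal(size=(64, 8))

def encode(W, X):
    # Toy linear encoder standing in for a multimodal autoencoder's encoder.
    return X @ W

def joint_embedding(weights, X):
    # Server side: aggregate client embeddings into joint embedding knowledge.
    return np.mean([encode(W, X) for W in weights], axis=0)

def distill_step(W, X, target, lr=0.5):
    # One distillation step pulling local embeddings toward the joint ones
    # ("downstream" transfer): gradient step on 0.5 * ||XW - target||^2 / n.
    grad = X.T @ (encode(W, X) - target) / len(X)
    return W - lr * grad

def spread(weights, X):
    # Largest distance between any client's embeddings and the joint embedding.
    j = joint_embedding(weights, X)
    return max(np.linalg.norm(encode(W, X) - j) for W in weights)

clients = [rng.normal(size=(8, 4)) for _ in range(3)]  # three local encoders

initial = spread(clients, X_proxy)
for _ in range(50):
    # Upstream: clients send proxy-data embeddings, never raw data or labels.
    target = joint_embedding(clients, X_proxy)
    # Downstream: each client distills the joint embeddings into its encoder.
    clients = [distill_step(W, X_proxy, target) for W in clients]
final = spread(clients, X_proxy)

print(f"embedding spread: {initial:.2f} -> {final:.4f}")
```

The sketch shows the privacy-relevant design choice the abstract highlights: only embeddings of the shared proxy data cross the network, so local data and (in the paper, also model parameters) stay on the client; the clients' encoders nonetheless converge toward producing consistent joint embeddings.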

Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.