Federated Cross-Incremental Self-Supervised Learning for Medical Image Segmentation.

IF 10.2 | CAS Tier 1 (Computer Science) | JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2024-10-14 | DOI: 10.1109/tnnls.2024.3469962
Fan Zhang, Huiying Liu, Qing Cai, Chun-Mei Feng, Binglu Wang, Shanshan Wang, Junyu Dong, David Zhang
{"title":"Federated Cross-Incremental Self-Supervised Learning for Medical Image Segmentation.","authors":"Fan Zhang,Huiying Liu,Qing Cai,Chun-Mei Feng,Binglu Wang,Shanshan Wang,Junyu Dong,David Zhang","doi":"10.1109/tnnls.2024.3469962","DOIUrl":null,"url":null,"abstract":"Federated cross learning has shown impressive performance in medical image segmentation. However, it encounters the catastrophic forgetting issue caused by data heterogeneity across different clients and is particularly pronounced when simultaneously facing pixelwise label deficiency problem. In this article, we propose a novel federated cross-incremental self-supervised learning method, coined FedCSL, which not only can enable any client in the federation incrementally yet effectively learn from others without inducing knowledge forgetting or requiring massive labeled samples, but also preserve maximum data privacy. Specifically, to overcome the catastrophic forgetting issue, a novel cross-incremental collaborative distillation (CCD) mechanism is proposed, which distills explicit knowledge learned from previous clients to subsequent clients based on secure multiparty computation (MPC). Besides, an effective retrospect mechanism is designed to rearrange the training sequence of clients per round, further releasing the power of CCD by enforcing interclient knowledge propagation. In addition, to alleviate the need of large-scale densely annotated pretraining medical datasets, we also propose a two-stage training framework, in which federated cross-incremental self-supervised pretraining paradigm first extracts robust yet general image-level patterns across multi-institutional data silos via a novel round-robin distributed masked image modeling (MIM) pipeline; then, the resulting visual concepts, e.g., semantics, are transferred to the federated cross-incremental supervised fine-tuning paradigm, favoring various cross-silo medical image segmentation tasks. The experimental results on public datasets demonstrate the effectiveness of the proposed method as well as the consistently superior performance of our method over most state-of-the-art methods quantitatively and qualitatively.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":null,"pages":null},"PeriodicalIF":10.2000,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2024.3469962","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Federated cross learning has shown impressive performance in medical image segmentation. However, it suffers from catastrophic forgetting caused by data heterogeneity across clients, and this issue becomes particularly pronounced when the pixelwise label deficiency problem arises at the same time. In this article, we propose a novel federated cross-incremental self-supervised learning method, coined FedCSL, which not only enables any client in the federation to learn incrementally yet effectively from the others, without inducing knowledge forgetting or requiring massive labeled samples, but also preserves maximum data privacy. Specifically, to overcome the catastrophic forgetting issue, a novel cross-incremental collaborative distillation (CCD) mechanism is proposed, which distills explicit knowledge learned from previous clients to subsequent clients based on secure multiparty computation (MPC). Besides, an effective retrospect mechanism is designed to rearrange the training sequence of clients in each round, further releasing the power of CCD by enforcing interclient knowledge propagation. In addition, to alleviate the need for large-scale, densely annotated medical pretraining datasets, we also propose a two-stage training framework: a federated cross-incremental self-supervised pretraining paradigm first extracts robust yet general image-level patterns across multi-institutional data silos via a novel round-robin distributed masked image modeling (MIM) pipeline; the resulting visual concepts, e.g., semantics, are then transferred to a federated cross-incremental supervised fine-tuning paradigm, benefiting various cross-silo medical image segmentation tasks. Experimental results on public datasets demonstrate the effectiveness of the proposed method and its consistently superior performance over most state-of-the-art methods, both quantitatively and qualitatively.
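To make the described training flow concrete, below is a minimal, illustrative sketch of the sequential-distillation idea behind CCD, with a simple reshuffle standing in for the retrospect mechanism. Everything here is an assumption for illustration: the names (ccd_round, distill_loss, alpha, temperature) and the client interface are hypothetical, and the MPC layer the paper uses to protect the exchanged knowledge is omitted entirely. This is a sketch of the general technique, not the authors' implementation.

# Hypothetical sketch of cross-incremental collaborative distillation (CCD).
# Within a round, clients train in sequence; each client distills soft
# predictions from the model state left by the previous client to limit
# forgetting. The MPC protocol from the paper is NOT modeled here.

import copy
import random
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened student and teacher predictions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    log_student = F.log_softmax(student_logits / t, dim=1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

def ccd_round(model, clients, alpha=0.5, temperature=2.0):
    """One sequential training round with cross-incremental distillation.

    `clients` is a hypothetical list of objects exposing `.loader`
    (batches of images and segmentation labels) and `.optimizer(params)`.
    """
    random.shuffle(clients)          # stand-in for the retrospect reordering
    teacher = None                   # first client has no predecessor
    for client in clients:
        opt = client.optimizer(model.parameters())
        for images, labels in client.loader:
            logits = model(images)
            loss = F.cross_entropy(logits, labels)
            if teacher is not None:  # distill knowledge left by predecessor
                with torch.no_grad():
                    teacher_logits = teacher(images)
                loss = loss + alpha * distill_loss(
                    logits, teacher_logits, temperature)
            opt.zero_grad()
            loss.backward()
            opt.step()
        teacher = copy.deepcopy(model).eval()  # freeze state for successor
    return model

In the paper's full two-stage pipeline, a scheme of this sequential flavor would first run with a masked-image-modeling objective for label-free pretraining, and only afterward with a supervised segmentation loss like the one shown above.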
Source Journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
CiteScore: 23.80
Self-citation rate: 9.60%
Publication volume: 2102
Review time: 3-8 weeks
Journal Introduction: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.
Latest Articles from This Journal
FMCNet+: Feature-Level Modality Compensation for Visible-Infrared Person Re-Identification
Dual-Decoupling With Frequency-Spatial Domains for Image Manipulation Localization
Multistability of Almost Periodic Solutions for Fuzzy Competitive NNs With Time-Varying Delays
Federated Cross-Incremental Self-Supervised Learning for Medical Image Segmentation
Event-Assisted Recurrent Network for Arbitrary-Temporal-Scale Blurry Image Unfolding