Fan Zhang, Huiying Liu, Qing Cai, Chun-Mei Feng, Binglu Wang, Shanshan Wang, Junyu Dong, David Zhang
{"title":"用于医学图像分割的联合交叉增量自监督学习","authors":"Fan Zhang,Huiying Liu,Qing Cai,Chun-Mei Feng,Binglu Wang,Shanshan Wang,Junyu Dong,David Zhang","doi":"10.1109/tnnls.2024.3469962","DOIUrl":null,"url":null,"abstract":"Federated cross learning has shown impressive performance in medical image segmentation. However, it encounters the catastrophic forgetting issue caused by data heterogeneity across different clients and is particularly pronounced when simultaneously facing pixelwise label deficiency problem. In this article, we propose a novel federated cross-incremental self-supervised learning method, coined FedCSL, which not only can enable any client in the federation incrementally yet effectively learn from others without inducing knowledge forgetting or requiring massive labeled samples, but also preserve maximum data privacy. Specifically, to overcome the catastrophic forgetting issue, a novel cross-incremental collaborative distillation (CCD) mechanism is proposed, which distills explicit knowledge learned from previous clients to subsequent clients based on secure multiparty computation (MPC). Besides, an effective retrospect mechanism is designed to rearrange the training sequence of clients per round, further releasing the power of CCD by enforcing interclient knowledge propagation. In addition, to alleviate the need of large-scale densely annotated pretraining medical datasets, we also propose a two-stage training framework, in which federated cross-incremental self-supervised pretraining paradigm first extracts robust yet general image-level patterns across multi-institutional data silos via a novel round-robin distributed masked image modeling (MIM) pipeline; then, the resulting visual concepts, e.g., semantics, are transferred to the federated cross-incremental supervised fine-tuning paradigm, favoring various cross-silo medical image segmentation tasks. 
The experimental results on public datasets demonstrate the effectiveness of the proposed method as well as the consistently superior performance of our method over most state-of-the-art methods quantitatively and qualitatively.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":null,"pages":null},"PeriodicalIF":10.2000,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Federated Cross-Incremental Self-Supervised Learning for Medical Image Segmentation.\",\"authors\":\"Fan Zhang,Huiying Liu,Qing Cai,Chun-Mei Feng,Binglu Wang,Shanshan Wang,Junyu Dong,David Zhang\",\"doi\":\"10.1109/tnnls.2024.3469962\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated cross learning has shown impressive performance in medical image segmentation. However, it encounters the catastrophic forgetting issue caused by data heterogeneity across different clients and is particularly pronounced when simultaneously facing pixelwise label deficiency problem. In this article, we propose a novel federated cross-incremental self-supervised learning method, coined FedCSL, which not only can enable any client in the federation incrementally yet effectively learn from others without inducing knowledge forgetting or requiring massive labeled samples, but also preserve maximum data privacy. Specifically, to overcome the catastrophic forgetting issue, a novel cross-incremental collaborative distillation (CCD) mechanism is proposed, which distills explicit knowledge learned from previous clients to subsequent clients based on secure multiparty computation (MPC). Besides, an effective retrospect mechanism is designed to rearrange the training sequence of clients per round, further releasing the power of CCD by enforcing interclient knowledge propagation. 
In addition, to alleviate the need of large-scale densely annotated pretraining medical datasets, we also propose a two-stage training framework, in which federated cross-incremental self-supervised pretraining paradigm first extracts robust yet general image-level patterns across multi-institutional data silos via a novel round-robin distributed masked image modeling (MIM) pipeline; then, the resulting visual concepts, e.g., semantics, are transferred to the federated cross-incremental supervised fine-tuning paradigm, favoring various cross-silo medical image segmentation tasks. The experimental results on public datasets demonstrate the effectiveness of the proposed method as well as the consistently superior performance of our method over most state-of-the-art methods quantitatively and qualitatively.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":10.2000,\"publicationDate\":\"2024-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/tnnls.2024.3469962\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning 
systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tnnls.2024.3469962","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Federated Cross-Incremental Self-Supervised Learning for Medical Image Segmentation.
Federated cross learning has shown impressive performance in medical image segmentation. However, it encounters the catastrophic forgetting issue caused by data heterogeneity across different clients, which is particularly pronounced when the pixelwise label deficiency problem arises simultaneously. In this article, we propose a novel federated cross-incremental self-supervised learning method, coined FedCSL, which not only enables any client in the federation to learn incrementally yet effectively from the others, without inducing knowledge forgetting or requiring massive labeled samples, but also preserves maximum data privacy. Specifically, to overcome the catastrophic forgetting issue, a novel cross-incremental collaborative distillation (CCD) mechanism is proposed, which distills explicit knowledge learned from previous clients to subsequent clients based on secure multiparty computation (MPC). Moreover, an effective retrospect mechanism is designed to rearrange the training sequence of the clients in each round, further releasing the power of CCD by enforcing interclient knowledge propagation. In addition, to alleviate the need for large-scale, densely annotated medical pretraining datasets, we also propose a two-stage training framework, in which a federated cross-incremental self-supervised pretraining paradigm first extracts robust yet general image-level patterns across multi-institutional data silos via a novel round-robin distributed masked image modeling (MIM) pipeline; the resulting visual concepts, e.g., semantics, are then transferred to the federated cross-incremental supervised fine-tuning paradigm, favoring various cross-silo medical image segmentation tasks. Experimental results on public datasets demonstrate the effectiveness of the proposed method and its consistently superior performance, both quantitatively and qualitatively, over most state-of-the-art methods.
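The MIM pretraining stage described above rests on a simple primitive: random patches of each image are hidden, and the model learns to reconstruct them from the visible context. The sketch below illustrates only that patch-masking step in plain Python; the function name, the zero placeholder for masked pixels, and the patch/ratio parameters are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def mask_patches(image, patch, ratio, seed=None):
    """Randomly mask square patches of a 2-D image (list of lists).

    In masked image modeling (MIM), a model is trained to reconstruct
    the masked regions from the visible ones; this sketch performs only
    the masking, zeroing out a `ratio` fraction of `patch` x `patch` tiles.
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    rows, cols = h // patch, w // patch
    n_mask = int(rows * cols * ratio)
    masked_ids = set(rng.sample(range(rows * cols), n_mask))
    out = [row[:] for row in image]  # copy so the input stays intact
    for idx in masked_ids:
        r, c = divmod(idx, cols)
        for i in range(r * patch, (r + 1) * patch):
            for j in range(c * patch, (c + 1) * patch):
                out[i][j] = 0  # masked-pixel placeholder
    return out, masked_ids
```

In the round-robin distributed variant the abstract describes, each client would apply such masking to its local images and the reconstruction objective would propagate from client to client across rounds.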
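The CCD mechanism transfers knowledge from previously trained clients to subsequent ones. A generic way to express such a transfer is a temperature-softened knowledge-distillation loss between a "teacher" (the preceding client's model outputs) and a "student" (the current client); the sketch below shows that standard loss only, as a stand-in: the exact form of the CCD term, and its MPC-secured exchange, are not specified in the abstract.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2.

    A generic knowledge-distillation objective (Hinton-style): the
    student is penalized for diverging from the teacher's soft targets.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T
```

The loss is zero when the student matches the teacher exactly and grows as their predictive distributions diverge, which is what lets knowledge learned by earlier clients constrain later ones without sharing raw data.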
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.