Deep learning for medical imaging has shown great potential to improve patient outcomes through highly accurate disease diagnosis. However, a major barrier to the widespread clinical adoption of such models is data accessibility: traditional centralised training requires pooling patient data, which conflicts with the General Data Protection Regulation (GDPR). Federated Learning (FL) addresses this issue as a decentralised alternative that enables data owners to train a model collaboratively without sharing any private data. Despite its significance in healthcare, limited research has explored FL for medical imaging, particularly for multimodal brain tumour segmentation, owing to challenges such as data heterogeneity.
In this study, we present Federated E-CATBraTS, a federated deep learning model derived from the existing E-CATBraTS framework and designed to segment brain tumours from multimodal magnetic resonance imaging (MRI) while preserving data privacy. Our framework introduces a novel aggregation method, DaQAvg, which combines client model weights according to both dataset size and data quality, making it resilient to corrupted medical images.
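To make the aggregation rule concrete, the sketch below shows one plausible realisation of size- and quality-weighted federated averaging. The exact DaQAvg formula is not given here, so the `quality_scores` input and the product weighting of size and quality are assumptions for illustration only, not the published method.

```python
# Minimal sketch of a data-size- and quality-weighted aggregation rule in the
# spirit of DaQAvg. ASSUMPTION: each client k is weighted by n_k * q_k, where
# n_k is its dataset size and q_k an estimated data-quality score; the actual
# DaQAvg weighting may differ.
import numpy as np

def daq_avg(client_weights, data_sizes, quality_scores):
    """Aggregate per-client model weights (dicts of layer -> ndarray),
    weighting each client by dataset size times data-quality score."""
    raw = np.array(data_sizes, dtype=float) * np.array(quality_scores, dtype=float)
    coeffs = raw / raw.sum()  # normalise so coefficients sum to 1
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = sum(c * w[name] for c, w in zip(coeffs, client_weights))
    return aggregated

# Toy usage: three clients, the third holding degraded (noisy) scans.
clients = [
    {"conv1": np.ones((3, 3))},      # client 0, clean data
    {"conv1": 2 * np.ones((3, 3))},  # client 1, clean data
    {"conv1": 5 * np.ones((3, 3))},  # client 2, corrupted data
]
agg = daq_avg(clients, data_sizes=[100, 200, 150], quality_scores=[0.9, 0.95, 0.3])
print(agg["conv1"][0, 0])  # ~2.14 vs. ~2.78 under size-only FedAvg weighting
```

Compared with plain FedAvg, which weights clients by dataset size alone, this scheme down-weights the corrupted client, which is one way such an aggregator could remain robust when some sites contribute degraded images.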
We evaluated Federated E-CATBraTS on two publicly available datasets, UPenn-GBM and UCSF-PDGM, including a degraded version of the latter to assess the efficacy of our aggregation method. The results show a 6% overall improvement over traditional centralised training. Furthermore, we conducted a comprehensive comparison against state-of-the-art FL aggregation algorithms, including FedAVG, FedProx and FedNova. While FedNova achieved the highest overall Dice similarity coefficient (DSC), DaQAvg showed superior robustness to noisy conditions, maintaining performance under variable data quality, a critical requirement in medical imaging.
