SEMC-Net: A Shared-Encoder Multi-Class Learner

Rahul Jain, Satvik Dixit, Vikas Kumar, Bindu Verma
DOI: 10.1109/INCET57972.2023.10170284
Published in: 2023 4th International Conference for Emerging Technology (INCET)
Publication date: 2023-05-26
Citations: 0

Abstract

Brain tumour segmentation is a crucial task in medical imaging that involves identifying and delineating the boundaries of tumour tissues in the brain from MRI scans. Accurate segmentation plays an indispensable role in the diagnosis, treatment planning, and monitoring of patients with brain tumours. This study presents a novel approach to addressing the class imbalance prevalent in brain tumour segmentation using a shared-encoder multi-class segmentation framework. The proposed method trains multiple decoder class learners, each designed to learn the feature representation of a certain class subset, together with a shared encoder that extracts features common to all classes. The outputs of the complement-class learners are combined and propagated to a meta-learner to obtain the final segmentation map. We evaluate our method on a publicly available brain tumour segmentation dataset (BraTS20) and assess performance against a 2D U-Net model trained on all classes, using standard evaluation metrics for multi-class semantic segmentation. The IoU and DSC scores for the proposed architecture stand at 0.644 and 0.731, respectively, compared to 0.604 and 0.690 obtained by the base model. Furthermore, our model exhibits significant performance boosts in individual classes, as evidenced by DSC scores of 0.588, 0.734, and 0.684 for the necrotic tumour core, peritumoral edema, and GD-enhancing tumour classes, respectively. In contrast, the 2D U-Net model yields DSC scores of 0.554, 0.699, and 0.641 for the same classes. The approach exhibits notable performance gains in segmenting the T1-Gd class, which not only poses a formidable challenge in terms of segmentation but also holds paramount clinical significance for radiation therapy.
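The IoU and DSC figures quoted above are the standard overlap metrics for multi-class segmentation. As a minimal sketch (not the authors' implementation), the per-class computation on integer label maps can be written as follows; the class ids used here (1 = necrotic core, 2 = peritumoral edema, 3 = GD-enhancing tumour) are an assumed labelling, not taken from the paper:

```python
# Illustrative per-class IoU and Dice (DSC) on flat integer label maps.
# Class ids are an assumption for this sketch:
#   0 = background, 1 = necrotic core, 2 = edema, 3 = enhancing tumour.

def iou_and_dice(pred, target, cls):
    """Return (IoU, DSC) for one class given flat sequences of integer labels."""
    inter = sum(1 for p, t in zip(pred, target) if p == cls and t == cls)
    p_count = sum(1 for p in pred if p == cls)      # predicted pixels of cls
    t_count = sum(1 for t in target if t == cls)    # ground-truth pixels of cls
    union = p_count + t_count - inter
    iou = inter / union if union else 1.0           # empty class: score 1 by convention
    dsc = 2 * inter / (p_count + t_count) if (p_count + t_count) else 1.0
    return iou, dsc

# Toy 8-pixel example for the edema class (cls=2):
pred   = [0, 1, 1, 2, 3, 3, 0, 2]
target = [0, 1, 2, 2, 3, 0, 0, 2]
iou, dsc = iou_and_dice(pred, target, cls=2)
```

The class-averaged scores reported in the abstract would follow by averaging these per-class values over the tumour classes.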