Automated Segmentation of Brain Gliomas in Multimodal MRI Data

International Journal of Imaging Systems and Technology · Impact Factor: 3.0 · JCR Q2 (Engineering, Electrical & Electronic) · CAS Zone 4 (Computer Science) · Publication date: 2024-06-27 · DOI: 10.1002/ima.23128
Changxiong Xie, Jianming Ye, Xiaofei Ma, Leshui Dong, Guohua Zhao, Jingliang Cheng, Guang Yang, Xiaobo Lai
{"title":"Automated Segmentation of Brain Gliomas in Multimodal MRI Data","authors":"Changxiong Xie,&nbsp;Jianming Ye,&nbsp;Xiaofei Ma,&nbsp;Leshui Dong,&nbsp;Guohua Zhao,&nbsp;Jingliang Cheng,&nbsp;Guang Yang,&nbsp;Xiaobo Lai","doi":"10.1002/ima.23128","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Brain gliomas, common in adults, pose significant diagnostic challenges. Accurate segmentation from multimodal magnetic resonance imaging (MRI) scans is critical for effective treatment planning. Traditional manual segmentation methods, labor-intensive and error-prone, often lead to inconsistent diagnoses. To overcome these limitations, our study presents a sophisticated framework for the automated segmentation of brain gliomas from multimodal MRI images. This framework consists of three integral components: a 3D UNet, a classifier, and a Classifier Weight Transformer (CWT). The 3D UNet, acting as both an encoder and decoder, is instrumental in extracting comprehensive features from MRI scans. The classifier, employing a streamlined 1 × 1 convolutional architecture, performs detailed pixel-wise classification. The CWT integrates self-attention mechanisms through three linear layers, a multihead attention module, and layer normalization, dynamically refining the classifier's parameters based on the features extracted by the 3D UNet, thereby improving segmentation accuracy. Our model underwent a two-stage training process for maximum efficiency: in the first stage, supervised learning was used to pre-train the encoder and decoder, focusing on robust feature representation. In the second stage, meta-training was applied to the classifier, with the encoder and decoder remaining unchanged, ensuring precise fine-tuning based on the initially developed features. Extensive evaluation of datasets such as BraTS2019, BraTS2020, BraTS2021, and a specialized private dataset (ZZU) underscored the robustness and clinical potential of our framework, highlighting its superiority and competitive advantage over several state-of-the-art approaches across various segmentation metrics in training and validation sets.</p>\n </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.23128","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

Brain gliomas, common in adults, pose significant diagnostic challenges. Accurate segmentation from multimodal magnetic resonance imaging (MRI) scans is critical for effective treatment planning. Traditional manual segmentation methods, labor-intensive and error-prone, often lead to inconsistent diagnoses. To overcome these limitations, our study presents a sophisticated framework for the automated segmentation of brain gliomas from multimodal MRI scans. This framework consists of three integral components: a 3D UNet, a classifier, and a Classifier Weight Transformer (CWT). The 3D UNet, acting as both an encoder and decoder, is instrumental in extracting comprehensive features from MRI scans. The classifier, employing a streamlined 1 × 1 convolutional architecture, performs detailed pixel-wise classification. The CWT integrates self-attention mechanisms through three linear layers, a multihead attention module, and layer normalization, dynamically refining the classifier's parameters based on the features extracted by the 3D UNet, thereby improving segmentation accuracy. Our model underwent a two-stage training process for maximum efficiency: in the first stage, supervised learning was used to pre-train the encoder and decoder, focusing on robust feature representation. In the second stage, meta-training was applied to the classifier, with the encoder and decoder frozen, ensuring precise fine-tuning based on the initially learned features. Extensive evaluation on the BraTS2019, BraTS2020, and BraTS2021 datasets and a specialized private dataset (ZZU) underscored the robustness and clinical potential of our framework, highlighting its advantage over several state-of-the-art approaches across various segmentation metrics on the training and validation sets.
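For readers who want a concrete picture of the weight-refinement idea described in the abstract, below is a minimal PyTorch sketch of a CWT-style block. It is not the authors' implementation: the module names, tensor shapes, the residual update, and the choice to use the 1 × 1 classifier weights as attention queries over the flattened 3D UNet decoder features are illustrative assumptions; only the stated ingredients (three linear layers, a multi-head attention module, and layer normalization refining the classifier parameters) come from the abstract.

```python
# Hypothetical sketch of a Classifier Weight Transformer (CWT) block.
# Shapes, names, and wiring are assumptions for illustration only.
import torch
import torch.nn as nn


class ClassifierWeightTransformer(nn.Module):
    def __init__(self, feat_dim: int = 64, num_classes: int = 4, num_heads: int = 4):
        super().__init__()
        # Three linear layers: query from classifier weights, key/value from features.
        self.to_q = nn.Linear(feat_dim, feat_dim)
        self.to_k = nn.Linear(feat_dim, feat_dim)
        self.to_v = nn.Linear(feat_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, classifier_weight: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        """classifier_weight: (num_classes, feat_dim) weights of the 1x1 conv classifier.
        features: (batch, num_voxels, feat_dim) decoder features flattened over the volume.
        Returns refined weights with the same shape as classifier_weight."""
        b = features.size(0)
        q = self.to_q(classifier_weight).unsqueeze(0).expand(b, -1, -1)  # (b, C, d)
        k = self.to_k(features)                                          # (b, N, d)
        v = self.to_v(features)                                          # (b, N, d)
        refined, _ = self.attn(q, k, v)                                  # (b, C, d)
        # Assumed residual update + layer norm, averaged over the image batch.
        return self.norm(classifier_weight + refined.mean(dim=0))


# Minimal usage: refine the classifier weights, then apply them as a pointwise
# (1x1x1) convolution over the decoder feature map to get per-voxel class logits.
if __name__ == "__main__":
    feat_dim, num_classes = 64, 4
    decoder_feats = torch.randn(2, feat_dim, 8, 16, 16)   # (B, d, D, H, W) from the 3D UNet
    w = torch.randn(num_classes, feat_dim)                 # 1x1 conv classifier weights
    cwt = ClassifierWeightTransformer(feat_dim, num_classes)
    flat = decoder_feats.flatten(2).transpose(1, 2)        # (B, N, d)
    w_refined = cwt(w, flat)                               # (C, d)
    logits = torch.einsum("bdxyz,cd->bcxyz", decoder_feats, w_refined)
    print(logits.shape)                                    # torch.Size([2, 4, 8, 16, 16])
```

Under these assumptions, the paper's second training stage could be mimicked by freezing the 3D UNet parameters and optimizing only the CWT and classifier weights, but the exact meta-training procedure is described only at the level of the abstract.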

Source Journal

International Journal of Imaging Systems and Technology (Engineering Technology: Imaging Science & Photographic Technology)

CiteScore: 6.90
Self-citation rate: 6.10%
Annual publications: 138
Review turnaround: 3 months
Journal description: The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals. IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging. The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, replication studies, as well as negative results are also considered.

The scope of the journal includes, but is not limited to, the following in the context of biomedical research: imaging and neuro-imaging modalities (structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS, etc.); neuromodulation and brain stimulation techniques such as TMS and tDCS; software and hardware for imaging, especially related to human and animal health; image segmentation in normal and clinical populations; pattern analysis and classification using machine learning techniques; computational modeling and analysis; brain connectivity and connectomics; systems-level characterization of brain function; neural networks and neurorobotics; computer vision based on human/animal physiology; brain-computer interface (BCI) technology; big data, databasing, and data mining.
Latest Articles in This Journal

Predicting the Early Detection of Breast Cancer Using Hybrid Machine Learning Systems and Thermographic Imaging
CATNet: A Cross Attention and Texture-Aware Network for Polyp Segmentation
VMC-UNet: A Vision Mamba-CNN U-Net for Tumor Segmentation in Breast Ultrasound Image
Suppression of the Tissue Component With the Total Least-Squares Algorithm to Improve Second Harmonic Imaging of Ultrasound Contrast Agents
Segmentation and Classification of Breast Masses From the Whole Mammography Images Using Transfer Learning and BI-RADS Characteristics