Multimodal Fusion for Enhanced Semantic Segmentation in Brain Tumor Imaging: Integrating Deep Learning and Guided Filtering Via Advanced 3D Semantic Segmentation Architectures

IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic) | International Journal of Imaging Systems and Technology | Pub Date: 2024-08-12 | DOI: 10.1002/ima.23152
Abbadullah .H Saleh, Ümit Atila, Oğuzhan Menemencioğlu
{"title":"多模态融合增强脑肿瘤成像中的语义分割:通过先进的三维语义分割架构整合深度学习和引导式过滤技术","authors":"Abbadullah .H Saleh,&nbsp;Ümit Atila,&nbsp;Oğuzhan Menemencioğlu","doi":"10.1002/ima.23152","DOIUrl":null,"url":null,"abstract":"<p>Brain tumor segmentation is paramount in medical diagnostics. This study presents a multistage segmentation model consisting of two main steps. First, the fusion of magnetic resonance imaging (MRI) modalities creates new and more effective tumor imaging modalities. Second, the semantic segmentation of the original and fused modalities, utilizing various modified architectures of the U-Net model. In the first step, a residual network with multi-scale backbone architecture (Res2Net) and guided filter are employed for pixel-by-pixel image fusion tasks without requiring any training or learning process. This method captures both detailed and base elements from the multimodal images to produce better and more informative fused images that significantly enhance the segmentation process. Many fusion scenarios were performed and analyzed, revealing that the best fusion results are attained when combining T2-weighted (T2) with fluid-attenuated inversion recovery (FLAIR) and T1-weighted contrast-enhanced (T1CE) with FLAIR modalities. In the second step, several models, including the U-Net and its many modifications (adding attention layers, residual connections, and depthwise separable connections), are trained using both the original and fused modalities. Further, a “Model Selection-based” fusion of these individual models is also considered for more enhancement. In the preprocessing step, the images are resized by cropping them to decrease the pixel count and minimize background interference. Experiments utilizing the brain tumor segmentation (BraTS) 2020 dataset were performed to verify the efficiency and accuracy of the proposed methodology. The “Model Selection-based” fusion model achieved an average Dice score of 88.4%, an individual score of 91.1% for the whole tumor (WT) class, an average sensitivity score of 86.26%, and a specificity score of 91.7%. These results prove the robustness and high performance of the proposed methodology compared to other state-of-the-art methods.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23152","citationCount":"0","resultStr":"{\"title\":\"Multimodal Fusion for Enhanced Semantic Segmentation in Brain Tumor Imaging: Integrating Deep Learning and Guided Filtering Via Advanced 3D Semantic Segmentation Architectures\",\"authors\":\"Abbadullah .H Saleh,&nbsp;Ümit Atila,&nbsp;Oğuzhan Menemencioğlu\",\"doi\":\"10.1002/ima.23152\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Brain tumor segmentation is paramount in medical diagnostics. This study presents a multistage segmentation model consisting of two main steps. First, the fusion of magnetic resonance imaging (MRI) modalities creates new and more effective tumor imaging modalities. Second, the semantic segmentation of the original and fused modalities, utilizing various modified architectures of the U-Net model. In the first step, a residual network with multi-scale backbone architecture (Res2Net) and guided filter are employed for pixel-by-pixel image fusion tasks without requiring any training or learning process. 
This method captures both detailed and base elements from the multimodal images to produce better and more informative fused images that significantly enhance the segmentation process. Many fusion scenarios were performed and analyzed, revealing that the best fusion results are attained when combining T2-weighted (T2) with fluid-attenuated inversion recovery (FLAIR) and T1-weighted contrast-enhanced (T1CE) with FLAIR modalities. In the second step, several models, including the U-Net and its many modifications (adding attention layers, residual connections, and depthwise separable connections), are trained using both the original and fused modalities. Further, a “Model Selection-based” fusion of these individual models is also considered for more enhancement. In the preprocessing step, the images are resized by cropping them to decrease the pixel count and minimize background interference. Experiments utilizing the brain tumor segmentation (BraTS) 2020 dataset were performed to verify the efficiency and accuracy of the proposed methodology. The “Model Selection-based” fusion model achieved an average Dice score of 88.4%, an individual score of 91.1% for the whole tumor (WT) class, an average sensitivity score of 86.26%, and a specificity score of 91.7%. These results prove the robustness and high performance of the proposed methodology compared to other state-of-the-art methods.</p>\",\"PeriodicalId\":14027,\"journal\":{\"name\":\"International Journal of Imaging Systems and Technology\",\"volume\":\"34 5\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23152\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Imaging Systems and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ima.23152\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.23152","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract


Brain tumor segmentation is paramount in medical diagnostics. This study presents a multistage segmentation model consisting of two main steps. First, the fusion of magnetic resonance imaging (MRI) modalities creates new and more effective tumor imaging modalities. Second, the original and fused modalities are semantically segmented using various modified architectures of the U-Net model. In the first step, a residual network with a multi-scale backbone architecture (Res2Net) and a guided filter are employed for pixel-by-pixel image fusion without requiring any training or learning process. This method captures both detail and base elements from the multimodal images to produce more informative fused images that significantly enhance the segmentation process. Many fusion scenarios were evaluated, revealing that the best fusion results are attained when combining T2-weighted (T2) with fluid-attenuated inversion recovery (FLAIR) and T1-weighted contrast-enhanced (T1CE) with FLAIR modalities. In the second step, several models, including the U-Net and many modifications of it (adding attention layers, residual connections, and depthwise separable convolutions), are trained using both the original and fused modalities. In addition, a "Model Selection-based" fusion of these individual models is considered for further enhancement. In the preprocessing step, the images are cropped to decrease the pixel count and minimize background interference. Experiments on the brain tumor segmentation (BraTS) 2020 dataset verify the efficiency and accuracy of the proposed methodology. The "Model Selection-based" fusion model achieved an average Dice score of 88.4%, an individual score of 91.1% for the whole tumor (WT) class, an average sensitivity of 86.26%, and a specificity of 91.7%. These results demonstrate the robustness and high performance of the proposed methodology compared with other state-of-the-art methods.
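Since the abstract is explicit that the fusion step is training-free and pixel-by-pixel, a minimal two-scale guided-filter fusion sketch may help fix ideas. It follows the classic base/detail scheme (the guided filter in the sense of He et al.); the Res2Net-derived saliency weights are replaced here by a simple local-contrast weight, and every name and parameter (`radius`, `eps`, window sizes) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving filtering of `src` steered by `guide` (He et al.)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse_pair(mod_a, mod_b):
    """Fuse two co-registered, intensity-normalized slices (floats in [0, 1])."""
    # Two-scale decomposition: base = local mean, detail = residual.
    base_a, base_b = uniform_filter(mod_a, 31), uniform_filter(mod_b, 31)
    det_a, det_b = mod_a - base_a, mod_b - base_b
    # Binary weight map from local contrast (a stand-in for a learned saliency map).
    w = (uniform_filter(np.abs(det_a), 7) >= uniform_filter(np.abs(det_b), 7)).astype(float)
    # Guided-filter refinement so the weights follow the structures of the inputs:
    # a large, smooth filter for the base layer, a small, sharp one for the detail layer.
    w_base = np.clip(guided_filter(mod_a, w, radius=20, eps=1e-1), 0, 1)
    w_det = np.clip(guided_filter(mod_a, w, radius=4, eps=1e-4), 0, 1)
    fused_base = w_base * base_a + (1 - w_base) * base_b
    fused_det = w_det * det_a + (1 - w_det) * det_b
    return np.clip(fused_base + fused_det, 0.0, 1.0)

# e.g., fused = fuse_pair(t2_slice, flair_slice)  # the T2 + FLAIR scenario
```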
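The U-Net variants listed in the abstract can likewise be sketched as building blocks. The PyTorch fragment below shows one plausible 3D form of the three modifications named (attention layers, residual connections, depthwise separable convolutions); the channel counts, normalization choices, and gate placement are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SepConvResidualBlock3D(nn.Module):
    """Two 3x3x3 depthwise separable convolutions with an identity shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        def sep_conv(ci, co):
            return nn.Sequential(
                nn.Conv3d(ci, ci, 3, padding=1, groups=ci, bias=False),  # depthwise
                nn.Conv3d(ci, co, 1, bias=False),                        # pointwise
                nn.BatchNorm3d(co),
            )
        self.body = nn.Sequential(sep_conv(in_ch, out_ch),
                                  nn.ReLU(inplace=True),
                                  sep_conv(out_ch, out_ch))
        self.skip = nn.Conv3d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))  # residual connection

class AttentionGate3D(nn.Module):
    """Additive attention gate on a skip connection (Attention U-Net style).
    `g` (the decoder signal) is assumed already resampled to `x`'s spatial size."""
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.wx = nn.Conv3d(x_ch, inter_ch, 1, bias=False)
        self.wg = nn.Conv3d(g_ch, inter_ch, 1, bias=False)
        self.psi = nn.Sequential(nn.Conv3d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, x, g):
        alpha = self.psi(torch.relu(self.wx(x) + self.wg(g)))  # (N,1,D,H,W) in [0,1]
        return x * alpha  # suppress irrelevant encoder features before concatenation
```

In a full network, `SepConvResidualBlock3D` would stand in for the plain double-convolution blocks, and `AttentionGate3D` would filter each encoder skip before it is concatenated with the upsampled decoder features.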
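Finally, the reported metrics, and one possible reading of the "Model Selection-based" fusion (keep, per tumor class, the prediction of the model that validated best on that class), can be written down directly. The smoothing constant and the selection rule are assumptions, since the abstract does not spell them out:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def sensitivity(pred, gt, eps=1e-7):
    """True-positive rate: fraction of tumor voxels recovered."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (np.logical_and(pred, gt).sum() + eps) / (gt.sum() + eps)

def specificity(pred, gt, eps=1e-7):
    """True-negative rate: fraction of background voxels kept as background."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (np.logical_and(~pred, ~gt).sum() + eps) / ((~gt).sum() + eps)

def model_selection_fusion(preds, val_dice):
    """preds[model][cls] -> binary mask; val_dice[model][cls] -> validation Dice.
    For each class, keep the prediction of the model with the best validation
    Dice on that class (a hypothetical reading of the paper's selection rule)."""
    classes = next(iter(preds.values())).keys()
    return {cls: preds[max(val_dice, key=lambda m: val_dice[m][cls])][cls]
            for cls in classes}
```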

Source journal: International Journal of Imaging Systems and Technology (Engineering Technology - Imaging Science & Photographic Technology)
CiteScore: 6.90
Self-citation rate: 6.10%
Articles per year: 138
Review time: 3 months
Journal description: The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals. IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging. The journal is also open to imaging studies of the human body and on animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, replication studies as well as negative results are also considered. The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS, etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision, based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.
Latest articles from this journal:
Predicting the Early Detection of Breast Cancer Using Hybrid Machine Learning Systems and Thermographic Imaging
CATNet: A Cross Attention and Texture-Aware Network for Polyp Segmentation
VMC-UNet: A Vision Mamba-CNN U-Net for Tumor Segmentation in Breast Ultrasound Image
Suppression of the Tissue Component With the Total Least-Squares Algorithm to Improve Second Harmonic Imaging of Ultrasound Contrast Agents
Segmentation and Classification of Breast Masses From the Whole Mammography Images Using Transfer Learning and BI-RADS Characteristics